Manus brings the dawn of AGI, but AI security is also worth pondering


Author: 0xResearcher

Manus has achieved SOTA (state-of-the-art) performance on the GAIA benchmark, outperforming OpenAI's models at the same level. In practical terms, it can independently complete complex tasks such as cross-border business negotiations, covering contract clause decomposition, strategic forecasting, scenario generation, and even coordination of legal and financial teams. Compared with traditional systems, Manus stands out for its dynamic objective decomposition, cross-modal reasoning, and memory-enhanced learning: it can break a large task into hundreds of executable sub-tasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve decision-making efficiency and reduce error rates.


Amid this rapid technological development, Manus has reignited industry debate over AI's future evolution: will a single AGI dominate the world, or will multi-agent systems (MAS) lead collaboratively?

It all starts with Manus's design concept, which implies two possibilities:

One is the AGI path: continuously raising the intelligence of a single agent until it approaches human-level general decision-making.

The other is the MAS path: acting as a super-coordinator that directs thousands of vertical-domain agents to work together.

On the surface this is a debate about divergent paths, but underneath it is about the fundamental tension in AI development: how to balance efficiency against security. As a single agent approaches AGI, the risk of black-box decision-making rises; multi-agent collaboration can spread risk, but communication latency may cause it to miss critical decision windows.

The evolution of Manus has quietly magnified the inherent risks of AI development:

  • Data privacy black holes: in medical scenarios, Manus needs real-time access to patients' genomic data; in financial negotiations, it may touch undisclosed corporate financials.
  • Algorithmic bias traps: in recruitment negotiations, Manus may suggest below-average salaries to candidates from specific ethnic groups; its misjudgment rate on emerging-industry terminology during legal contract review approaches 50%.
  • Adversarial attack vulnerabilities: hackers can implant specific voice frequencies that cause Manus to misjudge the counterparty's price range during a negotiation.

We must confront an uncomfortable pain point of AI systems: the smarter the system, the wider its attack surface.

Security, however, has long been a central term in Web3, and Vitalik's impossible triangle (a blockchain network cannot simultaneously achieve security, decentralization, and scalability) has spawned a variety of cryptographic approaches:

  • Zero-Trust Security Model: its core idea is "never trust, always verify"; no device is trusted by default, whether or not it sits inside the internal network. The model enforces strict identity verification and authorization for every access request to keep the system secure.
  • Decentralized Identity (DID): a set of identifier standards that lets entities obtain verifiable, persistent identities in a decentralized way, without a centralized registry. It enables a new self-sovereign model of digital identity and is a core component of Web3.
  • Fully Homomorphic Encryption (FHE): an advanced cryptographic technique that allows arbitrary computation on encrypted data without decrypting it. A third party can operate on ciphertext, and the decrypted result matches the same operation performed on plaintext. This property matters wherever computation must happen without exposing raw data, such as cloud computing and data outsourcing.

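The zero-trust principle above can be sketched in a few lines of Python: every inter-agent request carries a timestamped HMAC tag and is verified on arrival, regardless of where on the network it originates. The key, agent IDs, and skew window here are hypothetical illustration values, not part of any real protocol.

```python
import hashlib
import hmac
import time

SECRET = b"shared-signing-key"  # hypothetical per-service signing key


def sign_request(agent_id: str, payload: bytes, ts: int) -> str:
    """Produce an HMAC-SHA256 tag binding sender, timestamp, and payload."""
    msg = agent_id.encode() + b"|" + str(ts).encode() + b"|" + payload
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def verify_request(agent_id: str, payload: bytes, ts: int, tag: str,
                   max_skew: int = 30) -> bool:
    """Zero trust: verify every request, even from the 'internal' network."""
    if abs(time.time() - ts) > max_skew:
        return False  # stale or replayed request
    expected = sign_request(agent_id, payload, ts)
    return hmac.compare_digest(expected, tag)  # constant-time comparison


now = int(time.time())
tag = sign_request("agent-7", b"query", now)
assert verify_request("agent-7", b"query", now, tag)       # genuine request
assert not verify_request("agent-7", b"tampered", now, tag)  # modified payload
```

In a real deployment the shared key would be replaced by per-identity credentials (mTLS certificates or signed tokens), but the pattern is the same: no request is trusted on the basis of its network location.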
Across multiple bull markets, both zero-trust security models and DID have produced a number of breakout projects; some succeeded, others drowned in the crypto wave. FHE, the youngest of these cryptographic techniques, is also a powerful weapon against the security problems of the AI era.
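To make "computation on encrypted data" concrete, here is a toy Paillier cryptosystem in pure Python. Paillier is only additively homomorphic, a much weaker cousin of FHE, and the primes below are far too small for real security; the sketch exists solely to show that multiplying two ciphertexts yields an encryption of the sum of the plaintexts, without the evaluator ever seeing those plaintexts.

```python
import math
import secrets


def L(x: int, n: int) -> int:
    return (x - 1) // n


def keygen(p: int, q: int):
    """Paillier keys from two distinct primes (toy sizes only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # modular inverse of lambda mod n
    return (n,), (lam, mu, n)


def encrypt(pub, m: int) -> int:
    (n,) = pub
    n2 = n * n
    g = n + 1                     # standard simplification for g
    while True:
        r = secrets.randbelow(n)  # fresh randomness per ciphertext
        if r > 0 and math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(priv, c: int) -> int:
    lam, mu, n = priv
    return (L(pow(c, lam, n * n), n) * mu) % n


pub, priv = keygen(1789, 1867)    # toy primes, hopelessly small for real use
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % (pub[0] ** 2)  # homomorphic addition on ciphertexts
assert decrypt(priv, c_sum) == 100
```

Production FHE schemes (such as those implemented by ZAMA's libraries) support both addition and multiplication on ciphertexts, which is what makes arbitrary encrypted computation possible; the principle demonstrated here is the same.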

How can FHE help?

First, at the data level: all user input (including biometric features and speech patterns) is processed in encrypted form, and even Manus itself cannot decrypt the raw data. In a medical diagnosis scenario, for example, a patient's genomic data participates in the analysis entirely as ciphertext, preventing leakage of biological information.

Second, at the algorithmic level: with FHE-based "encrypted model training," even developers cannot peer into the AI's decision path.

Third, at the collaborative level: multiple agents communicate using threshold encryption, so compromising a single node does not leak global data. Even in supply-chain attack-and-defense exercises, attackers cannot assemble a complete business view after infiltrating several agents.
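The threshold idea can be sketched with Shamir secret sharing, one common building block of threshold encryption: a secret is split into n shares such that any k of them reconstruct it, while fewer than k reveal nothing. A minimal pure-Python sketch (the field modulus and share counts are illustration choices):

```python
import secrets

PRIME = 2**61 - 1  # a Mersenne prime used as the field modulus


def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]


def reconstruct(shares) -> int:
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        total = (total + yj * num * pow(den, -1, PRIME)) % PRIME
    return total


shares = split(123456789, k=3, n=5)       # 5 agents, threshold of 3
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789  # a different 3 also work
```

An attacker holding only one or two shares learns nothing about the secret, which is exactly the property that stops a single compromised agent from exposing the global state.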

Due to technical barriers, Web3 security may not touch most users directly, yet the indirect stakes are intricate. In this dark forest, those who do not arm themselves will never shed the identity of "leeks" (retail investors who get harvested).

  • uPort launched on the Ethereum mainnet in 2017, possibly the earliest decentralized identity (DID) project to reach mainnet.
  • On the zero-trust side, NKN launched its mainnet in 2019.
  • Mind Network is the first FHE project to go live on mainnet, and it has taken the lead in partnering with ZAMA, Google, DeepSeek, and others.

uPort and NKN are projects this author had never heard of; it seems security projects rarely attract speculators. Can Mind Network escape this curse and become a leader in the security field? Let's wait and see.

The future is already here. As AI approaches human intelligence, it increasingly needs non-human defense systems. The value of FHE lies not only in solving today's problems but in paving the way for the era of strong AI. On the treacherous road to AGI, FHE is not optional; it is a necessity for survival.
