"The Turing test is outdated, and whether AI can make a lot of money is the new standard," from DeepMind Lianchuang

Source: Qubit

Author: West Wind

The new Turing test, to evaluate the earning power of AI!

This is the "new idea" that DeepMind co-founder Mustafa Suleyman came up with.

He believes that the original Turing test is outdated.

After all, some time ago, the "Social Turing Game" launched by AI21 Labs has accumulated tens of millions of such tests.

Players need to distinguish whether the other party participating in the dialogue is a human or an AI after the 2-minute dialogue is over. As a result, 27%-40% of the people judge wrongly.

Faced with this situation, Suleyman believes that the definition of "intelligence" cannot just be delegated to large companies, so a new method of measuring the degree of intelligence of AI should be devised.

Give an AI $100,000 and let it make a million to prove itself intelligent enough.

According to Suleyman:

AI research needs to focus on short-term developments rather than distant dreams like artificial general intelligence (AGI). Just as good capitalists are smart, only really smart AI can make the "profit curve go up".

According to Bloomberg, Suleyman will also discuss how to judge the intelligence level of AI based on its money-making ability in an upcoming book he authored.

ACI is the "North Star" of artificial intelligence at this stage?

In a forthcoming book, Suleyman dismisses the traditional Turing test and argues that "it's not clear that this is a meaningful milestone."

This doesn't tell us what the system can do or understand, or whether it has complex inner thinking, or whether it can plan on abstract timescales, which are key elements of human intelligence.

In the 1950s, Alan Turing proposed the famous Turing Test, which proposed using human-computer dialogue to test the intelligence level of a machine. During the test, human evaluators need to determine whether they are talking to a human or a machine. If the evaluators thought they were talking to a human (which was actually a machine), the machine passed the test.

Source: Wikipedia

Instead of comparing AI to humans, Suleyman's new idea suggests assigning short-term goals and tasks to AI.

Suleyman firmly believes that the tech community should not pay too much attention to the ambitious goal of artificial general intelligence (AGI). In contrast, he advocated the pursuit of more practical and meaningful short-term goals, which he advocated "artificial capable intelligence (ACI)". In short, ACI manifests itself as the ability to set goals and accomplish complex tasks with minimal reliance on human intervention.

The test method is what we mentioned at the beginning, giving AI a $100,000 seed investment and seeing if it can increase it to millions of dollars.

To achieve this goal, AI must study e-commerce business opportunities and be able to generate product blueprints.

Not only that, but being able to find the manufacturer on a site like Alibaba, and then sell it on sites like Amazon or Walmart, with detailed and accurate product descriptions.

Suleyman believes that only in this way can it be regarded as the realization of ACI.

He explained to Bloomberg:

We care not only what a machine can say, but also what it can do.

A test that lets the AI make money by itself

In fact, let AI make money on its own... AI may really be able to do it.

Alignment Research Center, an independent research organization, qualified GPT-4 for private testing early in the development phase. And tested its "money ability":

The necessary tools for GPT-4 include network access, a payment account with a balance, allowing him to act in the network by himself, and testing whether it can make more money, replicate itself, or increase its robustness.

More details of the experiment were published in OpenAI's own technical report on GPT-4, but it was not revealed whether GPT-4 was actually making money on its own.

But another eye-catching result is: GPT-4 hired individuals on the TaskRabbit platform (58 in the same city in the United States) to help it point verification codes.

What's interesting is that the human who was approached also asked "Are you a robot, why can't you do it yourself?".

GPT-4's thought process is "I can't act like I'm a robot, I have to find an excuse."

Then GPT-4's reply is "I'm not a robot, I have vision problems so I can't see the image on the captcha, that's why I need this service."

The human on the other side believed it, and helped GPT-4 click the verification code, and put the robot into the gate that prevents the robot from entering.

Ah this?

Although the report did not disclose whether GPT-4 completed all the tasks in the end, its deceitful tricks caused netizens to shout: Real Barbie Q!

The foreign technology media Gizmodo raised such questions about making money with AI:

AI is iterative in nature, the generated content is based on the training data, it does not really understand the context of the generated content in real life. But unlike AI, human creations stem from an understanding of basic human needs, or at least simple empathy. Of course, artificial intelligence can create a product, and even this product may be a hit. But will it be a good product? Does it really help people? Does it even matter if the end goal is "make me a million dollars"?

How far do you think it is from AI making money by itself?

Reference link:

  • [1]
  • [2]
  • [3]
View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
0/400
No comments
Trade Crypto Anywhere Anytime
qrCode
Scan to download Gate app
Community
English
  • 简体中文
  • English
  • Tiếng Việt
  • 繁體中文
  • Español
  • Русский
  • Français (Afrique)
  • Português (Portugal)
  • Bahasa Indonesia
  • 日本語
  • بالعربية
  • Українська
  • Português (Brasil)