Care About What a Machine Can Say, But 'Also Care About What It Can Do'

Mustafa Suleyman
co-founder of Inflection AI, co-founder and former Head of Applied AI of DeepMind

In 1950, British mathematician Alan Turing proposed a test to determine whether a computer can think like a human being.

The method involves human evaluators who conduct blind, text-only conversations with two subjects - one human and one machine. If the computer can trick the evaluators into thinking it is human, it passes the Turing Test.

When this happens, the computer is said to have achieved that amorphous quality we call intelligence.

More than half a century has passed, and Mustafa Suleyman, co-founder of Inflection AI and co-founder and former Head of Applied AI at DeepMind, thinks that Alan Turing's world-renowned test is no longer relevant.

Especially following the generative AI trend pioneered by the likes of OpenAI's ChatGPT.

According to Suleyman, generative AI tools are very close to passing this legendary threshold.

Mustafa Suleyman, co-founder of Inflection AI, co-founder and former Head of Applied AI of DeepMind, during the Bloomberg Technology Summit in San Francisco, U.S.

Writing in his book The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma, Suleyman said that the traditional Turing test is pointless:

"We don't just care about what a machine can say; we also care about what it can do."

This is because the test says nothing about what the system can do or understand, whether it has established complex inner monologues, or whether it can plan over abstract time horizons, abilities that are key to human intelligence.

In his proposal, Suleyman believes the tech industry will eventually achieve artificial general intelligence, or AGI, a term for algorithms with cognitive abilities that match or exceed those of humans.

But that is a long-term goal. In the meantime, he argues, people should focus on a more achievable and meaningful short-term goal, which he calls Artificial Capable Intelligence.

Also called ACI, it is an AI that can set goals and accomplish complex tasks with minimal human intervention.

As an example, the test he calls the "modern Turing test" could ask whether an AI can turn a $100,000 investment into $1 million.

As part of the test, the AI must, on its own, research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item.

Suleyman expects any AI worth calling an ACI to be able to accomplish this.

"The consequences for the world economy are seismic."

ChatGPT, Google Bard, Microsoft's Bing chatbot, and others are generative AIs based on Large Language Models (LLMs).

These AIs can engage in fluid conversations.

In one example, Google's internal language model LaMDA notoriously convinced one of the company's researchers, Blake Lemoine, that it was sentient. And in a more recent online poll, in which participants were given two minutes to decide whether they were talking to a person or a robot in an online chat, they correctly identified the bot just 60% of the time.

And the Turing Test, introduced by Alan Turing to examine whether a machine has human-level intelligence, has been the north star of artificial intelligence since before the field even had a name.

In the modern era of AI, where an increasing number of products rely on AI for automation and decision making, Suleyman suggested it is unclear whether passing the traditional Turing test is "a meaningful milestone or not."

In his book, which showcases the emerging power of AI, Suleyman argues that the technology is crashing over society like an unstoppable wave and will soon change practically everything.