AI Can Be 'The Single Largest Danger' When It Powers Autonomous Weapons

Kai-Fu Lee
AI Expert, Technology Executive

Artificial intelligence (AI) is among the most transformative technologies humanity has ever seen. While most people may never see the technology in person, most experience its presence on a daily basis.

It is there in the technology embedded in mobile devices, in the systems that power many of the world's most popular websites and apps, in online banking and online shopping, in the cars people own, in parts of healthcare systems, and much more.

Some people accept the existence of such technology with open arms, others with caution, and others with fear.

Kai-Fu Lee is a Taiwanese-born American computer scientist, businessman, and writer. Having worked as an executive at Google, Apple, and Microsoft, and having dealt with the technology for decades, Lee knows more than a thing or two about AI.

"As I’ve traveled the world talking about this subject, I’m constantly asked, 'what will the future hold for humans and AI?' This is an essential question for this moment in history. Some believe that we’re in the midst of an 'AI bubble' that will eventually pop, or at least cool off. Those with more drastic and dystopian views believe everything from the notion that AI giants will 'hijack our minds' and form a utopian new race of 'human cyborgs', to the arrival of an AI-driven apocalypse."

"Each of these projections may be born out of genuine curiosity or understandable fear, but they are usually speculative or exaggerated. They miss the complete picture."

Kai-Fu Lee

The first thing many people get wrong is failing to understand that AI is just a tool. At least at this time, it is far from superintelligence.

"Based on my work experience in the research, development and investment of the AI field [...], such outrageous statements like 'superintelligence' and 'the extinction of humanity' have no practical basis in engineering," said Lee.

"Think tanks and scientists should discuss AI security issues and the changes which AI is bringing to society, but the opinion leaders of the tech community should not mislead the public at this time by telling them that AI will control or destroy mankind. Making such statements is irresponsible on their part. Since most people already have only a limited knowledge of AI, this can cause a false mass panic which is not based on reality."

This makes Lee unlike Tesla boss and OpenAI co-founder Elon Musk, who thinks that the greatest risk we face as a civilization is AI.

Lee, who works directly with the technology, shares a similar view with Andrew Ng, a British-born American computer scientist and technology entrepreneur focusing on machine learning and AI.

Ng once said that he "cannot see any possibility of AI developing into some evil power in the future," adding that after listening to ideas such as the "theory of singularities," his "eyes naturally roll back in disbelief."

Likewise, Lee thinks that the main threat AI poses is unemployment, which might then lead to depression, a loss of people's sense of purpose, and even greater inequality between rich and poor.

But that doesn't mean people should "express an extremely low or zero probability of 'superintelligence'" at this point.

This is because fearing AI will hinder its development. Instead, people should embrace the opportunity of having intelligent AIs, and try to solve the problems that such a superintelligence could cause.

If not, more problems will arise. And it is here that Lee's thinking is closer to Musk's.

Lee explained the top four dangers of burgeoning AI technology: externalities, personal data risks, inability to explain consequential choices, and warfare.

"The single largest danger is autonomous weapons," he said. "That's when AI can be trained to kill, and more specifically trained to assassinate."

Autonomous weapons could significantly transform warfare, since their affordability and precision would make it easier to wreak havoc. What's more, with intelligent machines controlling the weapons, it would be near-impossible to identify who committed the crime, Lee said.

"I think that changes the future of terrorism, because no longer are terrorists potentially losing their lives to do something bad," he says. "It also allows a terrorist group to use 10,000 of these drones to perform something as terrible as genocide."

"It changes the future of warfare," he adds. "We need to figure out how to ban or regulate it."

This is why many have signed an open letter calling for a ban on autonomous weapons.

While Lee's conclusion may be similar to Musk's, his reasoning differs. Where Musk fears a "Super AI" destroying humanity, Lee's concern is automation that cannot be held accountable.

When decisions made by AI in autonomous weapons determine life or death, the situation resembles the trolley problem: the decision-maker must choose whether to divert a vehicle with no brakes away from a crowd of pedestrians, toppling the vehicle and killing fewer people instead.

"Can AI explain to us why it made the decisions that it made?" asked Lee. Because humans have yet to understand how AIs make decisions, owing to the so-called black box problem, autonomous weapons are too risky to have.

"I am confident that by combining regulation, private sector mechanisms, and technology solutions, the AI-induced risks and vulnerabilities will be addressed, in ways similar to every other technology tidal wave that we have experienced," said Lee.

Further reading: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them