AI Arms Race May Create A Dystopian World Occupied By 'Swarms Of Slaughterbots'

Jaan Tallinn
investor, founding engineer of Skype and Kazaa, co-founder of Centre for Study of Existential Risk and Future of Life Institute

Artificial Intelligence, or AI, is when a form of intelligence is demonstrated by non-living things. And in the modern world of tech and the internet, computers have indeed shown huge advancements in this field.

On one side, a lot of people have shown interest in the technology and have started depending on it like never before. But on the other side, many people have also expressed fear, as researchers, experts and leaders in the field of AI continue to voice their concerns.

And Jaan Tallinn is one of the latter.

The billionaire Estonian computer programmer and investor has seen the tech world first-hand.

Having co-founded two organizations, the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute, to study and mitigate the risks of advancing AI technologies, Tallinn believes that AI can pose a real danger to humanity.

The founding engineer of Skype thinks that one day in the future, it may no longer be safe to even go outside.

Jaan Tallinn.

During his interview with Al Jazeera, Jaan Tallinn remarked:

"We might just be creating a world where it's no longer safe to be outside because you might be chased down by swarms of slaughterbots."

"The reason that humans are in permanent control of this planet and not chimpanzees is that because we are more intelligent than they are. We are not stronger, but we know how to do long-term planning, etc.. Now we as a species are in a race to yield that advantage to machines, which the intuitions of people say is not a good idea."

Tallinn's reference comes from a 2017 short film called Slaughterbots, which was released by the Future of Life Institute as part of a campaign warning about the dangers of weaponized AI.

In other words, what Tallinn said here goes beyond the general fear about AI-powered robots and computers replacing jobs.

His remarks extend to when AI is being utilized by military forces throughout the world.

"Putting AI in the military makes it very hard for humanity to control AI's trajectory, because at this point you are in a literal arms race."

"When you're in an arms race, you don't have much maneuvering room when it comes to thinking about how to approach this new technology. You just have to go where the capabilities are and where the strategic advantage is."

Tallinn also referred to a statement by Alan Turing back in 1951, saying that "once AI becomes smarter than humans, we will lose control to it."

So here, Tallinn suggests that humans should remain in control to prevent the dystopian future many people are afraid of.

This should help prevent the existential risk humanity may face when dealing with AI. A pause and a step back should help ensure that efforts are regulated and that more guardrails can be created.

"The natural evolution for fully automated warfare is swarms of miniaturized drones that anyone with money can produce and release without attribution."

"I think the correct position to take here is that as soon as we cannot rule out that we will remain in control for a long time, we should take necessary precautions to make sure either we remain in control, or if we lose control, the future will still be good for us."

Maintaining control is the key to putting the danger back into Pandora's box.

Tallinn's statement comes at a time when tensions between major powers have escalated around the world.

If humans allow AI to make life-or-death decisions in the military, the results could resemble the automated armies of drones and humanoid robots that Hollywood movies have imagined for some time.

James Cameron, the director of Terminator, for example, has voiced his fear, saying, "I warned you guys in 1984, and you didn't listen."