Technological Singularity: How And When AI Can Be Considered 'Alive'

Borna Jalšenjak
Scientist at Zagreb School of Economics and Management in Croatia

Smart computers have long been the dream of many scientists, researchers and enthusiasts. But for almost as long, the smart computers of the future have been portrayed as evil.

Examples abound: the famous Skynet in the Terminator film series, V.I.K.I. in I, Robot, the Machines in The Matrix, the Red Queen in Resident Evil, and many more.

In almost all of these science-fiction stories, the human race wins, because the AIs of the future are so frequently imagined as the bad guys.

In the movies, humans have little to no hesitation about killing an evil robot or AI. But in real life, should we feel bad about pulling the plug? When AIs finally become as smart as, or smarter than, us, could computers develop consciousness? Could AIs be considered "alive"?

And if so, does it mean that killing an AI is similar to "killing" a living thing?

Debates about the consequences of Artificial General Intelligence (AGI) are almost as old as the history of AI itself.

Borna Jalšenjak, scientist at Zagreb School of Economics and Management in Croatia, has published an essay that discusses super-intelligent AI and the analogies between biological and artificial life.

In the essay, titled The Artificial Intelligence Singularity: What It Is and What It Is Not, Jalšenjak lays out his philosophical-anthropological view of life and how it applies to AI systems.

Borna Jalšenjak.

Jalšenjak theorizes that “thinking machines” will emerge when AI develops its own version of “life”.

At that point, AI will have reached what is called the 'singularity', a term that describes a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. This happens following an 'intelligence explosion'.

It's a time when AI can evolve through its own manipulations: capable of self-improvement, it creates newer versions of itself that are more intelligent than the previous ones, and it does so at an ever-faster pace.

Put another way, the singularity is when AI has become a powerful super-intelligence that qualitatively far surpasses all human intelligence.

“Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve."

Computers are made of hardware and powered by electricity; they will never be organic in the way humans of flesh and blood are.

But if they become so advanced that they surpass humans as the most intelligent beings on Earth, could they be considered "alive"?

There’s a strong tendency in the AI community to view machines as humans or living things, especially as they develop abilities that show even glimpses of intelligence. While thinking of AI as "alive" while it is still Artificial Narrow Intelligence (ANI) is clearly an overestimation, Jalšenjak reminds us that AGI does not necessarily have to be a replication of the human mind.

“There is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid,” he said.

For now, the greatest difference between humans and modern AI is that humans are "alive" and AI algorithms are not.

“The state of technology today leaves no doubt that technology is not alive,” Jalšenjak said, adding, “What we can be curious about is if there ever appears a super-intelligence such like it is being predicted in discussions on singularity it might be worthwhile to try and see if we can also consider it to be alive.”

But a truly intelligent AI would have tremendous repercussions for how we perceive it and act toward it.

Drawing from concepts of philosophical anthropology, Jalšenjak noted that living beings should be able to act autonomously and take care of themselves and their species, in what is known as “immanent activity.”

“Now at least, no matter how advanced machines are, they in that regard always serve in their purpose only as extensions of humans,” Jalšenjak observed.

There are different levels of life. And if we follow the concepts of philosophical anthropology, AIs are indeed slowly making their way toward being alive.

For example, the first signs of "life" take shape when something develops a purpose in its existence, which is already present in today's AIs. And the objection that AI is not "aware" is largely irrelevant: as Jalšenjak points out, plants and trees are alive even though they do not have that sense of awareness.


Another key factor for being considered alive is having the ability to repair and improve itself, to the degree that the organism allows.

So "alive" also means that something should be able to reproduce and take care of its offspring.

Everything humans have considered alive can do all of those things. Environment and genetics have allowed living things to reproduce, sexually or asexually, and to develop the mechanisms needed to learn and adapt for survival.

Jalšenjak suggests that AI's methods of reproduction and survival don't have to be the same as those of living things.

“Machines do not need offspring to ensure the survival of the species. AI could solve material deterioration problems with merely having enough replacement parts on hand to swap the malfunctioned (dead) parts with the new ones. Live beings reproduce in many ways, so the actual method is not essential."

Jalšenjak also pointed out that some sophisticated AI algorithms are capable of self-modification.

In other words, modern machine-learning technologies are, to some extent, already capable of adapting their behavior to their environment. They take in information from the real world as input and can redefine their parameters, so when the world changes, they can retrain themselves on the new information.
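As a rough illustration (not an example from Jalšenjak's essay), this kind of parameter-level adaptation can be sketched as an online-learning loop in which a model updates itself whenever fresh data arrives. The sketch below assumes scikit-learn, and the data source `get_new_batch()` is a hypothetical stand-in for whatever stream of real-world observations the system receives:

```python
# Minimal sketch of a model that "retrains itself" as new data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # the full label set, declared up front for partial_fit

def get_new_batch():
    """Hypothetical source of freshly observed, labeled examples."""
    X = np.random.randn(32, 4)                    # 32 observations, 4 features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)       # placeholder labels
    return X, y

# As the "environment" changes, the model incrementally updates its parameters
# instead of being rebuilt from scratch.
for step in range(100):
    X_new, y_new = get_new_batch()
    model.partial_fit(X_new, y_new, classes=classes)
```

Note that this only adjusts the parameters of a fixed algorithm; the recursive self-improvement described next would go further and rewrite the learning algorithm itself.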

Recursive self-improvement, however, is the key factor that will give AI the "possibility to replace the algorithm that is being used altogether," noted Jalšenjak. "This last point is what is needed for the singularity to occur."

"They will have their own goals, and probably their rights as well. Humans will, for the first time, share Earth with an entity which is at least as smart as they are and probably a lot smarter.”

Further reading: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them