Artificial General Intelligence, And How Necessary Controls Can Help Us Prepare For Its Arrival

Artificial intelligence (AI) can already do a lot of things. Given the technology's significant potential, we humans must be cautious.

Researchers have developed artificial agents capable of playing chess and Go better than professional human players. They have also created AIs capable of mastering complex strategy games like StarCraft II.

We have also created AIs to drive vehicles, to help doctors diagnose diseases (like Google's LYNA in detecting breast cancer), and to solve other complex problems.

AI has also come to the web: Google uses it in its search engine, Facebook in its News Feed, and Apple in its Face ID security system, among many others.

All of these are examples of AI infused into software, a type of artificial intelligence referred to as Artificial Narrow Intelligence (ANI).

While these AIs can be outstanding at a specific task, performing at a level on par with or even superior to humans, they are actually quite dumb at everything else.

But that doesn't mean ANI is free from potential future 'dystopian' problems.

Read: Paving The Roads To Artificial Intelligence: It's Either Us, Or Them

ANI technology is already present in our everyday lives and has helped us do many things. But alongside its impressive capabilities come problems we humans need to solve.

For example, self-driving cars can crash for reasons like misinterpreting road signs. But that is just the beginning, as further advances in AI raise the stakes much higher.

AGI (Artificial General Intelligence) is another form of artificial intelligence, one with far more advanced capabilities, in part because its systems would run on far greater computational power.

Where ANI can only do one thing, AGI is the next level: it would possess human-level intelligence, capable of mastering many things.

If humans can solve AI bias problems and create smarter AIs, future AGI systems won't only be capable of learning and solving problems; they will also adapt and self-improve. These capabilities would grant them the ability to perform tasks beyond those they were originally designed for.

This alone should give us humans pause.

Importantly, AGI's rate of improvement could be exponential as it becomes far more advanced than its human creators. From AGI, computers could quickly advance to what's called Artificial Super Intelligence (ASI).
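
To see what "exponential" means here, consider a toy back-of-the-envelope sketch in Python. The 10% gain per generation is an arbitrary, hypothetical figure, chosen only to show how repeated self-improvement compounds:

    # Toy illustration of recursive self-improvement compounding.
    # The 10% per-generation gain is an arbitrary, hypothetical figure.
    capability = 1.0
    for generation in range(1, 51):
        capability *= 1.10  # each generation builds a slightly better successor
        if generation % 10 == 0:
            print(f"generation {generation}: {capability:.0f}x the original")
    # After 50 cycles the system is roughly 117x its original capability,
    # from nothing more than a modest, repeated gain.

The exact numbers are meaningless; the shape of the curve is the point.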

While fully functioning AGI systems don't yet exist, experts estimate that they will be with us sooner or later. And when they arrive, they will bring concerns never before encountered in the history of human civilization.

The most notable is that we humans would have less power to control our own creations.

Humans were once afraid of fire, then scared of planes; we tend to approach the unknown with great caution. But unlike those technologies, AIs can operate at a global scale.

For example, AGI could be used to run powerful applications that cure diseases and solve complex global challenges such as climate change and food security. But a failure to implement appropriate controls could lead to catastrophic consequences.

Imagine these:

  • An AGI system tasked with preventing a disease decides to solve the problem by killing everybody who carries it, or anyone with a genetic predisposition for it.

    It does this because its predictions show potentially fatal outcomes whenever a human carries the defective genes. Passed on to the next generations, those genes would affect not only the individuals who carry them; mutations could leave humans in general with an even shorter life expectancy.

  • A terrorist mastermind is always on the move, flawlessly evading multiple military assaults. This agile enemy has forced the military to deplete its resources unnecessarily quickly. To guarantee the elimination of this particular target, and to preserve the military's resources for future operations, an autonomous AGI military drone decides to launch a single attack that kills everyone in a large area.

    It does this because its calculations conclude that if the target lives, the enemy will launch a massive attack on civilians the next day, with even more casualties.

  • Because the world is so polluted, with some areas having little to no chance of recovering within several hundred years, an AGI tasked with helping humans preserve nature decides to eradicate the humans and the technologies that cause the pollution.

    It does this to save time, so that near-future generations can still benefit from the Earth while it lasts.

These scenarios are mere predictions, impossible to forecast with certainty. But just like predictions that came before them, many are plausible.

Hollywood movies have raked in billions of dollars by showing how AIs of the future could threaten humans. Various dystopian futures have been depicted in which humans eventually become obsolete, leading to their extermination or extinction. Take the Terminator franchise as an example.

Others have put forward less extreme but still significant disruptions, including the malicious use of AGI for terrorism and cyberattacks, as well as already-present realities like the displacement of human workers and mass surveillance, to name only a few.

Elon Musk believes that AI could start a war through fake news, spoofed email accounts, and fake press releases, simply by manipulating information. Describing his fears about AI, he said that robots will be able to do everything better than humans.

AI "doesn't have to be evil to destroy humanity," said Musk. And as the world is becoming one big computer, AI is becoming more profound than electricity or fire, thus making it something we must fear if we fail.

So while predictions remain predictions, there is a need for human-centered investigation into the safest ways to create and manage AIs before we welcome AGI, so we can minimize the risks and maximize the benefits.

Controlling AGI will not be a simple task, and we shouldn't expect it to be straightforward.

But what matters most is how we humans control our own behavior, which relies on consciousness, emotions, and moral values, and how we reflect that in the way we apply AIs. AGI doesn't need human attributes, but it can learn from them. After all, AIs are only as good as the data they are trained on.
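
To make that last point concrete, consider a minimal sketch in Python. The loan-approval framing, the feature names, and every number here are hypothetical, invented purely for illustration; the point is simply that a model trained on biased historical decisions reproduces that bias:

    # Hypothetical demo: a classifier trained on skewed historical decisions
    # learns the skew. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, size=n)   # two applicant groups, 0 and 1
    score = rng.normal(size=n)           # genuine creditworthiness
    approved = (score > 0).astype(int)   # fair decisions would use score alone
    # Simulate biased history: most group-1 applicants were denied regardless.
    approved[(group == 1) & (rng.random(n) < 0.7)] = 0

    model = LogisticRegression().fit(np.column_stack([score, group]), approved)

    # Identical scores, different groups: the model has learned the bias.
    print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])

Nothing in the model is "evil"; it simply optimizes on the data it was given, which is exactly why the quality and values embedded in training data matter.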

Arguably, there are three sets of controls that researchers can work on:

  1. Creating the controls required to ensure that AGI system designers and developers build safe AGI systems.
  2. Knowing how to build AGI with "common sense", morals, operating procedures, decision rules, and so on (a toy sketch of such a decision-rule layer follows this list).
  3. Defining the protocols AGI needs in order to operate, including regulation, codes of practice, standard operating procedures, monitoring systems, and infrastructure.
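
As a very rough illustration of the second set of controls, here is a toy Python sketch. Every name and rule in it is hypothetical, and a real safety mechanism would be vastly more involved; the idea it shows is just that hard decision rules can sit outside the planner, so that no utility estimate is allowed to override them:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        estimated_benefit: float  # the planner's own utility estimate
        harms_humans: bool
        irreversible: bool

    def is_permitted(action: Action) -> bool:
        """Hard constraints no benefit estimate may override."""
        if action.harms_humans:
            return False
        if action.irreversible and action.estimated_benefit < 0.99:
            return False  # irreversible actions demand near-certain benefit
        return True

    def choose_action(candidates: list[Action]) -> Action | None:
        """Pick the highest-scoring action that passes every constraint."""
        permitted = [a for a in candidates if is_permitted(a)]
        return max(permitted, key=lambda a: a.estimated_benefit, default=None)

    # The disease scenario above: the rule layer vetoes the 'solution'
    # the planner scores highest.
    plans = [
        Action("quarantine and treat carriers", 0.6,
               harms_humans=False, irreversible=False),
        Action("eliminate all carriers", 0.95,
               harms_humans=True, irreversible=True),
    ]
    print(choose_action(plans).name)  # -> "quarantine and treat carriers"

The design point is that the constraints live outside whatever the system has learned: a monitoring layer we build and audit, not a hoped-for property of its training.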

Humans play a huge role in the advancement of AI, and the discipline of human factors and ergonomics can offer the methods needed to identify, design, and test such controls, well before AGI systems arrive.

This would allow us to identify where and when new controls are required, how to design them, and how to remodel them when risks are found, ensuring that AI models of the future draw on our cognition and decision making, and inherit humanistic values.

In the meantime, we should see the evolution of AI as something that can help humankind evolve into better beings.