Scientists Are Teaching Artificial Intelligence Bots To Kill Each Other In A Deathmatch


The Three Laws of Robotics, also known as Asimov's Laws, are often treated as the handbook for robotics. The First Law states:

"a robot may not injure a human being, or through inaction, allow a human being to come to harm".

Scientists have, in a way, "violated" these laws by building AI bots that can kill each other, and kill human players. The test took place in a deathmatch in Doom, the popular 1993 game whose 3D maps look simple by today's standards and which offers multiple gameplay styles.

On September 22nd, 2016, two competitions were held on ViZDoom, a Doom-based AI research platform for reinforcement learning from raw visual information. It lets developers build AI bots that play Doom using only the screen buffer.
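
For a sense of how bare that interface is, here is a minimal sketch using ViZDoom's Python bindings, assuming the vizdoom package and one of its bundled scenario configs are installed (exact file and API names can vary between versions):

```python
import random

import vizdoom as vzd

# Set up a Doom instance; the agent will see only raw screen pixels.
game = vzd.DoomGame()
game.load_config("basic.cfg")  # assumed: a scenario config shipped with ViZDoom
game.init()

# Each action toggles a set of buttons, e.g. [move_left, move_right, attack].
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    pixels = state.screen_buffer  # the only input the bot receives
    reward = game.make_action(random.choice(actions))  # placeholder policy
game.close()
```

The only thing the bot gets each frame is screen_buffer, an array of raw pixels; everything else it must infer by looking.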

ViZDoom is primarily intended for research in machine visual learning and, in particular, deep reinforcement learning. Born out of one man's idea, it was meant to improve the state of AI by teaching computers the art of killing.

That "simple aim" has become a battle, and a race between tech giants, universities and programmers.

Over the past few months, they have been teaching their AI bots (agents), training them to kill in one final deathmatch.

The first contest involved AI players equipped only with rocket launchers, hunting and killing each other on a map they had been given in advance. The agents started with a weapon but could collect ammo and health kits. Agent F1, programmed by Facebook AI researchers Yuxin Wu and Yuandong Tian, won 10 of the 12 matches.

The second contest was more complicated, since it featured unfamiliar maps with a full array of weapons and items scattered throughout the battlefield. This is where the AI really had to work. The agents had to learn to find their way around the field, picking up weapons to use. And just like human players, they reacted accordingly when they spotted an enemy. IntelAct, programmed by Intel Labs researchers Alexey Dosovitskiy and Vladlen Koltun, won 10 out of 12 games.

While in the first contest the agents could learn by repeating a single map over and over (12 matches of 10 minutes each), agents in the second competition (three maps, four matches of 10 minutes each) needed more general capabilities to navigate unknown environments. Each contest totaled two hours of play.


Almost every modern game claims to use computer AI to make characters act autonomously. This is true to some extent. In reality, though, the characters are pre-programmed to perform certain tasks, exhibit certain behaviors, and act within limits. In short, they don't learn and won't adapt.
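
To see the difference, consider a hypothetical sketch of a conventional scripted enemy, the kind most games ship: a hand-written state machine with fixed rules (all names and thresholds here are made up for illustration):

```python
from enum import Enum, auto


class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    FLEE = auto()


def next_state(state, sees_player, health):
    """Hand-written transition rules: the 'AI' never changes them."""
    if health < 20:
        return State.FLEE  # fixed threshold, never tuned by experience
    if state is State.FLEE and health < 50:
        return State.FLEE  # keep running until healed
    if sees_player:
        return State.ATTACK
    return State.PATROL


# The same inputs always produce the same behavior, match after match.
print(next_state(State.PATROL, sees_player=True, health=80))  # State.ATTACK
```

Nothing in those rules ever updates itself, no matter how many games are played.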

The AI in ViZDoom is nothing like that. Agents built on ViZDoom's platform are trained to play the same way humans do.

Using machine visual learning, the researchers gave the AI no access to information inside the game's code. Instead, the agents were set loose to adapt to the game through reinforcement learning. They made decisions based only on what they could "see": they looked at the screen, interpreted what was happening, and figured out what a winning strategy might be.

After taking all of that into consideration, the AI then controlled the character to pursue that goal.
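
In code, that "look at the screen, pick an action" loop reduces to something like the following: a simplified sketch of a deep reinforcement learning policy step, not the competitors' actual models (the network shape, action count, input size, and exploration rate are all assumptions):

```python
import numpy as np
import torch
import torch.nn as nn

N_ACTIONS = 8  # assumed discrete action set (turn, strafe, shoot, ...)


class PixelPolicy(nn.Module):
    """Tiny CNN mapping a grayscale 60x80 frame to action scores."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 6 * 8, N_ACTIONS),  # sized for a 60x80 input
        )

    def forward(self, frame):
        return self.net(frame)


def choose_action(policy, screen, epsilon=0.1):
    """Epsilon-greedy: mostly act on what the net 'sees', sometimes explore."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    frame = torch.from_numpy(screen).float().unsqueeze(0).unsqueeze(0) / 255.0
    with torch.no_grad():
        scores = policy(frame)
    return int(scores.argmax())


policy = PixelPolicy()
fake_screen = np.random.randint(0, 256, (60, 80), dtype=np.uint8)
print(choose_action(policy, fake_screen))
```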

While the AI didn't actually violate Asimov's Laws, and researchers spend a great deal of time convincing people that AI won't kill humans, the path to danger is starting to become visible.

The AI was trained to kill only inside a classic first-person shooter, navigating and learning from its surroundings in order to form a plan. That, it has done well. But the AI can be ported. It was trained with deep reinforcement learning, which rewarded it for killing more players. The fear is that somewhere in the future, robots with the physical and mechanical means, equipped with such AI, could be set loose in the real world.
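
What makes that fear concrete is how little code the incentive takes. A sketch of a kill-based reward signal, built on ViZDoom's frag counter (the weights are hypothetical, not what the competitors used):

```python
import vizdoom as vzd

KILL_REWARD = 1.0    # assumed weight per frag
DEATH_PENALTY = 0.5  # assumed penalty for dying


def deathmatch_reward(game, prev_frags):
    """Reward the agent for new kills, penalize it for dying."""
    frags = game.get_game_variable(vzd.GameVariable.FRAGCOUNT)
    reward = (frags - prev_frags) * KILL_REWARD
    if game.is_player_dead():
        reward -= DEATH_PENALTY
    return reward, frags
```

Swap the game for a different environment and the same few lines describe a very different machine.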