Background

DeepMind AI Can Teach Itself, And Learns How To Understand Others' Thoughts

To make machines more capable, we need to make them learn about their surroundings by themselves.

Here, researchers at Google's DeepMind were able to create an AI that can figure things out for itself.

While computers and robots aren't good at exploring the world on their own (yet), AI, which is typically only capable of parsing data through its neural network, has been improved with a learning paradigm called ‘Scheduled Auxiliary Control’ (SAC-X).

This gives the robot a simple goal it needs to achieve, and rewards it when it completes the task.

SAC-X follows a general principle: encourage the agent to explore the world using its sensors. But here, the researchers didn't tell the AI how to complete the given task. For example, when moving an object, the AI is not taught how to move its arm.

So the AI explores the environment and tests the functionality of its sensors. When it finally learns how to do the task, it is rewarded with a point. If it fails, no point is given.

In that example, the robot arm moves around and fumbles with the box. The interesting part is that the machine is not pre-programmed to perform the task it's designed for. The robot makes mistakes so it can learn by figuring things out for itself.

The possibilities of this advancement are nearly endless.

In the future, this advancement could make robots capable of moving their joints by themselves without being programmed to, and of adapting to an ever-changing environment. A robot may soon be able to make a bed, empty the trash, or serve food, among other tasks that are currently seemingly impossible for a robot to complete properly.

There are countless ways to do those tasks.

And this advancement, which starts with a robot moving its arm, can be the foundation of that future.

Another advancement DeepMind has achieved is an AI capable of predicting what other AIs will do.

In humans, this skill only develops around the age of four. It is the ability to infer other people's desires, beliefs, or intentions. Throughout history, we thought this trait was unique to humans. But as technology advances, evidence suggests that assumption is somewhat untrue.

One example is watching someone drink water. We assume that the person has a 'desire' to quench thirst, and a 'belief' that drinking the water will achieve this.

This 'theory of mind' is the key to our social interaction, and we long thought it too complex for computers to understand. Here, DeepMind hopes that AI can imitate humans by developing this basic ability.

To do this, the researchers created a bot with the intention of developing a basic 'theory of mind.'

Known as Theory of Mind-net, or ToM-net, the bot is able to predict what other AI agents will do in a virtual environment. With the ability to predict what other AIs will do, and to understand whether they hold 'false beliefs' about the world, such an AI could help humans create better care robots.
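The core idea, an observer that watches another agent act and builds a model of what it will do next, can be sketched very simply. The real ToM-net uses neural networks trained on many agents; the frequency-counting `Observer` below is a hypothetical, simplified stand-in, and the state and action names echo the desire/belief example above.

```python
from collections import Counter

# Minimal sketch of the idea behind ToM-net: watch another agent's past
# behaviour, then predict its next action. The real system uses neural
# networks; this frequency-based observer is a simplified illustration.

class Observer:
    def __init__(self):
        self.seen = Counter()   # how often each action followed each state

    def watch(self, state, action):
        """Record one observed (state, action) pair from the other agent."""
        self.seen[(state, action)] += 1

    def predict(self, state):
        """Predict the action the agent most often took in this state."""
        candidates = [(a, n) for (s, a), n in self.seen.items() if s == state]
        if not candidates:
            return None         # no belief yet about this situation
        return max(candidates, key=lambda pair: pair[1])[0]

observer = Observer()
# The observed agent usually reaches for the cup when it sees one,
# matching the 'desire to quench thirst' example in the text.
for _ in range(5):
    observer.watch("sees_cup", "reach_for_cup")
observer.watch("sees_cup", "look_away")

print(observer.predict("sees_cup"))   # -> reach_for_cup
```

A false belief would show up here as the observer correctly predicting an action that is wrong about the world, for example the agent reaching toward a cup that has been moved, because the prediction is based on the agent's observed behaviour rather than on the true state of the environment.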

Published: 
03/03/2018