The Next AI Revolution 'Will Not Be Supervised' Because Not Everything Can Be Predicted

Yann LeCun
Chief AI Scientist at Facebook, Inc.

The world of computing has advanced rapidly because Artificial Intelligence has changed the way machines "learn".

Still, computers are not as smart as humans.

A baby, for example, learns the concept of falling while learning to walk. When a fall hurts, the baby learns and tries not to fall again. This learning happens quickly, in ways computers cannot match.

Deep learning, the family of AI algorithms that sparked the current AI revolution, has allowed researchers to create astoundingly capable machines. In computer vision, for example, such systems can spot objects far faster than even the most capable humans.

But computers fail when dealing with sophisticated reasoning.

In other words, machines don’t truly understand the world around them, which makes them fall short in their ability to engage with the environment.

"Obviously we’re missing something," said Yann LeCun, a professor at New York University and Facebook's Chief AI Scientist, at the International Solid State Circuits Conference in San Francisco.

“Nobody tells the baby that objects are supposed to fall."

"Hardware capabilities and software tools both motivate and limit the type of ideas that AI researchers will imagine and will allow themselves to pursue. The tools at our disposal fashion our thoughts more than we care to admit."

For years, researchers have trained AI algorithms using supervised learning, in which human-provided examples help machines find relationships in data and learn its patterns.

This approach proves useful for building text prediction systems such as autocomplete, or for generating convincing prose. The vast majority of AI research has focused on supervised or reinforcement learning.
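To make the contrast concrete, here is a minimal sketch of the supervised setup, under toy assumptions: a tiny PyTorch classifier trained on points whose labels stand in for human-provided answers. The data, model, and hyperparameters are placeholders, not anything described in LeCun's talk.

```python
# Minimal sketch of supervised learning, under toy assumptions: a tiny PyTorch
# classifier trained on 2-D points whose labels stand in for human annotation.
import torch
import torch.nn as nn

# Toy "labeled" dataset: each point comes with a class decided ahead of time.
inputs = torch.randn(256, 2)
labels = (inputs[:, 0] + inputs[:, 1] > 0).long()  # the human-style supervision signal

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    loss = loss_fn(model(inputs), labels)  # error is measured against the labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```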

But this has limitations.

LeCun suggests that new techniques should be developed to help AIs learn the foundational skill of understanding the world, for example by giving machines a kind of working memory in which they can derive and accumulate basic facts and principles, then draw on them in future interactions.

The answer, thinks LeCun, a recipient of the Turing Award, lies in the deep learning subcategory known as "unsupervised learning".

While algorithms based on supervised and reinforcement learning are taught using human input, unsupervised learning allows AIs to learn patterns on their own.

LeCun prefers the term “self-supervised learning” because it essentially uses part of the training data to predict the rest of the training data.
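The idea can be sketched with a toy masked-prediction setup, assuming PyTorch and made-up data (none of it from LeCun's talk): part of each input is hidden, and the network is trained to recover it from what remains.

```python
# Minimal sketch of the self-supervised idea: hide part of the data and train a
# model to predict the hidden part from the visible part. Everything here
# (vocabulary, masking scheme, model) is a toy placeholder, not LeCun's setup.
import torch
import torch.nn as nn

vocab_size, seq_len, batch_size = 100, 16, 32
data = torch.randint(0, vocab_size, (batch_size, seq_len))  # toy token sequences

# Mask one random position per sequence; the original token becomes the target.
mask_token = vocab_size  # reserve an extra id to act as the [MASK] symbol
positions = torch.randint(0, seq_len, (batch_size,))
rows = torch.arange(batch_size)
targets = data[rows, positions]
masked = data.clone()
masked[rows, positions] = mask_token

# Small network that reads the masked sequence and guesses the hidden token.
model = nn.Sequential(
    nn.Embedding(vocab_size + 1, 64),
    nn.Flatten(),
    nn.Linear(64 * seq_len, 256),
    nn.ReLU(),
    nn.Linear(256, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(masked)           # predict the hidden token from the visible ones
    loss = loss_fn(logits, targets)  # supervision comes from the data itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point is that the training targets come from the data itself; no separate human labeling step is needed.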

"Everything we learn as humans - almost everything - is learned through self-supervised learning. There’s a thin layer we learn through supervised learning, and a tiny amount we learn through reinforcement learning,” he said.

“If machine learning, or AI, is a cake, the vast majority of the cake is self-supervised learning."

In practice, researchers could start by focusing on temporal prediction: training large neural networks to predict the second half of a video after seeing the first half. Not everything in the world can be predicted, but building this foundational sense of how the world behaves should give AIs a better grasp of reasoning.
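A minimal sketch of that temporal-prediction setup, again under toy assumptions (a small recurrent network on synthetic sequences rather than actual video), might look like this:

```python
# Minimal sketch of temporal prediction: given the first half of a sequence,
# predict the second half. Real work would use video frames and much larger
# networks; a tiny GRU on synthetic random-walk signals stands in for both here.
import torch
import torch.nn as nn

batch, seq_len, half, feat = 64, 32, 16, 8
# Synthetic "clips": smooth random walks, one feature vector per time step.
clips = torch.cumsum(torch.randn(batch, seq_len, feat) * 0.1, dim=1)
past, future = clips[:, :half], clips[:, half:]

class FuturePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(feat, 64, batch_first=True)  # summarize the past
        self.decoder = nn.Linear(64, half * feat)          # emit the future steps

    def forward(self, past_frames):
        _, hidden = self.encoder(past_frames)              # hidden: (1, batch, 64)
        out = self.decoder(hidden.squeeze(0))              # (batch, half * feat)
        return out.view(-1, half, feat)

model = FuturePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    prediction = model(past)
    loss = nn.functional.mse_loss(prediction, future)  # compare with the real "second half"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```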

"This is kind of a simulation of what’s going on in your head, if you want,” LeCun said.

Ultimately, unsupervised learning should help machines develop a model of the world that can predict its future states, he said. If that proves possible, "The next revolution of AI will not be supervised."

Big data and fast parallel computation became the norm that propelled machine learning, and they in turn enabled more sophisticated deep learning.

As deep learning becomes a focus of computing, it is pushing at the boundaries of what computers can do. With deep learning poised to take over the majority of the world's computing activity, unsupervised learning could have important implications for researchers hoping to push AI further.

"If you go five, ten years into the future, and you look at what do computers spend their time doing, mostly, I think they will be doing things like deep learning - in terms of the amount of computation," LeCun added.

Deep learning may not make up the bulk of computer sales by revenue, but, "in terms of how we are spending our milliwatts or our operations per second, they will be spent on neural nets."