Take AI Out Of Facebook, And Facebook Will 'Basically Crumble'

Yann LeCun
Turing Award recipient, Chief AI Scientist at Facebook

Artificial Intelligence (AI) powers much of the web, mobile apps, and beyond. Facebook, the largest social media platform on the internet, has woven the technology so deeply into its products that it could hardly operate without it.

Facebook executives, especially founder and CEO Mark Zuckerberg, have long preached that AI would unlock a new future for humanity. Accordingly, Facebook is, at its core, built upon that technology.

Yann LeCun is a French computer scientist known for his work in machine learning, computer vision, mobile robotics, and computational neuroscience. He is a founding father of convolutional neural networks, one of the main creators of the DjVu image compression technology, a co-developer of the Lush programming language, and a recipient of the Turing Award.

He was among the group of scientists who retained their faith in deep neural networks during the so-called 'AI winter', a period of reduced funding and interest in the field.

LeCun, together with Geoffrey Hinton and Yoshua Bengio, helped underpin the modern proliferation of AI technologies. And this is why they are referred to by some as the "Godfathers of AI" and "Godfathers of Deep Learning".

When LeCun joined Facebook in late December 2013, AI was entering a boom period. He became the first director of Facebook AI Research (FAIR), and later the company's Chief AI Scientist.

With AI already the lifeblood of Facebook and its future, the company simply cannot live without it.


LeCun said:

"You take AI out of Facebook, and basically the services crumble."

This is because Facebook uses the technology to run everything from its creepy, ultra-precise ad targeting to photo tagging, News Feed ranking, translations, and much more.

AI has also become integral to how Facebook detects hate speech and fake news, as well as how it forecasts the spread of COVID-19.

LeCun compares the way machines learn to how babies learn by interacting with their surroundings.

If machines can better replicate the techniques babies use, LeCun believes AI could gain the common sense needed to reach human-level intelligence:

"Once we have a working methodology for that, we’ll have a tool that enables a machine to learn enormous amounts of knowledge about how the world works from physical reality — just by observing the world. They’d be able to learn world models that are predictive, which is essential to intelligence."

While LeCun says his expertise lies in research, especially in setting goals and working with researchers to generate new AI techniques, he is not fond of the term Artificial General Intelligence (AGI).

AGI refers to the idea that computers could perform any intellectual task a human can. LeCun has argued that "there is no such thing as AGI" because "human intelligence is nowhere near general."

However, he is pursuing “human-level AI.” And his chosen technique for making AI smarter is self-supervised learning.

In supervised learning, researchers label data, then feed the labeled data to an algorithm to teach an AI model the intended pattern. The process can be painstakingly difficult, time consuming, and sometimes error prone. In self-supervised learning, by contrast, there’s no need for human annotation.

Instead, the system generates training signals from the data itself, and uses them to teach itself.
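A minimal sketch of that idea, assuming a masked-word setup (one common flavor of self-supervised learning, used here purely for illustration): each unlabeled sentence is turned into training pairs by hiding one word, and the hidden word itself becomes the label. No human annotates anything.

```python
# Illustrative sketch only: real systems (e.g. masked language models)
# do this over tokenized corpora at massive scale with neural networks.
# The point is that the (input, target) pairs come from the data itself.

def make_self_supervised_pairs(sentence):
    """Turn one unlabeled sentence into (masked input, target word) pairs."""
    words = sentence.split()
    pairs = []
    for i in range(len(words)):
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        # The supervision signal (the target) is generated from the data,
        # not provided by a human labeler.
        pairs.append((" ".join(masked), words[i]))
    return pairs

pairs = make_self_supervised_pairs("machines learn by observing the world")
print(pairs[0])
# → ('[MASK] learn by observing the world', 'machines')
```

A model trained to fill in the blanks across millions of such sentences ends up learning structure in the data without a single hand-written label.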

During a session at the International Conference on Learning Representations (ICLR) 2020, which took place online due to the coronavirus pandemic, LeCun argued that self-supervised learning could lead to AI that’s more human-like in its reasoning.

At this time, self-supervised learning is already advancing Facebook's linguistic tools. But LeCun believes the technique will only reach its potential once it can reason like a human.

"Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It’s basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way."

"This is the type of [learning] that we don’t know how to reproduce with machines.”