Facebook Parent Introduces 'Few-Shot Learner' AI To Filter Harmful Content In Less Time


As the largest social media company on the planet, Meta, formerly known as Facebook, and its properties deal with a huge amount of content every single day.

With the sheer number of users generating and uploading content, no human workforce could moderate it all. This is why the company has long used automation to filter out dangerous content.

Unfortunately, for most of its life, the AI powering that automation has failed numerous times to prevent harmful content from being posted and shared.

Among the reasons, it's reported that Meta lacked the moderation algorithms for the languages spoken in Pakistan and Ethiopia, and that the company also lacked adequate training data to understand the different dialects of Arabic.

This time, the company said that it has an improved AI system that should do better.

Calling it the 'Few-Shot Learner', the AI is better because it requires much less training data.

Because of that, the AI can be deployed more quickly than its predecessors.

Models such as Few-Shot Learner can go to work faster because they need fewer examples of labeled data.

Meta created Few-Shot Learner by pretraining it on a massive amount of data gathered from billions of Facebook posts and images in more than 100 languages.

Starting from that raw, unlabeled data, the team then fine-tuned the system for content moderation, using posts and images that were labeled in previous moderation projects, as well as simplified descriptions of the policies those posts had breached.

Once the system was ready, Meta directed it at new types of content, for example to enforce a new rule or to expand into a new language.

It can even be used to look for categories of content without showing it any examples at all.

All it needs is a written description of a new policy, and with only that input, the AI can go to work.
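To make the zero-shot idea concrete, here is a deliberately simplified toy sketch: it flags a post purely by how similar its wording is to a policy description, with no labeled examples at all. Meta's actual system uses a large multilingual neural model, not word counting; the policy text, threshold, and similarity measure below are all invented for illustration.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize into lowercase words and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def zero_shot_flag(post, policy_description, threshold=0.2):
    """Flag a post using only the written policy text -- no labeled examples."""
    return cosine_similarity(bag_of_words(post),
                             bag_of_words(policy_description)) >= threshold

# Hypothetical policy wording -- not Meta's actual rule text.
policy = "posts that discourage people from getting a COVID-19 vaccine"

print(zero_shot_flag("do not get the covid vaccine, it is dangerous", policy))  # True
print(zero_shot_flag("lovely weather for a picnic today", policy))              # False
```

A production system would replace the word-count vectors with embeddings from a pretrained model, which is what lets a single policy sentence generalize across phrasings and languages.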

According to Cornelia Carapcea, a product manager on moderation AI at Facebook, Few-Shot Learner can work using much less effort than previous moderation models.

More conventional moderation systems might need hundreds of thousands or millions of example posts before they can be deployed, she said. But Few-Shot Learner can go to work after seeing just dozens of examples.
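The few-shot mode can be sketched in the same toy style: average a handful of labeled violating posts into a "prototype" and flag anything close to it. The example violations and threshold are invented for illustration; a real system would learn from embeddings, not raw word counts.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize into lowercase words and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def build_prototype(examples):
    """Sum the word counts of a handful of labeled examples into one vector."""
    proto = Counter()
    for ex in examples:
        proto.update(bag_of_words(ex))
    return proto

# A few hypothetical labeled violations -- invented for this sketch.
violations = [
    "vaccines are a scam, refuse the shot",
    "do not let them inject you with the vaccine",
    "the shot will make you sick, skip your vaccine appointment",
]
prototype = build_prototype(violations)

def few_shot_flag(post, threshold=0.25):
    """Flag a post if it is close to the prototype built from a few examples."""
    return cosine_similarity(bag_of_words(post), prototype) >= threshold

print(few_shot_flag("skip the vaccine shot"))            # True
print(few_shot_flag("lovely weather for a picnic today"))  # False
```

The point of the sketch is the data budget: dozens of examples define the prototype, versus the hundreds of thousands a conventional classifier would need.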

This is where the AI gets its name.

"Because it’s seen so much already, learning a new problem or policy can be faster," Carapcea said. "There’s always a struggle to have enough labeled data across the huge variety of issues like violence, hate speech, and incitement; this allows us to react more quickly."

However, the unusually simple way of interacting with this AI makes results rather less reliable.

This happens because the AI was trained with less curated data. This method improves speed but sacrifices some control and knowledge of the system's capabilities.

Still, the ability to deploy the system quickly is a huge advantage.

"If we react faster, then we're able to launch interventions and content moderations in a more timely fashion," Meta Product Manager Cornelia Carapcea said in an interview. "Ultimately, the goal here is to keep users safe."

The company said that with Few-Shot Learner, it is able to automate moderation in about six weeks. This is a huge improvement, considering that previously, Meta needed at least six months to deploy its automation.

According to Meta, Facebook has deployed Few-Shot Learner and managed to reduce the worldwide prevalence of hate speech.

Meta said that the AI system is helping it enforce a rule it introduced back in September, which bans posts that discourage people from getting COVID-19 vaccines, even when the posts don't contain outright misinformation.

Few-Shot Learner works in more than 100 languages and can operate on images as well as text.

[Chart: hate speech prevalence and Few-Shot Learner (FSL). Credit: Meta]

Discussing the technology's potential, Meta said in a blog post:

"We believe that FSL can, over time, enhance the performance of all of our integrity AI systems by letting them leverage a single, shared knowledge base and backbone to deal with many different types of violations. There’s a lot more work to be done, but these early production results are an important milestone that signals a shift toward more intelligent, generalized AI systems."

The AI won't be able to solve all of Facebook's content challenges.

But it is an example of how concerned Meta is about toxic content, and how heavily the company relies on automation and AI for the heavy lifting.

Facebook the company changed its name to Meta because it wants to focus on building the metaverse, a set of virtual spaces in which people can socialize and work. But even in beta, users have already reported harassment there.

This suggests that Meta is dealing with an increasingly complex ecosystem.

Carapcea thinks that Few-Shot Learner has an advantage here due to its speed, and that the AI could eventually be applied to virtual reality content as well.

"At the end of the day, Few-Shot Learner is a piece of tech that's used specifically for integrity," she said.

"But teaching machine learning systems with fewer and fewer examples is very much a topic that's being pushed at the forefront of research."

Read: Meta Introduces 'Horizon Worlds' As Its First Foray Into The Metaverse Business

Published: 
28/12/2021