A meme is simply an idea, behavior, or style that spreads by imitation from person to person within a culture.
On the internet, memes can spread like wildfire, going viral in the process. They have become an integral part of how people communicate online, often conveying cheerful and fun messages.
Unfortunately, many memes are also created to spread hateful and discriminatory messages. Facebook wants to end this.
Manual identification, however, is not an option, given how many memes people generate at any given moment. This is why the company is turning to AI.
However, AI models trained primarily on text to detect hate speech can struggle to identify hateful memes. Facebook couldn't solve this problem alone.
To entice developers, Facebook launched a $100,000 challenge to help it create AI models capable of recognizing hateful memes.
This, according to Facebook, is "a first-of-its-kind online competition", and it has been accepted as part of the NeurIPS 2020 competition track.
In a blog post, the social giant said:
"The Hateful Memes data set contains 10,000+ new multimodal examples created by Facebook AI. We licensed images from Getty Images so that researchers can use the data set to support their work. We are also releasing the code for baseline-trained models."
In the blog post, the company explained that detecting hateful memes is a multimodal problem: a model needs to look at both the image and the text overlaid on it, and understand how the two are used in conjunction to grasp the meme's context.
Facebook gave some examples of "mean" memes, since it opens the full dataset only to approved researchers.
The company said the dataset contains memes of a sensitive nature that are often reported on social media, spanning several categories.
Detecting hate speech is a difficult problem for AI.
While memes appear to be simple images with straightforward messages, they are complex for computers to understand. Their context adds an extra layer of complexity that no one-size-fits-all AI solution can handle.
Because memes involve cultural, racial, and language-based context that changes very frequently, creating AI capable of understanding them has proven increasingly difficult.
Facebook's approach is to build a more effective detection tool by making AI capable of understanding context "holistically", just as people do.
"To address this challenge, the research community is focused on building tools that take the different modalities present in a particular piece of content and then fuse them early in the classification process. This approach enables the system to analyze the different modalities together, like people do," wrote Facebook.
"The task requires subtle reasoning, yet is straightforward to evaluate as a binary classification problem," wrote the researchers on their paper.
When viewing a meme, for example, people don't think about the words and the photo independently of each other.
"We understand the combined meaning together."
Facebook believes the best solutions to this problem will come from open collaboration among experts across the AI community.
By releasing the Hateful Memes data set, Facebook is establishing baselines for the community using several well-known model architectures.
"We tested two unimodal systems and several multimodal systems. In late fusion, we trained the models separately and averaged their two scores during inference to get a prediction. In mid-fusion, we concatenated the BERT and ResNet-152 representations and fed them into a two-layer classifier (ConcatBERT)."
"Finally, we used several BERT-derived architectures that fuse image and text understanding earlier in the process: a supervised multimodal bi-transformer model (MMBT), and state-of-the-art self-supervised (ViLBERT and Visual BERT)."
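The late-fusion and mid-fusion strategies quoted above can be sketched in a few lines of PyTorch. This is a hedged illustration, not Facebook's actual baseline code: the dummy feature dimensions and the `ConcatClassifier` name are placeholders, whereas the real baselines extract text features with BERT and image features with ResNet-152 before concatenating them.

```python
import torch
import torch.nn as nn

# Placeholder feature sizes; real baselines use BERT (text) and
# ResNet-152 (image) representations, which are much larger.
TEXT_DIM, IMAGE_DIM, HIDDEN = 16, 32, 24

def late_fusion(text_score: torch.Tensor, image_score: torch.Tensor) -> torch.Tensor:
    """Late fusion: the two unimodal models are trained separately,
    and their scores are simply averaged at inference time."""
    return (text_score + image_score) / 2

class ConcatClassifier(nn.Module):
    """Mid-fusion (ConcatBERT-style sketch): concatenate the text and
    image representations and feed them into a two-layer classifier."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(TEXT_DIM + IMAGE_DIM, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, 1),  # binary output: hateful vs. not
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return torch.sigmoid(self.classifier(fused))

# Usage with random stand-in features for a batch of 4 memes
text_feat = torch.randn(4, TEXT_DIM)
image_feat = torch.randn(4, IMAGE_DIM)
model = ConcatClassifier()
probs = model(text_feat, image_feat)
print(probs.shape)  # torch.Size([4, 1])
```

Early-fusion models such as MMBT, ViLBERT, and Visual BERT go further still, combining the two modalities inside the transformer layers rather than after the encoders have finished.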
Facebook is releasing the code for these baseline models on GitHub.
"We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes project will enable Facebook and others to do more to keep people safe."