Background

YouTube Uses Google AI To Hunt Down Extremist Content


YouTube is again stepping up its efforts to reduce the amount of extremist content on its platform.

This time, the video-streaming platform is going beyond simply flagging and removing extremist videos: YouTube is using machine learning to hunt that content down. The early results suggest AI can be a dramatic upgrade over human reviewers when it comes to flagging terrorist content.

In a post on its blog, Google reported a significant improvement over previous efforts.

With AI, YouTube can flag posts before a human does 75 percent of the time, and the system reviews content twice as fast as human reviewers can. Those numbers are expected to improve as Google further develops the system.
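The workflow described above can be sketched as a simple triage pipeline: a model scores each upload, and anything above a threshold is escalated to human reviewers before a viewer report would normally surface it. The keyword-based scoring function below is a trivial placeholder of my own, not Google's actual model, and all names are illustrative.

```python
# Hypothetical triage sketch: score uploads, escalate high-risk ones to
# human review. The "classifier" here is a stand-in keyword heuristic.

REVIEW_THRESHOLD = 0.8

def score_video(metadata: dict) -> float:
    """Placeholder classifier: return a risk score in [0, 1]."""
    flagged_terms = {"extremist", "terror", "violence"}
    words = metadata.get("title", "").lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / 2)

def triage(uploads: list[dict]) -> list[dict]:
    """Return the uploads that should be queued for human review."""
    return [u for u in uploads if score_video(u) >= REVIEW_THRESHOLD]

uploads = [
    {"id": "a1", "title": "Cooking pasta at home"},
    {"id": "b2", "title": "Extremist terror propaganda clip"},
]
queue = triage(uploads)
```

A real system would replace `score_video` with a trained model operating on the video itself, but the shape of the pipeline (machine scoring first, humans reviewing the flagged subset) is what lets the machine side run far faster than reviewers working alone.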

This attempt follows YouTube's announcement of the 'Redirect Method', which redirects searches for extremist content toward videos that debunk and delegitimize extremist viewpoints.

In the same blog post, Google said it will also create stricter guidelines for all videos, even content that doesn't actually violate policy.


Google's AI will begin policing YouTube videos that get reported, including extremist videos, hate speech, and content that is offensive or violent.

Even if some of those videos don't actually violate any specific YouTube policy, they can be placed in a state of limited access. In that state the video is not removed outright, but comments, likes, and search prioritization are disabled, which also prevents uploaders from monetizing their videos.

The move comes after British Prime Minister Theresa May described online communities and social media platforms as "safe spaces" for terrorists and other extremist groups while calling on tech companies to do more to combat extremism online.

Social media giants like Facebook and Twitter have also rolled out their own initiatives to cut down on extremism on their respective services.

As for YouTube, it has suffered a direct financial hit from the proliferation of terrorism-affiliated videos. Earlier in 2017, advertisers boycotted the platform after their ads appeared next to violent and offensive content. Since then, the company has publicly apologized to advertisers multiple times and promised to work harder to remove extremist content.

This time, it's using AI more extensively for help.

Published: 03/08/2017