Q4 2017: YouTube Removed 6 Million Videos Automatically Using AI

25/04/2018

YouTube published its first-ever quarterly Community Guidelines enforcement report, in which the company noted that it had removed 8.3 million videos that violated its Community Guidelines between October and December 2017.

Of those, the popular video streaming website flagged 6.7 million automatically using AI rather than through human reports, and 75 percent of them were removed before they ever received a single view.

As the world’s largest video platform, YouTube polices its network to ensure that users don't upload videos that violate its Community Guidelines. In practice, this means YouTube does not tolerate videos that are sexually explicit, hateful or abusive, violent or repulsive, or that depict harmful or dangerous acts.

Getting to this point, YouTube made plenty of mistakes.

For example, it had previously flagged legitimate channels by mistake, allowed ads to run on channels promoting terrorist propaganda, Nazism and pedophilia, and blocked numerous alt-right channels. It also allowed its search feature to auto-suggest disturbing queries.

By publishing the numbers in its Community Guidelines enforcement report, YouTube is illustrating how difficult it is to police a video streaming service that receives 400 hours of new video uploads every minute.

Among the numbers, YouTube also revealed that humans flagged 9.3 million videos in Q4 2017, with most reports coming from viewers in the U.S., India and Brazil. The most common reason videos were flagged (30 percent of reports) was sexually explicit content.

The runner-up was spam and misleading content, which was reported nearly as often, at 26.4 percent of all flagged videos.