YouTube Is Recommending Videos That Violate Its Own Policies, Study Finds


YouTube is the largest video-streaming platform the internet has ever seen. No similar platform has come close in terms of size or influence.

YouTube's user base has grown sharply in recent years: in 2021 there were approximately 1.86 billion YouTube users worldwide, up from 1.47 billion in 2017. Around 500 hours of video are uploaded to YouTube every minute, and about 1 billion videos are watched every single day.

If those numbers aren't staggering enough, YouTube also leads video-streaming traffic in the U.S. with a 90% user share.

With that many videos to serve users across different countries, languages, demographics, and ages, the company relies on algorithms and AI to understand people's intentions and preferences.

The goal is to lift that heavy burden and make the experience better, which is a good thing.

The bad thing, according to an investigation by the nonprofit Mozilla Foundation, is that YouTube’s algorithm recommends “regrettable” videos that violate even its own content policies.

Related: The 'Autoplay' Feature: YouTube Denies The Existence Of ‘Rabbit Hole Effect’

The findings are based on data collected through Mozilla’s RegretsReporter, a browser extension that lets users report harmful videos and the recommendations that led them there. Roughly 30,000 YouTube users installed the tool and contributed data about their experiences.

In one example, a volunteer who used the extension flagged a recommendation after watching a video about the U.S. military, only to find that YouTube had recommended a misogynistic video titled “Man humilitates [sic] feminist in viral video.”

Mozilla said that extension users flagged a total of 3,362 regrettable videos between July 2020 and May 2021. The most frequent categories of regrettable videos were misinformation, violent or graphic content, hate speech, and spam or scams.

Mozilla also found that recommended videos were 40% more likely to be reported by volunteers than videos they had searched for.

The study also found that 71% of the videos volunteers deemed regrettable had been recommended by YouTube’s own algorithm.

While all YouTube users are affected, non-English speakers are hit hardest: the rate of YouTube regrets was 60% higher in countries where English is not the primary language.

Almost 200 videos recommended by the algorithm (around 9% of the total) have since been removed from YouTube.

But that was only after the videos had already racked up a collective 160 million views.

Read: Mozilla 'YouTube Regrets': The Horror That Comes From YouTube's Recommendation Algorithm


Brandi Geurkink, Mozilla’s Senior Manager of Advocacy, said:

"Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies. We also now know that people in non-English speaking countries are the most likely to bear the brunt of YouTube’s out-of-control recommendation algorithm."

"Mozilla hopes that these findings — which are just the tip of the iceberg — will convince the public and lawmakers of the urgent need for better transparency into YouTube’s AI."

YouTube has tried to reduce the spread of harmful content by making numerous tweaks, but Mozilla’s study suggests there’s still a lot of work to do.

In response, YouTube said:

“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone."

"Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%."

YouTube added that it welcomes further research on this front and is exploring options to bring in external researchers to study its systems.

Published: 09/07/2021