Mozilla 'YouTube Regrets': The Horror That Comes From YouTube's Recommendation Algorithm

YouTube’s recommendation algorithm has been facing intense scrutiny for radicalization, pedophilia, and for being generally “toxic”.

People on the web are exposed to algorithms and AI-powered suggestions more often than they realize. AI learns from patterns, and by gathering the necessary data, it can dig deep into databases to surface what it thinks will appeal to users.

This method has indeed benefited many platforms. One of them is YouTube.

The popular video-streaming platform has a recommended section, which shows up on YouTube as 'Up next'. Powered by YouTube's recommendation algorithm, this section shows what YouTube thinks will appeal to viewers, based on many criteria.

This particular feature has kept a lot of people glued to their screens. Video after video, YouTube has practically no end to what it can recommend.

However, the recommendation algorithm can surface videos that stray far from the viewer's original intent. That is a problem, because 70% of the platform’s viewing time comes from these recommendations.

This is why Mozilla launched the #YouTubeRegrets project: to highlight the issue and urge YouTube to change its practices.


According to Mozilla on the page explaining its #YouTubeRegrets project:

"#YouTubeRegrets is a crowdsourced public awareness campaign run by the nonprofit Mozilla. Mozilla collected YouTube users’ stories about the platform’s recommendation engine leading them down bizarre and sometimes dangerous pathways. This work was catalyzed by our own research on trustworthy AI; by stories in the New York Times and other publications; and by YouTube engineers who have spoken out."

"Our campaign is an attempt to find out. We gave no specific guidance on what these stories should be about, so submissions were from people who self-identified particular content as being bizarre or dangerous."

The stories collected by this project show the darker side of YouTube’s recommendations. They are chilling, and can be disturbing.

"The stories show the algorithm values engagement over all else — it serves up content that keeps people watching, whether or not that content is harmful," explained Ashley Boyd, Mozilla’s VP of Advocacy.

"We believe these stories accurately represent the broad problem with YouTube’s algorithm: recommendations that can aggressively push bizarre or dangerous content."

For casual YouTube users and even children, with parental controls turned on, YouTube's recommendation algorithm can surface videos containing graphic depictions of gore, violence, and hate.

Because YouTube users cannot turn the recommendations feature off, people who are glued to YouTube can be fed problematic content without any means to steer away from it. Making things worse, YouTube monetizes those videos.

To be fair, YouTube has been doing what it can to eliminate such incidents.

The company says it has banned inappropriate children's videos, redirected users away from extremist videos, stopped recommending conspiracy videos, and used AI to address child exploitation, among other measures.

Yet it couldn't stop predators from posting inappropriate comments. And when YouTube tweaked its algorithms to be more aggressive, it inadvertently flagged fighting-robot videos as animal cruelty and wrongly removed videos documenting violence in Syria.

In other words, YouTube's recommendations can go bizarre in ways that YouTube never intended. As a result, the algorithm can show things that run completely against the viewer’s interests, in harmful and upsetting ways.

YouTube's recommendations are indeed useful to many, and weird recommendations can sometimes even be amusing. But for others, like children, misleading videos can "damage" their "impressionable mind".

Searching for "fail videos" in which people hurt themselves in amusing ways can certainly bring some laughs. But several videos later, YouTube can show clips where people really get hurt and "clearly didn't survive the accident". Another user reported a sidebar "full of anti-LGBT and similar hateful content".

There is also a story about an 80-year-old retired scientist from Ecuador whose mind became deranged after he was fed "alternative theories".

#YouTubeRegrets also has stories about how YouTube's recommendation algorithm drove a woman with mental health problems into paranoia, making her "an extreme religious fundamentalist"; how a simple search term like "Achilles tendon" led a user to "fetish videos of girls walking in high heels"; how knife-making videos were followed by "grotesque murders and unsolved crimes from the Victorian Era"; and more.


According to Mozilla:

"#YouTubeRegrets is part of Mozilla’s larger focus to ensure that in a world of AI, consumer technology helps, rather than harms, humanity. We believe that AI should be designed with personal agency in mind, and that companies should be held to account when their AI harms people."

With the project, Mozilla isn't suggesting or advocating that specific content be removed from YouTube.

"Rather, our campaign is focused on 'reach' — drawing attention to the way that AI, in the form of recommendation engines, can amplify certain types of content more than others."

"We believe there should be greater transparency around YouTube’s methods for determining what gets recommended. It is up to YouTube to determine what kind of content their site encourages and recommends, and they must build social responsibility into their recommendation engine in the same way that they have optimized for user engagement."

#YouTubeRegrets gathered stories from people over a two-week period beginning on September 10. The organization emailed its global list of newsletter subscribers and asked them to submit their stories.

"Overall we received hundreds of submissions in five languages. We did not (and could not) verify the authenticity of these stories, so we used our best judgment to determine which ones to include in this showcase," Boyd disclosed.