To Fight Trolls, YouTube Updated Its Spam Detector, And Began Enforcing Stricter Rules

Troll film 2022, YouTube

Trolls are everywhere on the internet, and on Google's YouTube, they thrive.

YouTube, the world's most popular video-sharing platform and one of its most-used search engines across device platforms, offers much more than entertainment: it hosts informational content, promotional material, and more.

YouTube is also known as a place where content creators show off their skills and knowledge on almost all imaginable topics.

And with ad revenue-sharing models to entice the creation of compelling content, some have even given up their jobs and become full-time YouTube vloggers.

The thing is, content creators have to deal with trolls, and on YouTube, trolls are among the worst kinds of audience.

Time and again, some people simply get jealous of counterparts who get better view counts and start abusing them in the comments section. Some cybercriminals are also known to use the comments section to plant phishing URLs leading to compromised websites, in order to steal viewers' personal information.

There are also many instances of bots that ruin user experience, particularly during live streaming.

Google has done its part: the company says it successfully removed more than 1.1 billion spam comments in the first six months of 2022.

This translates to roughly 70 spam comments removed every single second.
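The per-second figure follows from quick back-of-the-envelope arithmetic:

```python
# 1.1 billion spam comments removed over the first six months of 2022
# (roughly 181 days).
removed = 1_100_000_000
seconds = 181 * 24 * 60 * 60  # ~15.6 million seconds in six months

rate = removed / seconds
print(round(rate))  # roughly 70 removals per second
```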

But even with that many violating comments removed, the company is dealing with lots more.

Google has been using machine-learning technology to help it thwart spammy comments. But since the comments keep on coming, it needs a better AI.

And this is where the company improved YouTube's machine-learning technology, to detect new types of spam faster than ever before.

"As spammers change their tactics, our machine learning models are continuously improving to better detect the new types of spam," said Google.

Google wants more than that.

Besides improving its AI, Google is giving users who troll other users a timeout: a temporary suspension that prevents them from commenting for up to 24 hours. If they continue their bad behavior and keep leaving abusive comments after that period, they may face stricter consequences.
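The escalation described above can be sketched as a simple strike-based policy. This is a hypothetical illustration, not YouTube's documented enforcement logic; the tiers and the `CommenterRecord` name are assumptions.

```python
from dataclasses import dataclass

TIMEOUT_HOURS = 24  # per the article: commenting blocked for up to 24 hours

@dataclass
class CommenterRecord:
    """Tracks abusive-comment strikes for one commenter (illustrative only)."""
    strikes: int = 0

    def register_abusive_comment(self) -> str:
        self.strikes += 1
        if self.strikes == 1:
            # First offense: temporary suspension from commenting.
            return f"timeout:{TIMEOUT_HOURS}h"
        # Repeat offenses after the timeout escalate further.
        return "stricter_consequences"

user = CommenterRecord()
print(user.register_abusive_comment())  # timeout:24h
print(user.register_abusive_comment())  # stricter_consequences
```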

Trolls on YouTube are advised to stop trolling, or face a permanent ban.

Do not feed the trolls

Google essentially improved YouTube's spam detection on comments and its bot detection in live chat.

"Our testing has shown that these warnings/timeouts reduce the likelihood of users leaving violative comments again," the Alphabet property said in a statement.

Previously, YouTube would only remove abusive comments, while the commenters faced no real repercussions.

YouTube not only wants to protect creators from trolls; it also wishes to offer transparency to people whose comments are removed.

The company does this by sending a message to those who have been caught by its system.

Initially, YouTube's abusive-comment and bot-detection features work only on comments in English, with Google planning to expand them to other languages when ready.

Published: 
17/12/2022