TikTok Proposes A 'Global Coalition' To Help Protect Users From Harmful Content


Anything can be shared on the web.

Good and bad alike, it all becomes the content that populates the ever-growing web. That content is what makes the internet so interesting: it is informative, but at the same time, damaging.

When web content goes viral, many things can happen. If good things go viral, they may turn into memes. But when bad things go viral, long criticisms can follow. This can result in scrutiny and worse: it may force a shift in policy, a downturn for a company, or more.

The internet can be unforgiving, and TikTok is experiencing this.

When a graphic suicide video went viral on TikTok in September, the Chinese company defended itself by saying that it was "the result of a coordinated raid from the dark web."

Giving evidence to the Commons committee for digital, culture, media and sport (DCMS), Theo Bertram, TikTok’s European director of public policy, said that the video, which was originally broadcast on Facebook Live, was used in a “coordinated attack” on the social video app a week after it was originally recorded.

Some of the young faces of TikTok users who died while attempting to create a video, or because of depression linked to the app.

“We learned that groups operating on the dark web made plans to raid social media platforms, including TikTok, in order to spread the video across the internet,” Bertram said.

“What we saw was a group of users who were repeatedly attempting to upload the video to our platform, and splicing it, editing it, cutting it in different ways,” he added. “I don’t want to say too much publicly in this forum about how we detect and manage that, but our emergency machine-learning services kicked in, and they detected the videos.”

The man in the suicide video that went viral, Ronnie McNutt, born in May 1987, was remembered as "very caring, committed, loyal, dependable, and eccentric".

But a shotgun blast to the head ended his life instantly.

The footage is deeply disturbing, and TikTok's autoplay feature made it worse.

It took hours before TikTok managed to contain it.

But when the video was re-shared, TikTok lost control. And once the video spread to other platforms, virality was inevitable. The video could even run alongside ads, which attracted still more viewers.

Platforms seem powerless to stop the spread, echoing their multiple past failures to block acts of violence and disinformation.

The Christchurch mass shooting in New Zealand, for example, or Bianca Devins' death, among many more.

While the video may haunt people from the day they see it until the moment they sleep, others kept re-sharing it, keeping it viral. With McNutt's death widely shared on TikTok, prominent users stepped in by sharing advice on how to spot and avoid the video.

Read: India Bans TikTok Because The Platform Turns People Into Murderers

Ronnie McNutt.
Ronnie McNutt was said to have suffered from depression after losing his job and his girlfriend.

Policing a platform that has millions, or tens of millions, of users is difficult enough, let alone policing platforms that have hundreds of millions or even billions of users.

Things got worse during the COVID-19 coronavirus pandemic. In its 'Community Standards Enforcement Report, August 2020', Facebook admitted that its army of human contractors, whose job is to review sexual and violent content all day, had been partly sidelined by the crisis.

"Due to the COVID-19 pandemic, we sent our content reviewers home in March to protect their health and safety and relied more heavily on our technology to help us review content," Facebook said in the report.

This is why TikTok proposed a “global coalition” to help protect users against such harmful content.

“Last night, we wrote to the CEOs of Facebook, Instagram, Google YouTube, Twitter, Twitch, Snapchat, Pinterest and Reddit,” Bertram said, “and what we are proposing is that, in the same way these companies already work together around [child sexual abuse imagery] and terrorist-related content, we should now establish a partnership around dealing with this type of content.”

With this kind of partnership, tech companies can share technical details of the graphic video, so others can stop it from being uploaded and re-shared.
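Such partnerships typically work by exchanging digital fingerprints (hashes) of known harmful files rather than the files themselves. As a rough, hypothetical sketch of the idea (the names below are illustrative; real industry systems use perceptual hashing that survives re-encoding and editing, which a plain cryptographic hash does not):

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Compute a cryptographic fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()


# A shared blocklist of fingerprints, as might be exchanged between platforms.
shared_blocklist = {fingerprint(b"known-harmful-video-bytes")}


def should_block(upload: bytes) -> bool:
    """Reject an upload whose fingerprint matches the shared blocklist."""
    return fingerprint(upload) in shared_blocklist
```

Because each platform only shares fingerprints, no platform has to redistribute the harmful content itself in order to help others block it.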

That could in turn help smaller tech companies by easing the burden on moderation.