What Social Media Can Do To Stop Hateful And Harmful Content From Plaguing The Web

The internet is good for many reasons. But it can also be bad for just as many.

With technology available in more places, more people have been given a voice. Thoughts are no longer restricted to traditional media publishing; with social media platforms, everyone with an internet connection can deliver their opinions.

In good ways and bad.

While the web in general has documented human knowledge, it has also created toxic environments where hate speech and fake news spread like wildfire.

Other risks of the web are associated with terrorism and other harmful content, such as pornography, suicide and pedophilia.

For many reasons, content like this can go viral, spreading across the web very quickly.

The scapegoats are usually the big platforms: Google, YouTube, Facebook, Twitter, Instagram and some others. While they have all committed to removing illegal and hateful content, in many cases they have failed to do so.

With the huge amount of content generated by web users, monitoring bad content is difficult.

Gone are the days when humans alone could police the internet, as the staggering amount of information is too much for any group of people to handle. This is where computers powered by machine intelligence have proven themselves handy.

But still, things may slip through the filtering process. As a result, the web is never as clean as some people would expect.

So what can platforms do to take down extremist and hateful content immediately after an incident or tragedy?

Limit Users' Ability To Share

A no-brainer: the less people can share, the less misinformation and the fewer hoaxes will spread.

Sharing is a fundamental part of social media. The platforms actively encourage users to share and create new content, which is essential to their business model. More viral content means more visitors, which benefits those companies' revenue.

But here is the risk: hateful content is created and shared on these mainstream platforms precisely so it can reach large audiences as quickly as possible. And during tragic incidents, fake news spikes from one platform to another, overwhelming the platforms' filtering processes.

To prevent this from happening, social media networks should at least limit the number of times specific content can be shared within their sites.

This particular strategy has been adopted by WhatsApp, which limits the number of times a message can be forwarded.
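
To illustrate the idea, here is a minimal sketch of how a forward cap might be enforced. The class, the delivery stub and the cap of 5 are hypothetical placeholders, not WhatsApp's actual implementation:

```python
MAX_FORWARDS = 5  # hypothetical cap; the real limit is a product decision

class Message:
    def __init__(self, content):
        self.content = content
        self.forward_count = 0  # times this message has been forwarded so far

def deliver(message, recipient):
    # Stand-in for the platform's actual delivery pipeline.
    print(f"Delivered {message.content!r} to {recipient}")

def forward(message, recipients):
    """Forward a message, refusing once the cap is reached."""
    if message.forward_count >= MAX_FORWARDS:
        raise PermissionError("This message has been forwarded too many times")
    message.forward_count += 1
    for recipient in recipients:
        deliver(message, recipient)
```

After the fifth forward of the same message, any further attempt fails instead of spreading it to yet another audience, which is the whole point of the cap.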

Create More Sensitive Hate Detection Tools

Just like humans, computers too have a margin of error.

AI is still actively being developed, and like other technologies that came before it, there can be hiccups here and there. Developers have to decide how many false negatives and false positives they are happy with.

False negatives are hateful content that is allowed to be shared anyway, while false positives are non-hateful content that gets blocked.

There is always a trade-off between the two, and this is where a proper balance is needed.

The only way to truly ensure that no bad content goes online is to ban all uploads, but that would be a mistake. A better way is to adjust the sensitivity of the algorithms so that people can still share content, while the platforms catch a lot more of the hateful material before it is posted or shared.
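
As a toy illustration of that trade-off, consider a handful of posts scored by a hypothetical classifier. The scores and labels below are made up; a real system would use a trained model over millions of posts:

```python
posts = [
    # (model score for "hateful", actually hateful?)
    (0.95, True), (0.80, True), (0.55, True),
    (0.60, False), (0.30, False), (0.10, False),
]

def evaluate(threshold):
    """Count both error types at a given blocking threshold."""
    false_positives = sum(1 for score, hateful in posts
                          if score >= threshold and not hateful)
    false_negatives = sum(1 for score, hateful in posts
                          if score < threshold and hateful)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")

# A lower threshold blocks more hate (fewer false negatives) but also
# blocks more harmless posts (more false positives), and vice versa.
```

Running this shows no single threshold eliminates both error types; picking one is exactly the balancing act described above.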

Faster And Easier Takedowns

While social media platforms' filtering process is powered by both humans and AI, bad content that does get through can be flagged by users.

When content is flagged, it is sent for manual review by a human content moderator, who checks whether it violates the platform's policies and guidelines. Content moderation is a fundamentally difficult business.

This is mainly because platforms aim to minimize inaccurate reviews, to avoid public criticism and complaints.

Considering the amount of content the web generates, manual moderation can be a daunting task.

Algorithms should play a bigger role here, as they can pre-screen flagged material for human moderators and forward it only when necessary.

This can be especially useful during incidents and tragedies. Platforms could introduce special procedures so that AI quickly filters hateful content, with human moderators working alongside the technology to get through the backlog with less of a performance hit.
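
A minimal sketch of what such a triage step might look like, assuming a model that scores flagged items between 0 and 1. The thresholds and names here are hypothetical, not any platform's actual policy:

```python
AUTO_REMOVE_AT = 0.9   # model is highly confident the content is hateful
HUMAN_REVIEW_AT = 0.4  # uncertain band that still needs human judgment

def triage(model_score):
    """Route a user-flagged item based on the model's confidence."""
    if model_score >= AUTO_REMOVE_AT:
        return "remove"        # clear-cut violation, removed without a human
    if model_score >= HUMAN_REVIEW_AT:
        return "human_review"  # only these reach the moderation queue
    return "dismiss"           # very likely a mistaken or bad-faith flag

print(triage(0.95))  # remove
print(triage(0.60))  # human_review
print(triage(0.10))  # dismiss
```

During a crisis, a platform could lower the thresholds so more content is handled automatically, keeping the human queue to the genuinely ambiguous cases.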

Shared Database Between Platforms

Similar content is often shared on multiple platforms. This is why the platforms should have a unified way to prevent the bad from passing through.

Google, YouTube, Facebook, Twitter and others have very similar guidelines on what is considered “hate”, and they all try to take down the same types of content following attacks. To streamline the filtering process, those companies should have a shared database of hateful content.

This would ensure that content removed from one site is automatically and quickly banned from the others.

This, in turn, would not only avoid needless duplication, but also let the platforms devote their resources to the genuinely challenging content that is hard to detect.
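
Conceptually, such a database is just a shared set of fingerprints that every platform checks at upload time. Here is a minimal sketch, using SHA-256 as a simplified stand-in; real hash-sharing systems use perceptual hashes so that re-encoded or slightly altered copies still match:

```python
import hashlib

shared_hashes = set()  # in practice an industry-run service, not a local set

def fingerprint(content: bytes) -> str:
    # SHA-256 only catches byte-identical copies; perceptual hashing
    # would be needed to catch re-encoded images and videos.
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """Called by the platform that first removes a piece of content."""
    shared_hashes.add(fingerprint(content))

def is_known_bad(content: bytes) -> bool:
    """Called by every participating platform at upload time."""
    return fingerprint(content) in shared_hashes

# One platform removes a video; the others can now block re-uploads instantly.
report_removed(b"...video bytes...")
print(is_known_bad(b"...video bytes..."))  # True
```

In practice, a match would likely feed into each platform's own review process rather than trigger blind removal, since the platforms' policies are similar but not identical.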

Removing hateful content should be seen as an industry-wide effort, not a problem any individual platform should face alone. While shared databases exist, more work is needed to broaden their scope.

In the long term, platforms need to keep investing in content moderation and in developing advanced systems that seamlessly integrate human moderators and machine learning.

Humans can make mistakes, and so can computers. But with the two working together, they should be able to offset each other's weaknesses and turn them into strengths.

Conclusion

There are many types of people in this world, and between the good and the bad, the internet is simply a representation of them.

What this means is that, with the web becoming available to more people, there is no way of guaranteeing a safe internet.

What websites, blogs and especially big internet companies can do is moderate their respective platforms to ensure that bad content is eliminated, or at least contained. And to make this possible, everyone involved should willingly take part.

The web is full of knowledge. It powers free speech and delivers people's opinions at lightning speed across the globe, in a way otherwise impossible.

So it's wise to maintain this ecosystem, preventing toxic content from plaguing the knowledge the internet has to offer.

Read: Between Fake News And Social Media, There Is People Actively Sharing Things