As the largest social media network, Facebook is also considered one of the most influential tech companies. That influence has brought it a seemingly endless stream of problems.
From fake news to privacy and security issues, the company has frequently made tech headlines for the wrong reasons. Facing those consequences, Facebook appears to have prioritized strategies to protect its brand by monitoring its own image and that of its top executives.
To keep negative chatter from eroding user trust in its products, Bloomberg reported, the social network has used specialized software to monitor the public opinions shared across its platforms.
The company reportedly relied on two programs: Stormchaser and Night’s Watch.
Citing former employees and internal documents, Bloomberg reported that Stormchaser has been used by Facebook employees since 2016 to track viral content ranging from "Delete Facebook" campaigns to claims that Zuckerberg is an alien.
In some cases, Stormchaser could also target users who shared content Facebook disliked with tailored messages debunking the claims.
Night’s Watch, on the other hand, allowed Facebook to see how information about the company spread across its platforms, including Instagram and WhatsApp.
For the latter, however, end-to-end encryption prevented Night's Watch from digging too deep. As a workaround, Facebook allegedly cross-referenced how users cited information from WhatsApp on Facebook to gauge virality.
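Bloomberg's report doesn't detail how this cross-referencing worked, but the basic idea can be sketched roughly: since WhatsApp messages themselves are encrypted, one could instead count how often public Facebook posts reference claims known to circulate on WhatsApp. The function name and data below are illustrative assumptions, not Facebook's actual implementation.

```python
# A rough, hypothetical sketch of the reported workaround: estimate the
# virality of WhatsApp-circulated claims by counting how often public
# Facebook posts mention them. All names and data here are illustrative.
from collections import Counter

def estimate_virality(facebook_posts, tracked_claims):
    """Count public posts that reference each tracked claim (case-insensitive)."""
    counts = Counter()
    for post in facebook_posts:
        text = post.lower()
        for claim in tracked_claims:
            if claim.lower() in text:
                counts[claim] += 1
    return counts

posts = [
    "Someone forwarded me a message saying Facebook sells your photos",
    "Heard Facebook sells your photos, is that true?",
    "Just sharing a recipe for banana bread",
]
print(estimate_virality(posts, ["facebook sells your photos"]))
```

Crude as it is, counting public echoes of a private rumor is a plausible proxy when the original channel can't be read directly.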
Facebook using special tools to gauge public reception is no surprise. Given that social media has become the go-to destination for many people on the web, Facebook was bound to make at least some attempts to control the narrative.
Arguably it has to, as Facebook needs to contain the misinformation that has plagued the platform. Without these kinds of tools, Facebook could be crushed under its own weight.
However, a spokeswoman said that Facebook didn’t actually use Stormchaser to fight false news:
"The tool was built with simple technology that helped us detect posts about Facebook based on keywords, so we could consider whether to respond to product confusion on our own platform. Comparing the two is a false equivalence."
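The "simple technology" the spokeswoman describes, detecting posts about Facebook based on keywords, could look something like the following. This is a minimal, hypothetical sketch; the keywords and function names are assumptions for illustration, not anything from the actual tool.

```python
# Hypothetical sketch of keyword-based post detection, along the lines the
# spokeswoman describes. The keyword list and names are illustrative only.
KEYWORDS = {"delete facebook", "deletefacebook", "zuckerberg"}

def mentions_facebook(post_text):
    """Return True if the post contains any tracked keyword (case-insensitive)."""
    text = post_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

posts = [
    "Time to #DeleteFacebook for good",
    "Look at this cute cat video",
]
flagged = [p for p in posts if mentions_facebook(p)]
```

As the quote suggests, simple keyword matching like this can surface product confusion, but it is a far cry from the machinery needed to identify and debunk false news.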
The biggest concern here is that Facebook may have the tools needed to help users, yet the company's priority appears to be defending its brand and its top executives.
Facebook has repeatedly said that it wants to curb misinformation. Whether or not the company is capable of doing so in any significant way, it doesn't appear to focus on that effort, since much of that content doesn't actually break Facebook's rules.
This is why the company is focusing its efforts elsewhere.
By employing fact-checkers, for example, Facebook can monitor content shared by its 2 billion users and demote it when necessary. But that doesn't mean Facebook wants to remove that content, even when it is misleading or inaccurate.
Content removal is reserved for special cases, such as when the content poses a safety threat.
In other words, Facebook is a public company with huge profits, and it wants to keep its throne. Facebook knows that, in one way or another, it is centralizing the web by governing people's opinions and the flow of information, yet the company can't really do much to prevent people from speaking in the first place.
Stopping people from speaking means curbing free speech. That is not an option, as it could create new problems and also translate into lost profit and credibility.
So here, Facebook is playing it safe, choosing to shield itself rather than fix the core of its problems.