3.2 Billion Fake Accounts On Facebook Removed Between April And September 2019

18/11/2019

As a free online service, Facebook inevitably attracts a lot of fake accounts.

In the first quarter of 2019, Facebook removed 2.2 billion fake accounts. Between April and September, it removed an additional 3.2 billion accounts as a result of improving its automated detection systems. Separately, the company estimates that about 5 percent of its 2.45 billion monthly active user accounts are fake.

In comparison, this is more than double the number of fake accounts taken down during the same period in 2018, when the company removed 1.55 billion accounts.

And for the first time, the world’s biggest social network also disclosed how many posts it had removed from popular photo-sharing app Instagram, which has been identified as a growing source of fake news and disinformation.

According to Facebook in its November 2019 Community Standards Enforcement Report:

"Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy."

To put it in numbers, the company said that it proactively detected content affiliated with terrorist organizations 98.5% of the time on Facebook and 92.2% of the time on Instagram.

Facebook said that it removed more than 11.6 million pieces of content depicting child nudity and sexual exploitation of children on Facebook, and 754,000 pieces of content on Instagram, during the third quarter of 2019.

The company said it made progress in detecting child nudity and sexual exploitation on Instagram, removing more than 1.2 million pieces of content between April and September.

The company also added suicide and self-injury as a category of harmful content.

Between April and September, the social giant said, it removed more than 4.5 million pieces of suicide and self-injury content from Facebook and more than 1.6 million pieces from Instagram.

The company also removed about 4.4 million pieces of content involving drug sales during the quarter, it said in a blog post.

How prevalent adult nudity and sexual activity violations were on Facebook. (Credit: Facebook)

"Our proactive rate remained above 99 percent for both quarters. Prevalence for fake accounts continues to be estimated at approximately 5 per cent of our worldwide monthly active users (MAU) on Facebook," said the company.

What this all means is that Facebook, which has been under pressure for "not doing enough" to moderate its platform, is becoming more aggressive in policing its platforms. The numbers show how the company has been improving its algorithms.

Facebook routinely provides updates on how it enforces its Community Standards, which are the rules that govern what kinds of content can get users banned from the platform.

With the world's eyes on it, Facebook is not taking chances. Having stumbled into numerous data-sharing controversies and leaks, the company wants to be more transparent about its enforcement decisions.

Previously, the company has also faced criticism for its failure to prevent U.S. election interference on the platform, including the spread of misinformation.

More recently, Facebook has come under fire for refusing to fact-check or remove political ads. The decision stood in sharp contrast to that of its competitor Twitter, which banned political ads from its platform. CEO Mark Zuckerberg defended the decision in the name of free speech.