Google And Facebook Ad Networks Had A Child Porn Problem: Scrambling For A Fix

"If there is a user-generated platform that exists, it is actively being manipulated right now."

That is what Imgur founder and CEO Alan Schaaf said, and he isn't wrong. As long as people keep posting things on the web, both the good and the bad will spread.

This time, evidence shows that advertisements for major brands were placed in "child abuse discovery apps" through Google and Facebook's ad networks.

Both Google and Facebook are titans of the web. Each operates multiple services, many with user bases far surpassing those of other internet-based companies, and the two face increasing scrutiny after repeatedly failing to police their platforms.

In this case, the apps involved used to be available on Google's Play Store for Android devices, and could direct users to WhatsApp groups containing the illegal content.

An investigation conducted before and after Christmas by AntiToxin Technologies, an Israeli child protection startup, together with two NGOs (non-governmental organizations) from the country, found that Google's and Facebook's automated advertising technology had placed ads for household names in a total of six apps that let users search for WhatsApp groups to join.

This function is apparently missing from the WhatsApp app itself.

But using third-party software, it was possible for users to search for groups containing inoffensive material, as well as groups with illegal content.

For example, a quick search for the word "child" brought up a number of links through which users could join WhatsApp groups meant for sharing illegal pictures and videos. These groups were listed under different names in the WhatsApp app itself to make them harder to detect.

Brands whose ads were inadvertently shown included Amazon, Microsoft, Motorola, Sprint, Sprite, Western Union, Dyson, DJI, Gett, Yandex Music, Q Link Wireless, TikTok and more, according to TechCrunch.

"The link-sharing apps were mind-bogglingly easy to find and download off of Google Play," explained Roi Carthy, AntiToxin's chief marketing officer to the BBC. "Interestingly, none of the apps were to be found on Apple's App Store, a point which should raise serious questions about Google's app review policies."

Six of these apps ran Google AdMob, one ran Google Firebase, two ran Facebook Audience Network, and one ran StartApp, a firm specializing in online ads.

The problem is that both Google's and Facebook's ad networks failed to block ads from appearing alongside that content, meaning the two companies inadvertently earned a portion of the revenue from ads hosted on the third-party apps used to search for illegal content.


Google, which operates the Play Store and is also the company behind Android, and Facebook, which owns WhatsApp, said they have taken steps to address the problem. Google has removed the apps from its Play Store, and both companies have cut off the apps' income source by removing them from their ad networks.

But the National Society for the Prevention of Cruelty to Children (NSPCC) wants new regulation to monitor these efforts.

The charity, which campaigns for and works in child protection in the UK and the Channel Islands, believes a watchdog should impose large fines on technology companies, pushing them to hire more staff to tackle the issue.

"WhatsApp is not doing anywhere near enough to stop the spread of child sexual abuse images on its app," said Tony Stower, head of internet safety at the child protection campaign. "For too long tech companies have been left to their own devices and failed to keep children safe."

One reason WhatsApp has been slow to police its platform is that it uses end-to-end encryption, a technology that ensures only the members of a group can see its contents.

But since group names and profile pictures are viewable, WhatsApp moderators scrambled to police the service once the issue came to light. As a result, earlier in December, the company revealed that it had terminated about 130,000 accounts over a 10-day period.

With the problem said to be solved, both Google and Facebook said they intend to reimburse affected advertisers.

Even so, the NSPCC maintains that the affected brands should hold the two tech companies to account.

"It should be patently obvious that advertisers must ensure their money is not supporting the spread of these vile images," said the charity's spokeswoman.

Published: 29/12/2018