People use their phones to take photos and videos, send and receive private messages and files, and more.
Google, which operates Android, has a hard time controlling the mobile ecosystem while also trying to help the authorities solve crimes. In particular, the authorities want Google to provide them with information about crimes involving child sexual abuse material, or CSAM.
Because Google is not allowed to "read" messages or "view" photos and videos that pass through or are stored on its various platforms, the tech giant came up with another way: algorithms and machine-learning technology.
While the technologies keep getting better, they're not 100% foolproof.
And this was proven when a man was flagged by Google's anti-CSAM system and reported to the police.
The man was accused of violating the company's policy because he photographed his sick son's groin area for review by medical professionals, as reported by The New York Times.

The man, referred to only as Mark, is said to be a former software engineer in San Francisco.
At the time, he used his Android-powered smartphone to take pictures of his son's painful and swollen penis at the request of a nurse.
But unfortunately for him, two days later, Mark received a notification on his phone saying that his Google account had been disabled because of "harmful content" that was "a severe violation of Google's policies and might be illegal."
After Google closed Mark's Google account, the company then filed a report with the National Center for Missing and Exploited Children (NCMEC).
It was only ten months later that Mark received a letter from the San Francisco Police Department, saying that an investigator had served a search warrant on Google less than a week after Mark took the photos, requesting access to his internet searches, location history, messages, documents, photos, and videos, per The New York Times.
The investigator concluded "no crime occurred."
While the police determined that the parent hadn't committed any crime and closed the case, Google thought otherwise.
Google deactivated Mark's Google account, permanently.
Without his Google account, Mark could no longer access his Gmail emails.
Not only that: without his Google account, he also lost all the contact information he had stored with Google, the entire documentation of his son's first years of life, and the Google Fi connection he used to get online.
And because Mark was left without access to his old phone number and email address, he couldn't get the security codes he needed to sign in to his other internet accounts.
Inadvertently, Google locked him out of much of his digital life.
Mark said that he provided Google with the police report in a bid to reactivate his account, but his attempt was unsuccessful.
"The more eggs you have in one basket, the more likely the basket is to break," he said.
Google said in 2018 that it had developed an artificial-intelligence tool to detect and remove CSAM more quickly.
The procedure involves running the system to automatically scan every single image and video that passes through Google's systems. Whenever material is flagged by the AI, human moderators are tasked with reviewing it. If the material is determined to depict child sexual abuse, Google locks the user's account, searches for other exploitative material, and files a report with the nonprofit NCMEC.
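To make that flow concrete, here is a minimal, purely illustrative sketch of such a flag-then-review pipeline. Nothing in it comes from Google: the thresholds, function names, and the simplification of the human moderator to a second threshold are all assumptions made only so the sketch runs end to end.

```python
from dataclasses import dataclass

# Assumed cutoffs; the real values, if any exist in this form, are not public.
FLAG_THRESHOLD = 0.90     # score above which an image goes to human review
CONFIRM_THRESHOLD = 0.99  # stand-in for the moderator's judgment

@dataclass
class Upload:
    image_id: str
    owner_account: str
    model_score: float  # in a real system, computed by a trained classifier

def human_moderator_confirms(upload: Upload) -> bool:
    """Stand-in for the manual review step the article describes.

    Here it is just a second threshold so the sketch runs end to end;
    in reality this is a human decision, not a number.
    """
    return upload.model_score >= CONFIRM_THRESHOLD

def lock_account(account: str) -> None:
    print(f"[action] locking account {account!r}")

def report_to_ncmec(upload: Upload) -> None:
    print(f"[action] filing NCMEC report for image {upload.image_id!r}")

def process_upload(upload: Upload) -> None:
    # Step 1: every image passing through the system is scored automatically.
    if upload.model_score < FLAG_THRESHOLD:
        return  # not flagged; nothing further happens

    # Step 2: flagged material is escalated to a human moderator.
    if not human_moderator_confirms(upload):
        return  # the moderator overrules the model; a false positive ends here

    # Step 3: confirmed material locks the account and is reported.
    lock_account(upload.owner_account)
    report_to_ncmec(upload)

if __name__ == "__main__":
    process_upload(Upload("img-001", "user-a", model_score=0.40))   # ignored
    process_upload(Upload("img-002", "user-b", model_score=0.95))   # flagged, then dismissed
    process_upload(Upload("img-003", "user-c", model_score=0.999))  # flagged and confirmed
```

The point the article turns on is step 2: a false positive is supposed to end at the human moderator, which is exactly the safeguard that failed in Mark's case.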
In this case, the photos Mark took were detected as CSAM when they were backed up to Google Photos, Google's cloud storage for photos and videos.
Claire Lilley, Google's head of child-safety operations, told The New York Times that its human moderators had been trained to recognize instances where photos may have been taken for medical reasons. In Mark's case, however, they hadn't detected a medical issue, and the team had also separately discovered a video from six months earlier showing a young child in bed with a naked woman.
Mark said he couldn't recall taking that video, but said: "If only we slept with pajamas on, this all could have been avoided."
Because of this, Google concluded that it needed to stand by its decision and permanently suspend Mark's Google account.
It's worth noting that in 2021 alone, Google made more than 620,000 reports to the NCMEC, and disabled around 270,000 accounts linked to child sexual abuse content. That year, the nonprofit alerted authorities to more than 4,200 "potential new child victims."
In other words, Mark is not alone.
"Child sexual abuse material is abhorrent and we're committed to preventing the spread of it on our platforms," the company said.

According to the Electronic Frontier Foundation (EFF), an international nonprofit digital-rights group based in San Francisco, California, Google failed, and so did its employees.
"Google’s algorithms, and the employees who oversee them, had a different opinion about the photos. Without informing either parent, Google reported them to the government. That resulted in local police departments investigating the parents," the EFF said in its own website post.
"The company also chose to perform its own investigation. In the case of Mark, the San Francisco father, Google employees looked at not just the photo that had been flagged by their mistaken AI, but his entire collection of family and friend photos."
"Both the Houston Police Department and the San Francisco Police Department quickly cleared the fathers of any wrongdoing. But Google refused to hear Mark’s appeal or reinstate his account, even after he brought the company documentation showing that the SFPD had determined there was 'no crime committed.' Remarkably, even after the New York Times contacted Google and the error was clear, the company continues to refuse to restore any of Mark’s Google accounts, or help him get any data back."
Through its policies, Google can determine which users it wants to host. And through its privacy policy, the company states that it has the right to "share personal information outside of Google if we have a good-faith belief that access, use, preservation, or disclosure of the information is reasonably necessary" to help meet any applicable law, regulation, legal process, or enforceable governmental request.
In Mark's case, Google shared the data to "enforce applicable Terms of Service, including investigation of potential violations."
Unfortunately, Google's faulty algorithms and its failed human-review process caused innocent people, like Mark, to be investigated by the police.