
Could naked images of your children in Google Photos cost you access to your account? That appears to be what happened to a parent in San Francisco, and another in Texas. Both parents lost access to their Google accounts after the system flagged private images of their children as child sexual abuse material (CSAM). The story, reported by The New York Times, highlights the complicated terrain of user privacy and the CSAM-detection systems that companies like Google deploy.
Flagged for potential abuse
When Google flags content as CSAM, it reports it to the National Center for Missing & Exploited Children (NCMEC), which can then involve law enforcement agencies. A similar tale unfolded with another father in Texas. While police cleared both parents, Google has not restored their access.
This is not the first time Google's automatic scanning of photos for CSAM has made news. In 2020, Forbes reported that a warrant was issued against a Kansas-based artist after Google identified some of his artworks as CSAM.
According to Google’s own transparency report, it sent 458,178 reports about CSAM to the NCMEC in the US and reported over 3.2 million pieces of content between June and December 2021. It also disabled 140,868 accounts during the same period.
How Google scans for CSAM
Google states that it relies on “automated detection and human review, in addition to relying on reports submitted by our users and third parties, such as NGOs, to detect, remove, and report CSAM on our platforms.”
But it primarily relies on two technologies to scan for and tag CSAM. This covers photos, videos, and files uploaded to Google Drive, Google Photos, and other services, all of which are linked to your Google account.
The first technology is hash matching, which includes YouTube’s CSAI (Child Sexual Abuse Imagery) Match technology. CSAI Match is deployed on YouTube to fight videos of child abuse and can spot “re-uploads of previously identified child sexual abuse material in videos,” according to Google. The API can also be used by NGOs and other companies to find CSAM by matching it against Google’s databases. Essentially, every image Google identifies as potential CSAM is assigned a hash, a numeric value, which is then matched against hashes in an existing database. Google is not the only one using hash matching; companies such as Microsoft, Facebook, and Apple deploy similar techniques.
With hash matching, the company does not store the CSAM itself but a value, or hash, that represents the image, video, or other content in question. If another photo or video produces a matching hash, it is likely CSAM, and the content is flagged, as in the sketch below.
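For illustration, here is a minimal Python sketch of how such a hash lookup might work. Google’s actual systems use proprietary, perceptual hashing that survives re-encoding and edits; the plain SHA-256 digest used here only matches byte-identical files, and the hash list is a hypothetical stand-in for the databases maintained by Google and organisations such as the NCMEC.

```python
# Minimal sketch of hash matching, for illustration only. Google's actual
# systems (such as CSAI Match) use proprietary, perceptual hashes; a plain
# SHA-256 digest, used here, only matches byte-identical files.
import hashlib

# Hypothetical set of hashes of previously identified material. Real
# systems store hashes shared between providers and organisations like
# the NCMEC, never the content itself.
known_hashes: set[str] = set()

def file_hash(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: str) -> bool:
    """Flag an upload if its hash appears in the known-hash database."""
    return file_hash(path) in known_hashes
```

The point of the design is that only hashes, never the flagged content itself, need to be stored and compared.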
But Google also deploys machine learning tools to search for CSAM, which it first announced back in 2018. It notes that these “machine learning classifiers can discover never-before-seen CSAM”.
The technology relies on machine learning and deep neural networks for image processing, and its advantage is that it can find content that is not part of any hashed database. Google made the technology available for free to NGOs and its industry partners at the time of the announcement, and notes that content identified as CSAM by its machine learning technology is “then confirmed by our specialist review teams”.
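To make the idea concrete, here is a purely illustrative Python sketch of a classifier-plus-human-review pipeline. Google has not published its model architecture, threshold, or review tooling, so the pretrained ResNet backbone, the 0.9 cut-off, and the review queue below are assumptions chosen only to show the overall shape of such a system.

```python
# Illustrative sketch only: a generic binary image classifier whose
# high-scoring results go to a human review queue. None of this is
# Google's actual classifier.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Generic pretrained backbone with a single-logit head. In practice the
# head would be trained on labelled review data; here it is untrained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

REVIEW_THRESHOLD = 0.9        # hypothetical confidence cut-off
review_queue: list[str] = []  # images a human specialist would examine

def score(path: str) -> float:
    """Return the model's probability that an image needs review."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(img)).item()

def triage(path: str) -> None:
    """Queue high-scoring images for human review rather than acting automatically."""
    if score(path) >= REVIEW_THRESHOLD:
        review_queue.append(path)
```

The important design point, which Google itself emphasises, is that the classifier’s output feeds a human review step rather than triggering action on its own.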
It is not clear which of these technologies flagged the images in the reported cases, and Google has not revealed how accurate its AI tools are.
Google policy on the matter
Google’s policy page lists content that is banned and can cost you access to its services. For the definition of child pornography, Google defers to the one set by the US government: any image that is sexually exploitative of a minor (anyone under 18) counts as child pornography.
The page has a detailed section on CSAM and clearly states that users should not “create, upload, or distribute content that exploits or abuses children,” and that “this includes all child sexual abuse materials”. It also encourages users to report CSAM when they come across it.
Google also prohibits the use of its products “to endanger children”. The policy notes that using Google products to ‘groom children’ for sexual content, for sextortion (blackmailing or threatening a child), for the “sexualization of a minor”, or for the “trafficking of a child” is banned.
The policy also states that users should not “distribute content that contains sexually explicit material, such as nudity, graphic sex acts, and pornographic material. This includes driving traffic to commercial pornography sites”. It does, however, allow “nudity for educational, documentary, scientific, or artistic purposes.”
Delay in restoring access
That is what the debate is about. While CSAM remains a serious problem, the NYT report highlights how fighting it can mean navigating tricky waters. Balancing user privacy with fighting problematic content is clearly easier said than done.
Both parents took what were innocent pictures, and both lost access to their accounts; even after being cleared by the police, they have not had their accounts restored. For parents, the case could serve as an eye-opener: they may need to delete those accidental nude photos of their toddlers or babies.
In today’s world, where parents and grandparents feel the need to document a child’s every move, it is likely that some of these more candid moments end up being recorded and uploaded to the cloud, where they could later be marked as CSAM. And given how dependent we are on the cloud, losing access to a Google account also means losing access to photos, memories, emails, and more. Perhaps this is another reminder that users should keep some of their content offline in case the worst happens and their account access is taken away.