Explained: How Apple will scan for child exploitation images on devices, and why it is raising eyebrows

Expected to go live in the United States initially, the features include the use of new technology to limit the spread of CSAM online, especially via Apple's platforms.

Written by Nandagopal Rajan, Edited by Explained Desk | New Delhi |
Updated: August 13, 2021 3:06:06 pm

Apple has announced that software updates later this year will bring new features that will “help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM)”.

The features, expected to go live in the United States initially, include new technology to limit the spread of CSAM online, especially via Apple's platforms.

There will also be on-device protection for children against sending or receiving sensitive content, with mechanisms to alert parents if the user is below the age of 13. Apple will also intervene when Siri or Search is used to look up CSAM-related topics.

What is Apple doing to prevent the spread of CSAM online?

In a blog post, Apple explained that it will use cryptography applications via iOS and iPadOS to match known CSAM images stored in iCloud Photos. The technology will match images on a user’s iCloud with known images provided by child safety organisations. This is done without actually seeing the image, only by looking for what amounts to a fingerprint match. If matches cross a threshold, Apple will “report these instances to the National Center for Missing and Exploited Children (NCMEC)”.
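To give a concrete, simplified sense of what “fingerprint” matching means, the sketch below (in Python, using the Pillow imaging library) reduces a photo to a short average-hash and checks it against a set of known hashes. It is only an illustration of the general idea; Apple’s actual system uses its own NeuralHash technology and an encrypted database, and the sample hash value here is a made-up placeholder.

# A minimal average-hash ("aHash") sketch to illustrate fingerprint-style
# image matching. This is NOT Apple's NeuralHash; it only shows the general
# idea of reducing an image to a compact hash and comparing it against a
# database of known hashes. Requires the Pillow library (pip install Pillow).

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then set one bit per pixel depending
    on whether that pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known images (in reality the hashed
# database is supplied by child safety organisations).
KNOWN_HASHES = {0x81C3E7FF7E3C1800}  # placeholder value

def is_match(path: str, max_distance: int = 5) -> bool:
    h = average_hash(path)
    return any(hamming_distance(h, k) <= max_distance for k in KNOWN_HASHES)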

Apple clarified that its technology keeps user privacy in mind, and hence the database is transformed into “an unreadable set of hashes that is securely stored on users’ devices”. It added that before any image is uploaded to iCloud, the operating system will match it against the known CSAM hashes using a “cryptographic technology called private set intersection”. This technology determines whether there is a match without revealing the result.
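Private set intersection is a family of cryptographic techniques that let two parties compare items without handing over their raw lists. The toy Python sketch below shows one textbook Diffie-Hellman-style construction, purely as an illustration: both sides blind their hashed values with secret exponents, so a match can be detected without either side seeing the other’s underlying data. Apple’s actual protocol is more elaborate and, unlike this simplified version, is also designed so that the match result is not revealed to the user’s device; the prime, the sample values and the variable names here are all illustrative.

# Toy Diffie-Hellman-style private set intersection (PSI) sketch.
# Generic textbook construction for illustration only; not Apple's protocol.
# Requires Python 3.8+ for pow(x, -1, m).

import hashlib
import secrets
from math import gcd

P = 2**127 - 1  # toy prime modulus; a real system would use a standard large group

def hash_to_group(item: bytes) -> int:
    """Map an arbitrary item to a group element (toy version)."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

# --- Server side (holds the database of known hashes) ---
server_key = secrets.randbelow(P - 2) + 2
server_items = [b"known-image-hash-1", b"known-image-hash-2"]
blinded_db = {pow(hash_to_group(s), server_key, P) for s in server_items}

# --- Client side (holds one item to test) ---
client_item = b"known-image-hash-2"
while True:
    r = secrets.randbelow(P - 2) + 2
    if gcd(r, P - 1) == 1:
        break
msg_to_server = pow(hash_to_group(client_item), r, P)

# --- Server blinds the client's value with its own key ---
reply = pow(msg_to_server, server_key, P)

# --- Client strips its own blinding factor and checks membership ---
r_inv = pow(r, -1, P - 1)           # modular inverse of r mod (P-1)
unblinded = pow(reply, r_inv, P)    # equals hash_to_group(item)^server_key
print("match" if unblinded in blinded_db else "no match")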

At this point, the device creates a “cryptographic safety voucher” with the match result and additional encrypted data, and uploads it to iCloud Photos along with the image. Threshold secret sharing technology ensures that these vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. This threshold, the blog claimed, has been put in place to “provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account”. So a single image is unlikely to trigger an alert.
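The “threshold” mechanism can be pictured with a classic secret-sharing scheme: a secret is split into shares in such a way that fewer shares than the threshold reveal nothing useful, while the threshold number or more reconstructs it exactly. The Python sketch below uses textbook Shamir secret sharing to illustrate the principle; the threshold of 30 and the field size are arbitrary choices for the example, not figures disclosed by Apple.

# Minimal Shamir secret-sharing sketch: the secret (think of it as the key
# needed to open the safety vouchers) can only be recovered once at least
# `threshold` shares -- i.e. at least that many matching vouchers -- exist.
# Generic textbook scheme, not Apple's actual construction.

import secrets

PRIME = 2**127 - 1  # field modulus for the toy example

def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` into points on a random polynomial of degree
    threshold-1; any `threshold` of them recover the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret_key = secrets.randbelow(PRIME)
shares = make_shares(secret_key, threshold=30, num_shares=100)

# Below the threshold, reconstruction yields garbage; at or above it,
# the secret comes back exactly.
assert recover_secret(shares[:30]) == secret_key
assert recover_secret(shares[:29]) != secret_key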

But if the threshold is exceeded, Apple can interpret the contents of the safety vouchers, manually review each report to confirm a match, disable the user’s account, and send a report to NCMEC. Apple said users will be able to appeal if they believe they have been wrongly flagged.

How do the other features work?

Apple’s new communication safety feature for Messages will blur a sensitive image and warn a child about the nature of the content. If the parental notification option is enabled, the child could also be told that their parents have been alerted about the message they have viewed. The same will apply if the child decides to send a sensitive message. Apple said Messages will use “on-device machine learning to analyse image attachments and determine if a photo is sexually explicit” and that Apple will not get access to the messages. The feature will come as an update for accounts set up as families in iCloud, on the latest operating system versions.
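As a rough illustration of the kind of on-device decision flow being described, here is a hypothetical Python sketch. The classifier is a stand-in (Apple has not published its model or any API for it), and the age cut-off and option names simply mirror the description above rather than any real interface.

# Hypothetical sketch of an on-device Messages safety flow.
# The classifier call is a placeholder, not Apple's model.

from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parental_alerts_enabled: bool  # set by the parents in the family account

def classify_explicit(image_bytes: bytes) -> float:
    """Placeholder for an on-device ML model returning a probability
    that the image is sexually explicit."""
    raise NotImplementedError("stand-in for the on-device model")

def handle_incoming_image(image_bytes: bytes, account: ChildAccount,
                          threshold: float = 0.9) -> dict:
    """Decide what the Messages UI should do with an incoming image."""
    score = classify_explicit(image_bytes)
    if score < threshold:
        return {"blur": False, "warn_child": False, "notify_parents": False}
    # Sensitive content: blur it, warn the child, and -- only for children
    # under 13 with alerts enabled -- note that parents will be notified
    # if the image is viewed.
    notify = account.age < 13 and account.parental_alerts_enabled
    return {"blur": True, "warn_child": True, "notify_parents": notify}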
Also, with the update, when a user tries to look up potential CSAM topics, Siri and Search will explain why this could be harmful and problematic. Users will also get guidance on how to file a report on child exploitation if they ask for it.

Why is Apple doing this and what are the concerns being raised?

Big tech companies have for years been under pressure to crack down on the use of their platforms for the exploitation of children. Many reports over the years have underlined that not enough was being done to stop technology from making CSAM more widely available.

However, Apple’s announcement has been met with criticism, with many pointing out that this is exactly the kind of surveillance technology many governments would want to have and would love to misuse. The fact that this has come from Apple, which has long been a votary of privacy, has surprised many.

Also, cryptography experts like Matthew Green of Johns Hopkins University have expressed fears that the system could be used to frame innocent people by sending them images designed to trigger matches for CSAM. “Researchers have been able to do this pretty easily,” he told NPR, adding that it is possible to fool such algorithms.

But The New York Times quoted Apple’s chief privacy officer Erik Neuenschwander as saying these features will not mean anything different for regular users.

“If you’re storing a collection of CSAM material, yes, this is bad for you,” he told the publication.

Do other big tech companies have similar technologies?

Yes. In fact, Apple is a relatively late entrant on the scene as Microsoft, Google and Facebook have been alerting law enforcement agencies about CSAM images. Apple has been lagging behind because any such technology would have gone against its much-touted commitment to user privacy. As a result, in 2020, when Facebook reported 20.3 million CSAM violations to the NCMEC, Apple could report only 265 cases, The New York Times reported.

It is only now that it has found the technological sweet spot to do this without impacting regular users, or at least without spooking them. However, as the initial backlash has shown, it is still a tightrope walk.
