Under His Eye

Gina Costanza Johnson
8 min read · Aug 30, 2021

Apple Just Wants To Do What’s Best For Us And Our Children

I knew this day would come. It was inevitable.

Apple has officially announced the launch of an iPhone update meant to protect children from predators who use communication tools to recruit and exploit them, and to limit the spread of Child Sexual Abuse Material (CSAM). Apple says its Messages app will use on-device machine learning to warn about sensitive content without making private communications readable by the company. The matching tool, which Apple calls NeuralHash, will detect known images of child sexual abuse without decrypting people’s messages. If it finds a match, the image will be reviewed by a human, who can notify law enforcement if necessary.
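
To make the mechanics concrete, here is a rough Swift sketch of the matching idea, assuming a perceptual hash is derived on-device and compared against a database of known-CSAM hashes. Every name is hypothetical; the real NeuralHash model and database format are not public.

```swift
import Foundation

typealias PerceptualHash = Data

// Stand-in for the real perceptual-hashing model; here we just truncate bytes.
func perceptualHash(of photo: Data) -> PerceptualHash {
    return Data(photo.prefix(16))
}

// Only the hash is compared against the on-device database of known-CSAM
// hashes; a positive result is what later gets routed to human review.
func matchesKnownCSAM(_ photo: Data, knownHashes: Set<PerceptualHash>) -> Bool {
    return knownHashes.contains(perceptualHash(of: photo))
}
```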

One thing Apple is emphasizing is that its new program is ambitious and that “protecting children is an important responsibility.” With that in mind, Apple says that its efforts will “evolve and expand over time.”

Therein lies the problem.

“A world where you know that you’re being surveilled all the time has far-ranging implications.” ~Tim Cook

Privacy Imposters

While child exploitation is a serious problem, and while efforts to combat it are almost unquestionably well-intentioned, Apple’s new update introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products.

As seen in films like The Social Dilemma, we can all agree that child exploitation on social media poses a serious threat to our children’s safety and well-being. On trend, Apple has strategically positioned itself as a technology hero with a mission to help mitigate and eradicate child exploitation, not only online but now on the device itself.

Image Scanning: Decreasing Privacy

Apple’s plan for scanning photos uploaded to iCloud Photos is similar in some ways to Microsoft’s PhotoDNA. The main product difference is that Apple’s scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed with the operating system (OS), the processed images are transformed so that users cannot see what they depict, and matching is done on those transformed images using private set intersection, so the device does not learn whether a match has been found. This means that when the features roll out, a version of the NCMEC CSAM database will be loaded onto every single iPhone. The results of the matching will be sent up to Apple, but Apple can only tell that matches were found once the number of matching photos crosses a preset threshold.

Once a certain number of matches is detected, the photos in question will be sent to human reviewers within Apple, who determine whether the photos are in fact part of the CSAM database. If the human reviewer confirms them, those photos will be sent to NCMEC and the user’s account will be disabled. The bottom line: whatever privacy and security measures appear in the technical details, every photo uploaded to iCloud will be scanned.
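
Setting aside the cryptography (the private set intersection and threshold secret sharing that hide individual results), the threshold behavior amounts to something like the sketch below. The types, names, and threshold value are illustrative, not Apple’s.

```swift
import Foundation

struct SafetyVoucher {
    let photoID: UUID
    let matchedKnownHash: Bool   // in the real design this is cryptographically hidden
}

struct ThresholdGate {
    let threshold: Int
    private var matchedPhotoIDs: [UUID] = []

    init(threshold: Int) { self.threshold = threshold }

    mutating func ingest(_ voucher: SafetyVoucher) {
        if voucher.matchedKnownHash { matchedPhotoIDs.append(voucher.photoID) }
    }

    // Nothing is escalated until the preset threshold is crossed; only then
    // would the matched photos be handed to human reviewers.
    var photosForHumanReview: [UUID] {
        matchedPhotoIDs.count >= threshold ? matchedPhotoIDs : []
    }
}
```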

Make no mistake: This is a decrease in privacy for all iCloud Photos users, not an improvement.

How The Feature Works

There are two main features that are set to be installed on every Apple device.

  1. A scanning feature that will scan all photos as they are uploaded to iCloud Photos to see whether they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC).
  2. A feature that scans all iMessage images sent or received by child accounts (that is, accounts designated as owned by a minor) for sexually explicit material and, if the child is young enough, notifies the parent when these images are sent or received. Parents can turn this feature on or off. (A rough sketch of the two pipelines follows below.)
iPhone Messages will warn children and their parents when receiving or sending sexually explicit photos.
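
The two features run as separate pipelines on the device. A minimal sketch of that split, using entirely hypothetical names rather than any real Apple API:

```swift
import Foundation

enum Pipeline {
    case iCloudHashMatch     // feature 1: compare against the NCMEC-derived hash database
    case messageClassifier   // feature 2: on-device ML classifier, child accounts only
    case notScanned
}

func pipeline(isICloudPhotoUpload: Bool, isMessageImageOnChildAccount: Bool) -> Pipeline {
    if isICloudPhotoUpload { return .iCloudHashMatch }
    if isMessageImageOnChildAccount { return .messageClassifier }
    return .notScanned
}
```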

Do We Trust Apple’s Intentions?

No. As Ben Thompson writes at Stratechery, the issue isn’t whether Apple is only sending notifications to parents or restricting its searches to specific categories of content. It’s that the company is searching through data before it leaves your phone.

“Instead of adding CSAM scanning to iCloud Photos in the cloud that they own and operate, Apple is compromising the phone that you and I own and operate, without any of us having a say in the matter. Yes, you can turn off iCloud Photos to disable Apple’s scanning, but that is a policy decision; the capability to reach into a user’s phone now exists, and there is nothing an iPhone user can do to get rid of it.”

CSAM is illegal. Period. But now that Apple has set this precedent, it will almost certainly face calls to expand it. And if Apple later rolls out end-to-end encryption for iCloud, something it has reportedly considered doing, it will have laid out a possible roadmap for getting around E2EE’s protections.

This is the segue to the larger issue at hand: Apple has the power to modify already weakened safeguards, and the concern is that the system is so easy to change. Historically, Apple has remained steadfast in previous clashes with governments; it famously defied a Federal Bureau of Investigation demand for data from a mass shooter’s iPhone. But it has acceded to other requests, such as storing Chinese iCloud data locally, even if it insists it hasn’t compromised user security by doing so.

Stanford Internet Observatory director Alex Stamos also questioned how well Apple had worked with the larger community of encryption experts, saying that the company had declined to participate in a series of discussions about safety, privacy, and encryption. “With this announcement they just busted into the balancing debate and pushed everybody into the furthest corners with no public consultation or debate,” he tweeted.

Machine Learning and Parental Notifications

Whether sending or receiving explicit content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user’s shoulder. In the case of under-13s, that’s essentially what Apple has given parents the ability to do.
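
Reduced to a sketch, and assuming the behavior described above (a warning first, no notification if the child declines, parental alerts only for under-13 accounts with the feature enabled), the notification decision looks something like this; the names are made up:

```swift
import Foundation

// Hypothetical event type capturing the cases described above.
struct FlaggedImageEvent {
    let childAge: Int
    let parentalNotificationsEnabled: Bool
    let childChoseToProceed: Bool   // the child tapped through the warning
}

func parentIsNotified(for event: FlaggedImageEvent) -> Bool {
    // Declining keeps the event private, regardless of age.
    guard event.childChoseToProceed else { return false }
    // Only younger children with notifications enabled trigger a parent alert.
    return event.childAge < 13 && event.parentalNotificationsEnabled
}
```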

It is also important to note that Apple has chosen the notoriously difficult-to-audit technology of machine-learning classifiers to determine what constitutes a sexually explicit image. Years of documentation and research show that machine-learning technologies, used without human oversight, have a habit of wrongly classifying content, including supposedly “sexually explicit” content. When the blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery, including pictures of Pomeranian puppies and selfies of fully clothed individuals. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid. These filters have a history of chilling expression, and there is every reason to believe Apple’s will do the same.

Since the detection of a “sexually explicit image” will use on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage “end-to-end encrypted.” Apple and its proponents may argue that scanning before a message is encrypted or after it is decrypted keeps the “end-to-end” promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company’s stance toward strong encryption.
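
The order of operations is the whole argument: the scan happens on the plaintext before encryption, so the “end-to-end” label only describes the transport. A hedged sketch, with a stubbed-out classifier standing in for the real model:

```swift
import CryptoKit
import Foundation

// Stand-in for the on-device classifier; always "clean" in this sketch.
func classifierFlagsAsExplicit(_ image: Data) -> Bool {
    return false
}

func sendImage(_ image: Data, with key: SymmetricKey) throws -> Data {
    // 1. The plaintext is scanned on the device first; this is where warnings
    //    or parental notifications would originate.
    let flaggedBeforeEncryption = classifierFlagsAsExplicit(image)
    print("flagged before encryption: \(flaggedBeforeEncryption)")

    // 2. Only then is the image encrypted for transport, so the "end-to-end"
    //    label describes the wire, not the whole pipeline.
    let sealed = try AES.GCM.seal(image, using: key)
    return sealed.combined!
}
```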

Why So Defensive? And, Here We Go Again.

Reuters reports that an internal Slack channel dedicated to the new feature has received over 800 messages, many of them expressing concern about how the new features could be abused and expanded.

On August 12th, Reuters reported that Apple’s internal Slack had hundreds of messages from employees who were concerned that the CSAM scanner could be exploited by other governments and that Apple’s reputation for privacy was being damaged. A reactive PR push from Apple followed. Craig Federighi, Apple’s senior vice president of software engineering, talked to the Wall Street Journal in a slickly produced video. Apple then released a security threat model review of its child safety features that included some new details about the process and how Apple intends to ensure the system is used only for its stated purpose.

Apple’s Source Image Correctness Requirement

Apple claims that its databases will be provided by at least two separate, non-government child safety organizations, to prevent any one government from inserting images that are not CSAM but that it might want to scan its citizens’ phones for. Apple believes that this, combined with its stated refusal to abide by any government’s demand that the system be used for anything except CSAM, and the fact that matches will be reviewed by an Apple employee before being reported to anyone else, will be sufficient protection against users being scanned and punished for anything but CSAM.
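
Assuming the cross-check works the way Apple describes, only hashes that appear in multiple independent providers’ databases would ship to devices, roughly like this sketch (hypothetical names):

```swift
import Foundation

typealias ImageHash = Data

// Only hashes present in every provider's database survive, so a single
// provider (or a government leaning on one) cannot insert an arbitrary hash.
func buildOnDeviceDatabase(from providerDatabases: [Set<ImageHash>]) -> Set<ImageHash> {
    guard providerDatabases.count >= 2, let first = providerDatabases.first else { return [] }
    return providerDatabases.dropFirst().reduce(first) { $0.intersection($1) }
}
```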

Surveilling Kids. Why Not?

Apple is moving forward with the update regardless of customers’ concerns or their comfort level with Apple embedding technology into their devices that scans data they consider private and sensitive. Yes, other services scan their users’ photos for CSAM too, but doing it on the device itself is a line most people don’t want or expect Apple to cross. After all, Apple spent years convincing them that it never would. Secure messaging is no longer a given.

“The move is a shocking about-face for users who have relied on the company’s leadership in privacy and security,” ~India McKinney and Erica Portnoy of the Electronic Frontier Foundation

Tech Does What Tech Wants To Do.

Apple has acknowledged objections to the new update but has not indicated any intention to modify or abandon its plans. On August 5th, an internal memo acknowledged “misunderstandings” but praised the changes:

“What we announced today is the product of this incredible collaboration, one that delivers tools to protect children, but also maintain Apple’s deep commitment to user privacy.”

“We know some people have misunderstandings, and more than a few are worried about the implications, but we will continue to explain and detail the features so people understand what we’ve built.”

--


Gina Costanza Johnson

Digital Media Change Agent | Digital Philanthropist | Digital Design Ethicist | Humane Technology Advocate