Apple releases a document to clarify doubts about its system for tackling child abuse material

The new child-protection measures Apple presented last week caused quite a stir and controversy. At Applesfera we explained in detail how these scanning systems work in iCloud and Messages, and now Apple has tried to clear up many of the doubts with a new document: a question-and-answer guide on expanded protections for children on Apple platforms.

Measures to keep children safe and prevent the dissemination of abusive material

At Apple, our goal is to create technology that empowers people and enriches their lives, while helping them stay safe. We want to protect children from predators who use communication tools to recruit and exploit them, and to limit the spread of Child Sexual Abuse Material (CSAM).

This is how Apple's document begins, with a short introduction that makes clear that communication safety in iMessage and CSAM detection in iCloud Photos are two different things and use different technology. In the first, the company says, images sent and received in iMessage are scanned on the device itself, and only on devices set up for children in a family account.

The second is independent of the first and focuses on detecting photos uploaded to iCloud that match known CSAM material; it does not apply to users who have not enabled iCloud Photos. The document is detailed and comprehensive, so it is worth reading in its entirety. Below we summarize it as best we can, although it is always advisable to take a look at the original.

Communication safety in iMessage

It applies only to Apple accounts set up as families in iCloud. Parents must enable it for their family group, and only they can receive alerts about messages, and only for children aged 12 and under. iMessage shares no information with Apple, with NCMEC (the National Center for Missing & Exploited Children in the United States) or with the police. Message encryption, privacy and security are not broken, because Apple never has access to the communications. The user remains in control and Apple does not intervene at all, not even in the notifications about sexually explicit content sent to children 12 and under. Children are the ones who decide whether they want to view or send an image; if a child aged 12 or under goes ahead, their parents receive a notification. For children aged 13 to 17, parents receive no notification, because that part of the feature only exists for children 12 and under.
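
To make the rules easier to follow, here is a minimal Swift sketch of the decision flow as the document describes it. The types and function names are hypothetical and are not part of any Apple API.

```swift
// Hypothetical sketch of the Communication Safety rules summarized above.
// These types are illustrative only; they are not part of any Apple SDK.

struct ChildAccount {
    let age: Int
    let featureEnabledByParents: Bool  // parents must turn it on for the family group
}

enum Outcome {
    case noIntervention                 // feature off: the message is handled normally
    case warnedChildOnly                // the child saw a warning; no one else is told
    case warnedChildAndNotifiedParents  // only possible for children aged 12 and under
}

func handleExplicitImage(for child: ChildAccount, childChoosesToProceed: Bool) -> Outcome {
    guard child.featureEnabledByParents else { return .noIntervention }
    // The child always stays in control; Apple, NCMEC and the police never see the message.
    guard childChoosesToProceed else { return .warnedChildOnly }
    return child.age <= 12 ? .warnedChildAndNotifiedParents : .warnedChildOnly
}
```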

Detecting CSAM in iCloud Photos

Apple applies iCloud photo scanning only to photos the user uploads to iCloud, and it only receives alerts for images that match already known CSAM content. The system does not apply to users who have iCloud Photos turned off, and it does not work on photos stored only in the local iPhone photo library. Under no circumstances are CSAM images downloaded to the device to perform the analysis; only hashes of known CSAM photos are. A hash is a string of numbers that represents an image, derived from verified CSAM content, and it cannot be read or turned back into the image. The scan takes place on the device itself, where these hashes are compared against the photos being uploaded to iCloud. At no point does Apple know or see which photos are uploaded to iCloud. Other companies scan every photo in the cloud; Apple's method protects privacy because it only flags photos that match a known CSAM image and are part of an iCloud library.
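
As a rough illustration of the idea of on-device hash matching, here is a minimal Swift sketch. It deliberately simplifies: Apple's actual system relies on NeuralHash and cryptographic safety vouchers, and the loader function, names and use of SHA-256 below are assumptions made only to keep the example self-contained.

```swift
import Foundation
import CryptoKit

// Simplified illustration of on-device hash matching. Apple's real system
// uses NeuralHash, blinded hash databases and private set intersection;
// this sketch only shows the general idea: photos are compared as opaque
// hash values, never as images, and only when they are uploaded to iCloud.

/// Placeholder for the database of known CSAM hashes that ships inside the OS.
/// The hashes come from verified CSAM content and cannot be turned back into images.
func loadBundledHashDatabase() -> Set<String> {
    return []  // empty in this sketch
}

let knownCSAMHashes = loadBundledHashDatabase()

/// Computes an opaque fingerprint for a photo about to be uploaded to iCloud.
/// (A real perceptual hash survives resizing and re-encoding; SHA-256 does not.
/// It is used here only to keep the example self-contained.)
func fingerprint(of photoData: Data) -> String {
    SHA256.hash(data: photoData).map { String(format: "%02x", $0) }.joined()
}

/// The comparison happens on the device; nobody looks at the photo itself.
func matchesKnownCSAM(_ photoData: Data) -> Bool {
    knownCSAMHashes.contains(fingerprint(of: photoData))
}
```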

Other concerns about CSAM photo detection

One of the main concerns about the new system announced by Apple is whether it could be used to detect things other than CSAM images. An alert is only raised when several photos match known CSAM hashes, and those matches are reviewed by humans before anything is reported to NCMEC. Only photos that correspond to child abuse material are therefore reported to that organization.
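
As a rough sketch of that threshold-plus-review pipeline: the threshold value, type and function names below are invented for the example, since Apple's document does not publish them.

```swift
// Illustrative sketch of the "several matches, then human review" flow
// described above. The threshold, names and stubs are assumptions.

struct Account {
    let id: String
    var matchCount: Int  // uploaded photos that matched known CSAM hashes
}

let reviewThreshold = 30  // hypothetical value, not Apple's figure

/// No alert is created for isolated matches; the account must cross the threshold.
func shouldEscalateToHumanReview(_ account: Account) -> Bool {
    account.matchCount >= reviewThreshold
}

func process(_ account: Account) {
    guard shouldEscalateToHumanReview(account) else { return }
    // Even after the threshold, human reviewers confirm the matches
    // before anything is reported to NCMEC.
    if humanReviewersConfirmCSAM(accountID: account.id) {
        reportToNCMEC(accountID: account.id)
    }
}

// Stubs so the sketch compiles; the real implementations are not public.
func humanReviewersConfirmCSAM(accountID: String) -> Bool { false }
func reportToNCMEC(accountID: String) { }
```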

Many have argued that governments could force Apple to add non-CSAM images to its hash list. The Cupertino company says it will refuse any such demands if they arise. It points out that in the past it has flatly opposed government requests to build backdoors into its devices, a clear reference to the San Bernardino terrorist's iPhone and its standoff with the FBI.

Apple says the system will launch in the United States and that it will explore expanding it to other countries and regions

The company states that its system does not allow images to be injected to trigger alerts, since the hashes are stored in the iPhone and iPad operating system. There is no way to mount an attack against a specific individual, and every report is also reviewed manually by Apple. Likewise, innocent people cannot be falsely accused: the system's false-positive rate is one in 1,000,000,000,000 (one in a trillion), and even if it did happen, the manual review would rule it out.

A controversy that could have been avoided with good communication

The steps Apple will implement with iOS 15 are certainly impressive. The company has gone to great lengths to maintain the privacy, encryption and security of communications and photos while creating one system to detect CSAM material and another to protect minors from sexually explicit messages. Even so, controversy was quick to flare up, precisely because of Apple's well-known stance in favor of its users' privacy and security.

The high standard we hold the company to on these issues is one it has earned. But the episode also shows that the matter should have been handled differently. Letting the news leak to the press and then announcing it out of the blue was not a good idea. The company has since been forced to comment here and there, finally releasing this document, which resolves many legitimate questions from users.

This was a communication blunder that allowed fear and misinformation to spread around a very delicate and demanding subject.

It would have been enough to brief a handful of journalists on all these measures in advance and then disclose the information both officially and through those outlets, as is done with embargoed reviews of new products.

Apple probably planned to do something similar closer to the launch of iOS 15 and its other operating systems, thinking it was not yet necessary. What has become clear is that, when it comes to privacy and security, the company cannot afford to trail behind events.
