Apple yesterday unveiled a set of three measures designed to protect children in different ways: more information via Siri, scanning of iCloud Photos for child abuse imagery, and a system that blocks explicit images in iMessage for users under 13. These measures will arrive with iOS 15, iPadOS 15 and macOS 12 Monterey, expanding protections for children while still respecting the confidentiality of everyone's data.
Of the three measures Apple has presented to protect children, two are of interest for this article: the detection of images of minors in iCloud Photos and the iMessage safety measures against explicit photos. Let's start by explaining the latter.
Security in iMessage communications
In short, the new iMessage communication safety system detects and blocks explicit images received in the conversations of children under 13. This is an optional setting that parents can enable; when active, it hides inappropriate pictures in conversations.
The child can still choose to view the image. If the blurred image is tapped, the system warns that it could be sensitive and explains the situation in three points: sensitive photos show body parts that are covered by swimsuits; these images can be used to hurt the child's feelings; and the person appearing in them may not want them to be seen. After this explanation, the options "Not now" and "I'm sure" are offered.
If the second option is tapped, the system warns that the parents "want to be sure that you are OK", so they will receive a notification. It also advises the child not to share anything they don't want to, to talk to someone they trust if they feel pressured and, finally, reassures them that they are not alone in these situations. After that, it offers "Don't view the photo" or "View the photo".
As we have already said, this system is designed for, and available only to, users under the age of 13, and only if their parents deem it appropriate to activate it. The wording, though childlike, is very clear and reveals a very simple operation: if an explicit image is detected it is blocked, the child is given a choice, and the parents are notified if the image is ultimately viewed.
This system works through on-device machine learning that analyzes the images received and sent via iMessage in order to present the appropriate warnings. Since all processing happens locally on the device, no one, not even Apple, has access to the messages, preserving the security of iMessage that we all trust.
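The gating logic described above can be sketched in a few lines. Everything here (the function names, the placeholder classifier, the age and opt-in parameters) is an illustrative assumption, not Apple's actual implementation:

```python
# Sketch of the on-device decision flow for iMessage communication safety.
# The classifier is a trivial stand-in for the real on-device ML model.

def classify_explicit(image_bytes: bytes) -> bool:
    """Stand-in for the on-device model that judges whether an image
    is sexually explicit. Placeholder heuristic for illustration only."""
    return image_bytes.startswith(b"EXPLICIT")

def handle_incoming_image(image_bytes: bytes, user_age: int,
                          parental_opt_in: bool) -> str:
    """Decide locally whether to show the image or blur it and warn.
    Nothing leaves the device at this stage."""
    if user_age < 13 and parental_opt_in and classify_explicit(image_bytes):
        return "blurred_with_warning"  # child must tap through the warnings
    return "shown"
```

Note that the check only activates when both conditions from the article hold: the user is under 13 and a parent has opted in.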
Detecting images of minors in iCloud
The other measure Apple will implement is the detection of images of minors in iCloud Photo libraries: a system which, while preserving the privacy of all users, allows the competent authorities to be informed when images contrary to the law are detected.
Instead of scanning images in the cloud, with the loss of privacy that would entail, Apple proposes a system that compares the images on the device against a database stored locally. This database is kept securely on the device and contains hashed versions of images reported by the responsible agencies, so its content is completely unreadable.
These hashes are designed so that they not only represent the original image but also match variations of it, such as the image being converted to black and white or partially cropped. In this way, different versions of the same image can be detected.
The system works as follows: before an image is uploaded to iCloud Photos, the device generates a hash of it and compares it locally against the hashes of reported images. The result of that check is not readable on the device; instead, the device produces an encrypted safety voucher that is uploaded to iCloud Photos alongside the image.
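As a toy illustration of this local matching step, the sketch below uses a simplified average hash computed from pixel luminance. Apple's actual system uses NeuralHash, a neural perceptual hash; this hand-rolled stand-in only shows the idea. Because the hash depends on relative brightness rather than color, a grayscale copy of an image produces the same hash, illustrating the robustness to variations mentioned earlier:

```python
# Toy perceptual hash and local database lookup (illustrative only).

def luminance(px):
    r, g, b = px
    return 0.299 * r + 0.587 * g + 0.114 * b

def average_hash(image):
    """image: 2D list of (r, g, b) pixels. Each hash bit is 1 where a
    pixel's luminance exceeds the image mean, so transformations that
    preserve relative brightness (like grayscaling) keep the hash stable."""
    lums = [luminance(px) for row in image for px in row]
    mean = sum(lums) / len(lums)
    return tuple(1 if l > mean else 0 for l in lums)

def matches_database(image, known_hashes):
    """On-device check: hash the photo and look it up locally."""
    return average_hash(image) in known_hashes

# A 2x2 color image and its grayscale version hash identically.
color = [[(255, 0, 0), (0, 0, 0)], [(10, 10, 10), (200, 200, 200)]]
gray = [[(round(luminance(px)),) * 3 for px in row] for row in color]
database = {average_hash(color)}
```

Here `matches_database(gray, database)` returns True even though `gray` is only a black-and-white variant of the reported image.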
This is where another technique, threshold secret sharing, comes in. In short, the key needed to decrypt the content of the vouchers is divided into pieces. Only when an account accumulates enough vouchers does Apple receive all the pieces of the full key, at which point it can decrypt the content of those vouchers, as well as the images that were uploaded, and review them manually. The voucher threshold is tuned so that there is roughly a one in a trillion chance per year of incorrectly flagging a given account.
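The threshold idea can be illustrated with classic Shamir secret sharing, where the secret is encoded as the constant term of a random polynomial and any t evaluation points are enough to recover it. This is a generic textbook sketch, not Apple's actual construction:

```python
# Minimal Shamir threshold secret sharing over a prime field.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for a toy key

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With fewer than t shares, interpolation yields a value unrelated to the secret, which is what keeps the voucher contents unreadable until the account crosses the threshold.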
Once the voucher threshold is exceeded, Apple receives a report that it can decrypt and verify manually. If it confirms that the uploaded images match those in the database, Apple suspends the account and notifies the authorities. However, it offers an appeals process to restore the account if the user believes a mistake has been made.
This mechanism, which operates only while iCloud Photos is enabled on the device, is designed to fully protect our privacy while still allowing illegal images to be detected in the system.
All of these protections will arrive this fall with iOS 15, iPadOS 15 and macOS Monterey: protections which, while maintaining the confidentiality of our content and our communications, are capable of keeping certain content off the platform.
More information | Apple