Apple will this year begin automatically scanning photos uploaded to iCloud, the company's cloud storage service, for images of child sexual abuse.
The new security tool, which will arrive with versions of iOS 15, will be known as NeuralHash and will include several safeguards to protect users' privacy, although some researchers and security experts are concerned about the implications its deployment could have.
Initially, the verification will only be performed on US user accounts.
The fully automated system will not have direct access to the content of the images that users upload to iCloud; instead, it will work with their "fingerprint" (hash).
This fingerprint, which is basically a string of characters, is computed from the information present in the image and is then compared with a database of more than 200,000 image fingerprints preselected by the National Center for Missing & Exploited Children (NCMEC), the US agency responsible for stopping the distribution of sexually explicit images of minors online.
The system designed by Apple is advanced enough to
also detect similarities in images that have been slightly modified
(for example, cropped).
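The idea behind this kind of matching can be illustrated with a toy perceptual hash. The sketch below is NOT Apple's NeuralHash algorithm (which is not public): it uses a simple "average hash" over an 8x8 grayscale image and compares hashes by Hamming distance, which is why slightly modified images can still match. All names and the distance threshold are assumptions for illustration.

```python
# Toy perceptual-hash sketch (assumed logic, not Apple's NeuralHash):
# an 8x8 "average hash" compared by Hamming distance, so small edits
# to an image still land near the original fingerprint.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)  # 1 bit per pixel
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches(h, database, max_distance=5):
    """A hash 'matches' if it is within max_distance bits of a known hash."""
    return any(hamming_distance(h, known) <= max_distance for known in database)

# A slightly brightened copy of an image produces a nearby (here identical) hash.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 3) for p in row] for row in img]
database = {average_hash(img)}
assert matches(average_hash(tweaked), database)
```

A plain cryptographic hash (such as SHA-256) would change completely after cropping or recompression; perceptual hashes are designed so that visually similar images yield bitwise-similar fingerprints.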
Detection will take place on the user's own phone before uploading the image to the iCloud service.
If a photo may match one that NCMEC has collected and flagged as sexually explicit content involving minors, a record associated with the user's account is generated.
If a user account exceeds a threshold of such records (which Apple has not specified), the company will manually check the content of the images on iCloud and alert NCMEC and the authorities if it confirms sexually explicit content involving minors.
"The threshold is set to provide an extremely high level of precision, and the chance of incorrectly flagging a given account is one in one trillion," the company explains.
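The threshold mechanism can be sketched as a per-account counter: individual possible matches accumulate silently, and only crossing the threshold escalates the account for review. This is an illustrative simplification (the function names and the threshold value of 30 are assumptions; Apple has not published the real value, and the actual system cryptographically hides match counts below the threshold).

```python
# Illustrative sketch of the threshold logic, not Apple's implementation:
# each possible match adds a record; only past a threshold is the account
# escalated for manual review.

from collections import defaultdict

THRESHOLD = 30  # hypothetical value; Apple has not specified its threshold

match_records = defaultdict(int)  # account id -> number of flagged records

def register_match(account_id):
    """Record one possible match; return True if manual review is triggered."""
    match_records[account_id] += 1
    return match_records[account_id] > THRESHOLD

# A single match never escalates an account on its own.
assert not register_match("user-1")
```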
A CONTROVERSIAL TOOL
These types of automated systems for detecting sexually explicit images of minors are common in cloud storage services such as Google Drive or Dropbox.
Apple also performs these checks on files stored on its servers, but running them on the iPhone itself sets a precedent that some researchers and security experts consider worrying.
It is, after all, a system that automatically analyzes the user's images, albeit indirectly, and many users have iCloud photo uploads enabled by default, which affects every photo stored in the camera roll.
"Whoever controls the fingerprint list can search for whatever content they want on your phone, and you really have no way of knowing what's on that list because it's invisible to you," explains Matthew Green, professor of security and cryptography at Johns Hopkins University.
Green and other security experts fear that, in the future, a government could decide to use this already-deployed tool to search for, say, images of a banner at a protest or other types of content, putting citizens at risk, especially in countries with authoritarian regimes.
Several researchers have also shown that there are ways to fool the algorithms that create these unique image fingerprints, something that could be used to generate false positives with seemingly innocent photos and thus give security forces a pretext to access the information present on a phone.
In recent years, various law enforcement agencies and governments have asked Apple to create back doors that would allow them to access messages and images sent or stored on phones in order to aid investigations of child abuse or terrorism.
Apple, so far, has refused.
The company's argument is that a back door could also be used by authoritarian governments to violate people's rights and that, once created, such a door could also fall into the hands of cybercriminals.
NeuralHash is, in a way, a compromise solution that allows the detection of explicit images of child abuse without a back door that would lower the platform's overall security or privacy.
NOTICE TO PARENTS
In addition to this tool, Apple will include in iOS 15 new protection systems for minors' accounts to curb the exchange of sexually explicit photos.
If an underage user receives a photo with sexual content, it will appear blurred on the screen and the child will be warned that the photo contains potentially harmful content.
If the child still decides to view it, they will be able to do so, but a notice will be sent to their parents.
Multiple confirmation screens will explain why sharing these types of images can be harmful and what counts as a sexually explicit image (one showing parts of the body that are normally covered by a swimsuit).
Similar protections are available if a child tries to send sexually explicit photos.
The child will be warned before the photo is sent, and parents will receive a message if the child decides to send it.
The detection system uses machine learning techniques on the device to analyze image attachments and determine if a photo is sexually explicit.
The feature is designed so that Apple does not have access to the messages (all verification is done on the device itself) and will only work with Apple's own Messages app.
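The on-device flow described above can be sketched as a simple decision function. This is a hypothetical illustration, not Apple's code: `explicit_score` stands in for the output of the on-device ML classifier (assumed here to be a 0.0-1.0 score with an assumed cutoff; the real model and API are not public), and everything runs locally, so no image data leaves the phone.

```python
# Hypothetical sketch of the Messages warning flow on a child's device.
# The classifier score, its 0.5 cutoff, and all names are assumptions.

def handle_incoming_photo(explicit_score, child_chooses_to_view, notify_parents):
    """Decide how a child's device displays an incoming photo.

    Runs entirely on the device; Apple's servers never see the image.
    """
    if explicit_score < 0.5:      # assumed cutoff: photo looks harmless
        return "shown"
    if child_chooses_to_view:     # child taps through the warning screens
        notify_parents("A potentially explicit photo was viewed.")
        return "shown_after_warning"
    return "blurred"              # photo stays blurred on screen

alerts = []
assert handle_incoming_photo(0.1, False, alerts.append) == "shown"
assert handle_incoming_photo(0.9, True, alerts.append) == "shown_after_warning"
assert len(alerts) == 1
```

The same shape applies in the outgoing direction: the device warns before sending, and the parental notice fires only if the child proceeds.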