Earlier, it was reported that Apple would be introducing a new system that scans photos on iPhones for child abuse imagery. Apple has since confirmed those plans, and, as the earlier reports claimed, the scanning will be done on the device rather than in the cloud.

While we suppose having your photos scanned still feels like an invasion of privacy, at least it’s done on your device, if that’s any consolation. According to Apple, “Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.”
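To give a rough idea of what on-device matching against a hash database looks like, here is a minimal, hypothetical sketch in Python. Apple's actual system uses a perceptual "NeuralHash" and a blinded, unreadable database with private set intersection; the plain SHA-256 lookup and placeholder hash below are simplifications for illustration only.

```python
import hashlib

# Placeholder standing in for the hash database derived from NCMEC and other
# child safety organizations; on a real device it would be stored in an
# unreadable (blinded) form, not as plain hex strings.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def hash_photo(photo_bytes: bytes) -> str:
    """Return a hex digest standing in for the on-device image hash."""
    return hashlib.sha256(photo_bytes).hexdigest()

def matches_known_database(photo_bytes: bytes) -> bool:
    """On-device check: does this photo's hash appear in the known database?"""
    return hash_photo(photo_bytes) in KNOWN_CSAM_HASHES

# An ordinary photo produces no match, so nothing about it leaves the device.
print(matches_known_database(b"holiday photo pixels"))  # False
```

The key point of the design Apple describes is that this comparison happens locally, so unmatched photos are never inspected in the cloud.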

Apple also claims that another system it uses will reduce the chances of an account being incorrectly flagged. “Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
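The threshold idea can be illustrated with Shamir's secret sharing, the classic construction behind "threshold secret sharing." The sketch below is a simplified, hypothetical example over a small prime field; Apple's safety-voucher scheme is more involved, but the property shown is the same: below the threshold number of shares, the secret stays unreadable.

```python
import random

PRIME = 2_147_483_647  # prime field large enough for a demo secret

def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` into shares; any `threshold` of them reconstruct it."""
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456, threshold=3, num_shares=5)
print(reconstruct(shares[:3]))  # 123456: three shares meet the threshold
print(reconstruct(shares[:2]))  # below the threshold: almost certainly not the secret
```

In the system Apple describes, each matching photo contributes one share, so the account's voucher contents only become interpretable once enough matches accumulate.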

The company claims that once an account has been flagged, it will manually review the content to confirm it matches known CSAM images before reporting it to the National Center for Missing and Exploited Children.
