Apple has delayed the release of its software built primarily to detect photos indicating child abuse on iPhones, following criticism from privacy advocates. The software was earlier slated to launch later this year in the US.
What is the software and how would it have worked?
Apple’s tool neuralMatch would scan photographs before they are uploaded to iCloud and check the content of messages sent on its end-to-end encrypted iMessage app. “The Messages app will use on-device machine learning to warn about sensitive content while keeping private communications unreadable by Apple,” the company had said.
The software works by comparing pictures against a database of known child abuse imagery; if a match is flagged, the images are manually reviewed. Only if the content is confirmed as child abuse is the National Center for Missing and Exploited Children (NCMEC) notified.
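To make the matching flow concrete, here is a minimal sketch in Python of the kind of pipeline described above. It is not Apple's implementation: the real system reportedly uses a perceptual "NeuralHash" and cryptographic threshold matching rather than plain file digests, and the database contents, threshold value, and helper names below are illustrative assumptions.

```python
import hashlib
from pathlib import Path

# Illustrative stand-in for the NCMEC-derived database of known-image hashes.
# (Placeholder digest; the real database is not public.)
KNOWN_ABUSE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

# Assumed value: the account is flagged only after several matches,
# mirroring the threshold approach described in Apple's documentation.
MATCH_THRESHOLD = 3


def hash_photo(path: Path) -> str:
    """Return a content digest for a photo (stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def queue_for_manual_review(matches: list[Path]) -> None:
    """Placeholder for the human-review step described in the article."""
    print(f"{len(matches)} photos queued for human review")


def scan_before_upload(photo_paths: list[Path]) -> list[Path]:
    """Compare each photo's digest against the known database before upload."""
    matches = [p for p in photo_paths if hash_photo(p) in KNOWN_ABUSE_HASHES]
    if len(matches) >= MATCH_THRESHOLD:
        # Flagged content goes to manual review; only confirmed material
        # would then be reported to NCMEC.
        queue_for_manual_review(matches)
    return matches
```

The key design point the sketch tries to capture is that individual photos are never reported directly: nothing leaves the device until the number of matches crosses a threshold, and a human review step sits between the automated match and any report.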
What were the concerns?
While child protection agencies have welcomed the technology, privacy advocates warned that it could be repurposed for broader surveillance. Will Cathcart, head of the end-to-end encrypted messaging service WhatsApp, said, “This is an Apple-built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control. Countries where iPhones are sold will have different definitions on what is acceptable.”
In a statement, Apple said, “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
According to Reuters, Apple had been defending the plan for weeks, offering a series of explanations and documents to argue that the risk of false detections was low.