Apple's new user espionage protocol: I can't wait for all the abuses to start

It will be interesting to see whether it’s possible to intelligently object to Apple bringing the potential thought police directly to the iPhone without being branded a child abuser or worse. This really does seem to be the beginning of the end of privacy, though. It’s not that there would be anything wrong with rooting out child abuse images… it’s that there is no way to know what content is really in the database. It would be very easy for that database of “banned” content to grow to include all sorts of things. If Apple gets away with this, how long before every OS and device is required to scan all of your documents for things the current government objects to? I can’t wait until someone finds a way to game the system and spreads around a bunch of funny viral memes that are intentionally designed to trip the detector. It will be a funny day when the system reports that, say, one in every 10 iPhone users needs to be arrested. :smiley:

2 Likes

Feels like it’s in the same vein as the story from the other week about Massachusetts automatically installing their state COVID tracker on people’s devices. My digital domain should be regarded in the same way as my physical domain. You can’t violate either without a warrant. Even if the authorities traipse through my property blindfolded, sniffing for suspicious smells, it’s a violation.

2 Likes

Google, YouTube, Facebook, Twitter, Microsoft etc. already have to do this, although for their online services. That they don’t just scan it in iCloud, instead of wasting local storage and processor cycles, is annoying.

Although it sounds like this is for iMessage only?

The mechanism that will enable Apple to scan images in Messages is not an alternative to a backdoor—it is a backdoor. Client-side scanning on one “end” of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy.

The rhetoric from the “experts” is also baloney! This isn’t backdooring end-to-end encryption. This is looking at the images stored locally on the device before they are sent or after they have been received. This does not break the end-to-end encryption, so let’s throw that argument out, for a start!

Privacy violation? Yes, most certainly. Break of trust? Yes. Wasting my battery and local storage? Yes.
Backdoor? NO

This is how they should be monitoring end-to-end encrypted messages: the checking takes place on the device, so it does not break the end-to-end encryption. I will, however, say that this sort of monitoring should only happen with a valid warrant.
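To make that point concrete, here’s a minimal sketch of what on-device checking before transmission looks like. None of these names are Apple’s; the plain SHA-256 is just a stand-in for whatever perceptual hash a real system would use, and the channel/report functions are placeholders:

```swift
import Foundation
import CryptoKit

// Hypothetical stand-ins; the names and behaviour here are assumptions for illustration only.
struct SecureChannel {
    func sendEncrypted(_ data: Data) { /* end-to-end encrypt and transmit */ }
}

func loadKnownHashDigests() -> Set<String> { [] }   // local list of “banned content” hashes
func reportMatchToThirdParty(_ photoDigest: String) { print("match reported: \(photoDigest)") }

// Plain SHA-256 stands in for whatever perceptual hash a real system would use.
func digest(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

func send(_ imageData: Data, over channel: SecureChannel) {
    let hash = digest(of: imageData)

    // 1. The check runs on the device, against the plaintext, before any encryption…
    if loadKnownHashDigests().contains(hash) {
        reportMatchToThirdParty(hash)       // …which is the privacy objection
    }

    // 2. …and only afterwards is the payload end-to-end encrypted and sent.
    channel.sendEncrypted(imageData)        // the encryption step itself is untouched
}
```

The encryption is never weakened; the scan simply happens against the plaintext before it, which is where the privacy argument lives.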

1 Like

Looks like it is USA only and is disabled if you don’t use iCloud Photos.

Apple’s method works by identifying a known CSAM photo on device and then flagging it when it’s uploaded to ‌iCloud Photos‌ with an attached voucher. After a certain number of vouchers (aka flagged photos) have been uploaded to ‌iCloud Photos‌, Apple can interpret the vouchers and does a manual review. If CSAM content is found, the user account is disabled and the National Center for Missing and Exploited Children is notified.

Interesting, it doesn’t contact authorities when the first hash is matched, but only after a pre-determined number of matches has been met, then a manual review, before contacting authorities. I feel sorry for the poor people who will be traumatised doing the manual reviews!
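For anyone trying to picture the flow, here’s a rough sketch of how I read that description. The type names and the threshold value are my own guesses, not Apple’s implementation:

```swift
import Foundation

// All of the names and the threshold value below are assumptions for illustration,
// not Apple’s actual implementation.
struct SafetyVoucher { let photoID: String }

let reviewThreshold = 30                     // hypothetical number of matches before review
var pendingVouchers: [SafetyVoucher] = []

func upload(photoID: String, matchesKnownHash: Bool) {
    // Every photo is uploaded to iCloud Photos either way; a voucher is only
    // attached when the on-device hash comparison fires.
    if matchesKnownHash {
        pendingVouchers.append(SafetyVoucher(photoID: photoID))
    }

    // Nothing happens on a single match; only once enough vouchers accumulate
    // does a manual human review take place, and only after that would a report be filed.
    if pendingVouchers.count >= reviewThreshold {
        queueForManualReview(pendingVouchers)
    }
}

func queueForManualReview(_ vouchers: [SafetyVoucher]) {
    print("queue \(vouchers.count) flagged photos for human review")
}
```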

Still, users who have privacy concerns about Apple’s efforts to scan user photo libraries can disable ‌iCloud Photos‌.

The issue that I have with this is that once the capability exists, Apple will be forced to allow governments to abuse this or pull out of countries entirely.

Once this launches, what is to stop the Russian government from making Apple hand over a list of users with photos of same-sex couples kissing? Or the Chinese government from making them hand over a list of users with photos of the Tiananmen Square protests on their devices?

2 Likes

The other major cloud providers are probably doing this already, so it’d be a case of “why can’t Apple?” I can’t help feeling that, if anyone is going to be able to do this properly, it’ll be Apple.

They certainly can. The question is whether they should. Apple prides itself on privacy, other companies don’t. When you sign up for Facebook or Google services, you usually don’t pay up front and you allow them to scrape your data. Apple prides itself on not doing that. This seems to contradict that. I believe that they are doing this for the right reason, but it isn’t much of a stretch to see how this can be abused.

2 Likes

Another potential issue is alerting parents when their children receive explicit photos. How many LGBTQ kids will be inadvertently outed by this? That has real and serious consequences.

3 Likes

The difference is that they are doing it in the cloud, not on the local device. A big difference.

4 Likes

I’m sure many of you are familiar with rule 34 of the Internet. It’s scope creep that can get really sticky. It’s very easy to say “oh, this is also objectionable, we should check for this too…” For example, if someone draws something sexual featuring a Disney character (because Disney characters frequently have a young audience), I have to wonder whether Disney won’t say “Apple, you need to root out these things too.” Then someone makes a sexual parody of some politician, and that could be called abusive, and oh boy, we’d better root that out too. I am convinced they won’t be able to reject any and all requests for “just one more thing” that needs to be detected and eliminated… because that’s how the thought police work.

Here’s a real example from my youth: In grade 10 I used to hang around with a talented artist. When I first got to know him, he used to draw Garfield strips at least as well as the original author (in my opinion at the time.) One day, we were messing around at a table in the library (when we should have been studying), and he drew something sexual involving Garfield characters. I honestly don’t even remember what it was, I just remember we all had a laugh and then he crumpled it up and threw it away in the garbage. Imagine his surprise when someone hauled that out of the garbage, and reported it to the school management. He got kicked out of school for that (which ended our friendship :frowning: )

From my above story: I feel like there needs to be room for unpopular thought in private, on my own device in my own private space. Depending on what it is, it should be blocked from sharing, but it merely existing on my device doesn’t necessarily mean I won’t eventually reject that line of thinking (i.e. grow up/out of it.) It’s probably no different than the right for a child to have privacy in their personal diary… mom and dad don’t need to be the thought police. Catch bad actors, but be very wary about trying to police bad thought.

1 Like

I thought it was just in iCloud… my mistake

No, the checks are local on the device, but only active if you use iCloud Photo storage.

2 Likes

I think the fact that you authorize iCloud means you are also authorizing this inspection on your phone. I don’t think the phone scanning will be limited to just files that go to the cloud, but will include any file on the phone. Basically, by enabling iCloud, you’re paying Apple to spy on your phone. I’m sure they argue they don’t have access to your files in iCloud, so they have to do the scanning at the source. That seems like a big lie, though, given that they have given China access to pretty much everything in their cloud. Like you suggest, there is more going on here, and it’s probably their attempt to sow as much confusion as they can, to give themselves as much cover as they can get.

Okay, here’s yet another different explanation. There is so much confusion around this, and I don’t understand why it’s been reported so many different ways.

1 Like

They released an FAQ

This part makes me feel a little better:

“Could governments force Apple to add non-CSAM images to the hash list?

Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.”

This addresses the concern that I had when I spoke to @Leo, but I am still reluctant to believe that they would choose to pull out of major markets just to keep their promise.

2 Likes

Also worth noting that “… other child safety groups” can mean many things. Russia’s anti-gay law, passed in 2013, was framed as “protecting children” from homosexual imagery and prohibits the distribution of “propaganda of non-traditional sexual relationships.”

2 Likes

That is what I said, as soon as the news hit.

The downside of that is that they couldn’t warn minors when they are sending/receiving naked images, but that has nothing to do with the CSAM database…

1 Like

Facebook and Google have been using it for several years and there haven’t been any stories about people being arrested on the basis of false positives.

1 Like

That is definitely the problem with this!

2 Likes

If you have iCloud Photos turned off but “My Photo Stream” turned on, does Apple scan those photos, or will it?

1 Like