1 point

I don’t believe there’s any actual data collection?

1 point

It’s something that’s not talked about, which, given our data-obsessed world, I interpret as “we just do it by default (because nobody will complain, it’s normal, yada yada)”.

Besides, it’s stated that the scanning itself only happens on your device. But if you scan locally for illegal material, it’s not far-fetched that someone gets notified when the scan finds, for example, CSAM on a device. Why else would you scan for it? So at the very least, that information is collected somewhere.

1 point

I think your threat model for this is wrong.

First of all, understand how it works: it’s a local feature that uses image recognition to identify nudity. The idea is, if someone sends you a dick pic (or worse, CSAM), you don’t have to view it to know what it is. That’s been an option on minors’ accounts for some time now, and it is legitimately a useful feature.

Now they’re adding it as an option for adult accounts and letting third-party developers add it to their apps.
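For the curious, the third-party hook is presumably the SensitiveContentAnalysis framework Apple shipped in iOS 17. A minimal sketch of how an app might use it, assuming that API (it also requires a special entitlement from Apple):

```swift
import SensitiveContentAnalysis // iOS 17+; needs Apple's client entitlement

// Sketch: decide whether to blur an incoming image before showing it.
// Everything here runs on-device; the API hands a verdict back to the
// app and sends nothing anywhere.
func shouldBlur(imageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // The analyzer is inert unless the user has turned the feature on
    // in Settings (Sensitive Content Warnings, or Communication Safety
    // on a child account).
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        return false // analysis failed; show the image as usual
    }
}
```

Note the design: the app only learns “sensitive or not” and decides what overlay to show; there’s no reporting channel in the API itself.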

The threat that they’re suddenly going to send the scanning results to corporate without telling anyone seems unlikely. It would be a huge liability and would bring them no real benefit.

But the threat is this: with this technology available, there will be pressure to make it not optional (“Why does Apple let you disable the child porn filter? wtf?”). If they bend to that pressure, why not introduce filters for other illegal content? Why not filter comments criticizing the CCP in China, or content that infringes on copyright?

Having a “dick pic filter” is useful technology, and I know some people who would love to have it. That doesn’t mean the technology couldn’t be misused for nefarious purposes.

1 point

I am aware that it’s local; I just assumed it would also call home.

My threat model here is based on cases like this: https://www.theverge.com/2022/8/21/23315513/google-photos-csam-scanning-account-deletion-investigation

And yes, I did see it as a privacy issue, not a censorship one. Inevitably, if pressure builds to expand it to other kinds of content, it could become a problem comparable to the “Article 13” fight Europe was, or is, facing.

Generally, blocking specific types of content is a valid option to have, as long as it stays an option and the user knows it is one. I just distrust it coming from the likes of Google or Apple.

