I noticed a bit of panic around here lately and as I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.
As luck would have it, we recently published one of our anti-CSAM checker tools as a python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.
The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it once using --all, and then run it as a daemon and leave it running.
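For anyone wondering what an --all pass looks like conceptually, here is a minimal sketch (not the actual tool's code), assuming an S3-compatible object store accessed via boto3 and a hypothetical `is_csam()` stand-in for the real GPU classifier:

```python
import boto3

def is_csam(image_bytes: bytes) -> bool:
    # Hypothetical stand-in: the real tool runs a GPU-based image model here,
    # which is why it belongs on your desktop PC rather than the lemmy server.
    raise NotImplementedError("plug in the actual classifier")

def scan_bucket(bucket: str, endpoint_url: str, dry_run: bool = True) -> None:
    """Walk every object in the bucket and delete anything the classifier flags."""
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            if is_csam(body):
                print(f"flagged: {key}")
                if not dry_run:
                    s3.delete_object(Bucket=bucket, Key=key)

if __name__ == "__main__":
    # Bucket name and endpoint are placeholders for your own object storage.
    scan_bucket("lemmy-images", "https://object-store.example.com")
```

The daemon mode is conceptually the same loop, repeated over newly listed objects while keeping track of what has already been scanned.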
A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.
Let me know if you have any issues or improvements.
EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!
Hey @db0@lemmy.dbzer0.com, just so you know, this tool is most likely very illegal to use in the USA. Something that your users should be aware of. I don’t really have the energy to go into it now, but I’ll post what I told my users in the programming.dev discord:
that is almost definitely against the law in the USA. From what I’ve read, you have to follow very specific procedures to report CSAM as well as retain the evidence (yes, you actually have to keep the pictures), until the NCMEC tells you you should destroy the data. I’ve begun the process to sign up programming.dev (yes you actually have to register with the government as an ICS/ESP) and receive a login for reports.
If you operate a website and knowingly destroy the evidence without reporting it, you can be jailed. It’s quite strange, and it’s quite a burden on websites. Funnily enough, if you completely ignore your website, so much so that you don’t know that you’re hosting CSAM, then you are completely protected and have no obligation to report (in the USA at least).
Also, that script is likely to get you even more into trouble, because you are knowingly transmitting CSAM to ‘other systems’, like dbzer0’s aihorde cluster. That’s pretty dang bad…
here are some sources:
- https://www.law.cornell.edu/uscode/text/18/2258A
- https://crsreports.congress.gov/product/pdf/LSB/LSB10713
- https://www.missingkids.org/theissues/csam
- https://www.cloudflare.com/service-specific-terms-application-services/#csam-scanning-tool-terms
- https://developers.cloudflare.com/cache/reference/csam-scanning/#what-happens-when-a-match-is-detected
- https://developers.cloudflare.com/cache/reference/csam-scanning/#what-action-should-i-take-when-a-match-is-detected
Note that the script I posted is not transmitting the images to the AI Horde.
Also keep in mind this tool is fully automated and catches a lot of false positives (due to the nature of the scan, it couldn’t be otherwise). So one could argue it’s a generic filtering operation, not explicit knowledge of CSAM hosting. But IANAL of course.
This is unlike Cloudflare or other services, which compare against known CSAM.
EDIT: That is to say, if you use this tool to forward these images to the govt, they are going to come after you for spamming them with garbage.
Cloudflare still has false positives; the NCMEC does not care if they get false positives. If you read some of those links I provided, it wouldn’t be considered a generic filtering operation, from how I’m reading it at least. I wouldn’t take the chance, especially not with running the software on your own hardware in your own house, split from the server.
I think you’re not in the US? So it’s probably different for your jurisdiction. Just want to make it clear that in the US, from what I’ve read, this would be considered against the law. You are running software to filter for CSAM, so you are obligated to report it. Up to 1 year of jail time for not doing so.
One can easily hook this script up to forward to whoever is needed, but I think they might be a bit annoyed after you send them a couple hundred thousand false positives without any CSAM.
Ugh, what a mess. Thought about this for a while today and three thoughts started circulating in my head:
- Hire an actual lawyer and get firm legal advice on this issue. I think this would fall to the admins, not the devs. Maybe an admin who wanted to could volunteer to contact a lawyer? We could do a gofundme for one-time consultation legal fees.
- Stop using pictrs completely and instead use links to a third party such as Imgur or whatever. They’re in this business and I’m sure they have already dealt with it and have a solution. Yes, it sucks that Imgur (or whatever third party) could delete our legitimate images at any time, but IMHO it’s worth it to avoid this headache. At any rate, it offloads the liability from an admin. Of course, IANAL and this is a question we would want to ask a lawyer about.
- Needing a GPU increases the expenses for an admin significantly. It will start to not be worth it for quite a few to keep their instance running.
Thanks for bringing up this point. This is obviously a nuanced issue that is going to need a well-thought-out solution.
Depending on the country, those laws may be different. Here is a story of a guy who ran a TOR exit node in Australia and would have been protected as a company (the law was later changed).
The ridiculous part of it is, as I understand it, if you completely ignore your website and essentially never know that you’re hosting CSAM, then you cannot be held liable for it. But then, someone’s probably literally gonna come hunt you down to tell you in person (FBI) lol. So probably best not to ignore it.
This is extremely cool.
Because of the federated nature of Lemmy, many instances might be scanning the same images. I wonder if there might be some way to pool resources so that, if one instance has already scanned an image, a hash of it can be used to identify it and the whole AI model doesn’t need to be rerun.
There’s still the issue of how you trust the cache, but maybe there’s some way for a trusted entity to maintain this list?
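As a rough sketch of how such pooling could work (the shared cache and whoever maintains it are entirely hypothetical): hash the image bytes and consult the cache of already-scanned results before running the model.

```python
import hashlib

# Hypothetical pooled cache mapping image hash -> "safe" / "flagged",
# filled in by whichever instance scanned the image first.
shared_cache: dict[str, str] = {}

def run_model(image_bytes: bytes) -> str:
    raise NotImplementedError("stand-in for the expensive GPU scan")

def classify(image_bytes: bytes) -> str:
    """Reuse a peer's verdict if this exact image was already scanned."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in shared_cache:
        return shared_cache[digest]
    verdict = run_model(image_bytes)
    shared_cache[digest] = verdict  # publish the result for other instances
    return verdict
```

Note that an exact hash only helps for byte-identical copies; a re-encoded or resized upload would still need a fresh scan (or a perceptual hash instead).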
How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.
Edit: thinking about it some more, a federated image classification system would allow some instances to be more strict than others.
Consensus algorithms. But it means there will always be duplicate work.
No way around that unfortunately
Why? Use something like RAFT: elect a leader, have the leader run the AI tool, then exchange results, with each node running its own subset of image hashes.
That does mean you need a trust system, though.
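As a toy illustration of just the "each node runs its own subset of image hashes" part (leader election and the trust layer are deliberately left out), deterministically splitting the work by hash could look like this:

```python
import hashlib

def responsible_node(image_hash: str, node_count: int) -> int:
    """Deterministically map an image hash to one of the participating nodes."""
    return int(image_hash, 16) % node_count

def my_share(hashes: list[str], my_index: int, node_count: int) -> list[str]:
    """Return only the hashes this node is responsible for scanning."""
    return [h for h in hashes if responsible_node(h, node_count) == my_index]

# Example: three nodes splitting the same set of image hashes without overlap.
all_hashes = [hashlib.sha256(f"image-{i}".encode()).hexdigest() for i in range(10)]
print(my_share(all_hashes, my_index=0, node_count=3))
```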
I’d rather have a text-only instance with no media at all. Can this be done?
Yes, it is definitely possible! Just don’t have pictrs installed/running with the server. Note that it will still be possible to link external images.
My understanding was it’s bad practice to host images on Lemmy instances anyway as it contributes to storage bloat. Instead of coming up with a one-off script solution (albeit a good effort), wouldn’t it make sense to offload the scanning to a third party like imgur or catbox who would already be doing that and just link images into Lemmy? If nothing else wouldn’t that limit liability on the instance admins?
As a test, I ran this on a very early backup of lemm.ee images from when we had very little federation and very few uploads, and unfortunately it is finding a whole bunch of false positives. Just some examples it flagged as CSAM:
- Calvin and Hobbes comic
- The default Lemmy logo
- Some random user’s avatar, which is just a digital drawing of a person’s face
- A Pikachu image
Do you think the parameters of the script should be tuned? I’m happy to test it further on my backup, as I am reasonably certain that it doesn’t contain any actual CSAM
This is normal. You should be worried if it wasn’t catching any false positives, as that would mean a lot of false negatives would slip through. I am planning to add args to make it more or less severe, but it will never be perfect. So long as it’s not catching most images, and of the false positives most are porn or contain children, I consider it worthwhile.
I’ll let you know when the functionality for the severity is updated.
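To make the severity trade-off concrete, here is a sketch of what such an argument could do (the scores and thresholds are made up, not the tool's actual parameters):

```python
def should_delete(csam_probability: float, severity: float = 0.5) -> bool:
    """Higher severity deletes more aggressively (more false positives,
    fewer false negatives); lower severity is the opposite."""
    threshold = 1.0 - severity  # the cutoff moves opposite to severity
    return csam_probability >= threshold

# The same borderline score is kept at low severity and deleted at high severity.
print(should_delete(0.35, severity=0.5))  # False
print(should_delete(0.35, severity=0.8))  # True
```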
How do you even safely test scripts/tools like this 😵💫
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| CF | CloudFlare |
| CSAM | Child Sexual Abuse Material |
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| nginx | Popular HTTP server |
4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.