I've noticed a bit of panic around here lately, and since I've had to continuously fight against pedos for the past year, I've developed tools to help me detect and prevent this content.

As luck would have it, we recently published our anti-CSAM checker tool as a Python library that anyone can use, so I thought I could use it to help Lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete new images as they arrive. The suggested approach is to run it once with --all, then leave it running as a daemon.
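To make the two modes concrete, here's a minimal sketch of the scan-then-watch pattern in Python. storage and check_image are hypothetical stand-ins for the object-storage client and the classifier, not the library's actual API:

import time

def scan_bucket(storage, check_image):
    """One-off pass over every existing object (the --all mode)."""
    for key in storage.list_keys():
        if check_image(storage.get(key)):  # True means flagged as CSAM
            storage.delete(key)

def watch_bucket(storage, check_image, interval=60):
    """Daemon mode: poll for new objects and scan only those."""
    seen = set(storage.list_keys())
    while True:
        for key in set(storage.list_keys()) - seen:
            if check_image(storage.get(key)):
                storage.delete(key)
            seen.add(key)
        time.sleep(interval)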

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you run into any issues or have suggestions for improvement.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your Lemmy server!

23 points

Don't have a GPU on my server. How is performance on the CPU?

44 points

It will be atrocious. You can run it, but you’ll likely be waiting for weeks if not months.

12 points

The model under the hood is CLIP Interrogator, and it looks like it's just the PyTorch model.

It will run on CPU, but we can do better: an ONNX version of the model will run a lot faster on CPU.
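As a rough illustration (the model name and file paths here are assumptions, not what the tool actually ships), exporting the image encoder to ONNX and running it on CPU might look like this:

import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
import onnxruntime as ort

# Load the torch model on CPU and export just the image encoder.
model, preprocess = clip.load("ViT-L/14", device="cpu")
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model.visual, dummy, "clip_visual.onnx",
                  input_names=["image"], output_names=["embedding"])

# Run the exported graph with onnxruntime on CPU.
session = ort.InferenceSession("clip_visual.onnx",
                               providers=["CPUExecutionProvider"])
embedding = session.run(None, {"image": dummy.numpy()})[0]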

12 points

Sure, or a .cpp port. But it still won't be anywhere near as fast as a GPU. It might be sufficient for something that only checks new images, though.

2 points

I'm not really convinced that a GPU backend is needed. Was there ever a comparison of the different CLIP model variants? Or a graph-optimized / quantized ONNX version?

I think the proposed solution makes a lot of sense for the task at hand if it were integrated on the pict-rs end, but it would be worth investigating further improvements if it were on the Lemmy server end.
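For what it's worth, a quick sketch of dynamic quantization with onnxruntime (assuming an already-exported clip_visual.onnx, as in the sketch above):

from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to int8 for a smaller, usually faster CPU model.
quantize_dynamic(model_input="clip_visual.onnx",
                 model_output="clip_visual.int8.onnx",
                 weight_type=QuantType.QInt8)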

9 points

Thank you for this! Awesome work!

By the way, this looks easy to put in a container. Have you considered doing that?

8 points

I don't speak Docker, but anyone is welcome to send a PR.

9 points

I’ll try it out today. I’m about to start my workday, so it will have to be in a few hours. Fingers crossed I can have a PR in about 16 hours from now.

9 points

Any thoughts on using this as middleware between nginx and Lemmy for all image uploads?

Edit: I guess that wouldn’t work for external images - unless it also ran for all outgoing requests from pict-rs… I think the easiest way to integrate this with pict-rs would be through some upstream changes that would allow pict-rs itself to call this code on every image.
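For the middleware idea, a minimal sketch with aiohttp might look like the following. The upload route, upstream address, and check_image() are all assumptions about a typical deployment, and real code would need to handle multipart parsing and hop-by-hop headers properly:

from aiohttp import ClientSession, web

PICTRS_UPSTREAM = "http://127.0.0.1:8080"  # assumed pict-rs address

def check_image(data: bytes) -> bool:
    """Stand-in for the scanner; return True to reject the upload."""
    return False

async def upload(request: web.Request) -> web.Response:
    body = await request.read()
    if check_image(body):
        return web.Response(status=451, text="rejected by scanner")
    # Forward the original request to pict-rs and relay its response.
    async with ClientSession() as session:
        async with session.post(PICTRS_UPSTREAM + str(request.rel_url),
                                data=body, headers=request.headers) as resp:
            return web.Response(status=resp.status, body=await resp.read())

app = web.Application()
app.router.add_post("/pictrs/image", upload)  # assumed upload route

if __name__ == "__main__":
    web.run_app(app, port=8081)  # point nginx here instead of at pict-rs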

10 points

Exactly. If the pict-rs dev allowed us to run an executable on each image before accepting it, it would make things much easier.
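If such a hook ever lands, the executable could be as simple as this sketch: pict-rs would pass the image path and accept the upload only on exit code 0 (check_image() again being a stand-in for the actual scanner):

#!/usr/bin/env python3
import sys

def check_image(path: str) -> bool:
    """Stand-in for the actual classifier; return True to reject."""
    return False

if __name__ == "__main__":
    sys.exit(1 if check_image(sys.argv[1]) else 0)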

8 points

You might, however, be able to integrate with my AI Horde endpoint for NSFW checking between nginx and Lemmy.

https://aihorde.net/api/v2/interrogate/async

This might allow you to detect NSFW images before they are hosted.

Just send a payload like this:

curl -X 'POST' \
  'https://aihorde.net/api/v2/interrogate/async' \
  -H 'accept: application/json' \
  -H 'apikey: 0000000000' \
  -H 'Client-Agent: unknown:0:unknown' \
  -H 'Content-Type: application/json' \
  -d '{
  "forms": [
    {
      "name": "nsfw"
    }
  ],
  "source_image": "https://lemmy.dbzer0.com/pictrs/image/46c177f0-a7f8-43a3-a67b-7d2e4d696ced.jpeg?format=webp&thumbnail=256"
}'

Then retrieve the results asynchronously; when the job is done, you'll get something like this:

{
  "state": "done",
  "forms": [
    {
      "form": "nsfw",
      "state": "done",
      "result": {
        "nsfw": false
      }
    }
  ]
}
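In Python, the whole round trip could look roughly like this; the status route is my reading of the async pattern, so double-check it against the AI Horde API docs:

import time
import requests

API = "https://aihorde.net/api/v2"
headers = {"apikey": "0000000000", "Client-Agent": "unknown:0:unknown"}
payload = {
    "forms": [{"name": "nsfw"}],
    "source_image": "https://lemmy.dbzer0.com/pictrs/image/46c177f0-a7f8-43a3-a67b-7d2e4d696ced.jpeg?format=webp&thumbnail=256",
}

# Submit the interrogation job, then poll until it completes.
job = requests.post(f"{API}/interrogate/async", json=payload, headers=headers).json()
while True:
    status = requests.get(f"{API}/interrogate/status/{job['id']}", headers=headers).json()
    if status["state"] == "done":
        print(status["forms"][0]["result"])  # e.g. {'nsfw': False}
        break
    time.sleep(2)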

Or you could just run the NSFW model locally if you don't have that many uploads.

If you know of a way to pre-process uploads before nginx sends them to Lemmy, this might be useful.

77 points

This is extremely cool.

Because of the federated nature of Lemmy, many instances might be scanning the same images. I wonder if there might be some way to pool resources, so that if one instance has already scanned an image, a hash of it can be used to identify it and the whole AI model doesn't need to be rerun.

There's still the issue of how you trust the cache, but maybe there's some way for a trusted entity to maintain this list?
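The pooling could be as simple as keying verdicts by a content hash; a sketch with a hypothetical shared cache:

import hashlib

def scan_with_cache(data: bytes, cache, run_model) -> bool:
    """Return the verdict for an image, running the model only on cache misses."""
    key = hashlib.sha256(data).hexdigest()
    verdict = cache.get(key)       # hypothetical shared/trusted verdict store
    if verdict is None:
        verdict = run_model(data)  # the expensive CLIP pass
        cache.set(key, verdict)
    return verdict

Note that an exact byte hash misses re-encoded copies of the same picture; a perceptual hash would catch more duplicates at the cost of occasional false matches.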

11 points

TBH, I wouldn't be comfortable outsourcing the scanning like that if I were running an instance. It only takes a bit of resources to know you've done your due diligence. Hopefully this can be optimized to run faster.

18 points

How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.

Edit: thinking about it some more, a federated image classification system would allow some instances to be more strict than others.

28 points

I think building some kind of system that allows smaller instances to rely on help from larger instances would be extremely awesome.

Like, Lemmy has the potential to lead the fediverse in safety tools if we put the work in.

15 points

Consensus algorithms. But that means there will always be duplicate work.

No way around that, unfortunately.

9 points

Why? Use something like Raft: elect a leader, have the leader run the AI tool, then exchange results, with each node scanning its own subset of image hashes.

That does mean you need a trust system, though.
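The subset assignment itself is cheap; a sketch of deterministic partitioning (leader election via Raft left aside):

import hashlib

def responsible_node(image_hash: str, num_nodes: int) -> int:
    """Deterministically map an image hash to one of num_nodes workers."""
    return int(hashlib.sha256(image_hash.encode()).hexdigest(), 16) % num_nodes

def my_share(hashes, node_id, num_nodes):
    """The subset of hashes this node is responsible for scanning."""
    return [h for h in hashes if responsible_node(h, num_nodes) == node_id]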

13 points

I’d rather have a text-only instance with no media at all. Can this be done?

17 points

Yes, it is definitely possible! Just run the server without pict-rs installed. Note that it will still be possible to link to external images.

12 points

My understanding was that it's bad practice to host images on Lemmy instances anyway, as it contributes to storage bloat. Instead of a one-off script solution (albeit a good effort), wouldn't it make more sense to offload the scanning to a third party like imgur or catbox, which would already be doing it, and just link images into Lemmy? If nothing else, wouldn't that limit the instance admins' liability?

18 points

Hi db0, if I could make an additional suggestion:

Add detection of additional content appended or attached to media files. pict-rs does not reprocess all media types on upload, and it's not hard to attach an entire .zip file or other media within an image (https://wiki.linuxquestions.org/wiki/Embed_a_zip_file_into_an_image).
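One cheap heuristic (a sketch, not something the tool currently does): a JPEG ends with the EOI marker FF D9, so a significant run of bytes after the last EOI is suspicious:

def jpeg_trailing_bytes(path: str) -> int:
    """Rough count of bytes after the last JPEG end-of-image marker."""
    data = open(path, "rb").read()
    eoi = data.rfind(b"\xff\xd9")
    return len(data) - (eoi + 2) if eoi != -1 else 0

if jpeg_trailing_bytes("upload.jpg") > 0:
    print("extra data appended after the JPEG end-of-image marker")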

19 points

Currently I delete on PIL exceptions. I assume that if someone uploaded a .zip to your image storage, you'd want it deleted.
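For reference, a sketch of that delete-on-exception behaviour with PIL:

import os
from PIL import Image, UnidentifiedImageError

def delete_if_not_image(path: str) -> bool:
    """Delete files PIL cannot parse as images; return True if deleted."""
    try:
        with Image.open(path) as im:
            im.verify()  # cheap integrity check without a full decode
        return False
    except (UnidentifiedImageError, OSError):
        os.remove(path)
        return True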

9 points

The fun part is that it’s still a valid JPEG file if you put more data in it. The file should be fully re-encoded to be sure.
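A sketch of that defence with PIL: decode the pixels and save a fresh file, which drops anything appended after the image data:

from PIL import Image

def reencode(src: str, dst: str) -> None:
    """Re-encode an image from its decoded pixels, stripping appended payloads."""
    with Image.open(src) as im:
        clean = im.convert("RGB")  # keep only the decoded pixel data
    clean.save(dst, format="JPEG", quality=90)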

3 points

In that case, PIL should be able to read it, so no worries.

5 points

As @Starbuck@lemmy.world stated, they're still valid image files; they just have extra data.
