‘It scars you for life’: Workers sue Meta claiming viewing brutal videos caused psychological trauma::More than 20% of the staff Meta hired to screen violent content on Facebook and Instagram are on sick leave due to psychological trauma.

121 points

Couldn’t they hire from watchpeopledie or nothingtoxic or ebaum. Those users probably would do overtime for free.

87 points

People who are completely desensitized to that kind of content probably wouldn’t be very good at moderating it, really.

Also, this is a terrible job, and I’d be very worried about a company paying and enabling people who find it fun. It’s horrible, but trauma is the normal outcome.

44 points

Sounds like the perfect job for AI

40 points

I feel sorry for whichever researchers are in charge of training and fine tuning those models… ouch

14 points

They used AI to flag the images but a human still had to search through them

10 points

I’m usually very wary about what should or shouldn’t be an AI’s job, but you know what, in this very particular case, I think I agree.

At least as a first filter, anyway.

4 points

Huge industries are emerging in this field right now, covering everything from this type of social media moderation to fighting CSAM more effectively, so humans aren’t having to be the frontline for that type of material. This is one area where I can really, really get behind AI and see a valid use case that isn’t just marketing hype like so many others. I know there’s some great stuff happening, based on my own field of employment and being close to a few things in the works this year.

20 points

Honestly, I don’t see an issue with it. If they can tell the difference between an image that should be moderated and one that shouldn’t, they can do the job, and I seriously doubt the vast majority of people desensitized to that kind of content can’t tell the difference. That’s like arguing we shouldn’t make graphic games or movies because people won’t be able to tell them apart from reality. Not everyone can do every job; these people would be the perfect fit for it, and we would spare others from getting hurt.

15 points

Desensitized doesn’t necessarily mean somebody has no reaction to something. It just means they can compartmentalize those reactions, move forward, and deal with the ramifications later.

EMTs, ER doctors, and nurses are largely desensitized to graphic trauma and can press through and get the job done. But that doesn’t mean they don’t process those scenes later in both healthy and unhealthy ways (there are a few studies out there showing ER staff have higher rates of alcoholism and substance abuse than the general public).

Trauma is trauma, whether you’re desensitized or not.

6 points

It would be highly unethical but interesting research to see whether those people experience long-term consequences nevertheless, or whether being desensitized really does give someone immunity.

5 points

Except, you know, we’re talking about people who are progressively desensitized to reality. So no, that’s not comparable at all.

2 points

Exactly. If they couldn’t tell the difference, then how could they know which content to seek out for their own enjoyment? It might not affect them much, if at all, anymore, but they know what ‘it’ looks like.

Can you imagine them watching a cute cat video over and over, wondering why they aren’t getting the rush they feel when watching gore?

I remember in the early days of the internet, I clicked a link on a forum and ended up watching a video of some guy being decapitated. I have never forgotten that image, 20+ years later, and I know I would be checking into a mental hospital if I had the job these Facebook staff have had to do. But there are people who like this sort of stuff, and it’s not because they have forgotten what decapitation looks like.

18 points

You’re not thinking awful enough

2 points

I’ve seen users laugh at horrific gore videos on some forums. I’m not sick, but was curious at one point and googled.

53 points

Fuck that job.

45 points

Secondary trauma is very real. I’ve done freelance work around focus groups on some traumatic topics, and the person I worked with made sure I had proper support and took time to process the disturbing material. I don’t understand why, once a video had obviously violated the guidelines, moderators still had to watch the entire thing if they weren’t going to be given proper psychological support or processing time.

Jobs that involve this type of imagery are traumatic and should be treated as such: extra vacation time and proper psychological support, not just “you’re doing important work” but actual trauma-processing work. But when has a large company like Meta cared about its employees?

7 points

But when has a large company like Meta cared about its employees?

You pretty much nailed it. A company like Meta would never fail to maximize their profit, even if the result is detrimental to employees and/or users. Facebook is demonstrably detrimental to society in general, yet they don’t care - gotta profit more and more for the stockholders, consequences be damned.

41 points

I’ve heard this is also the case for civilians and officers working for police departments who are responsible for handling evidence related to child abuse. Takes a lot of psychological support, which I’m not sure can ever be enough.

21 points

Not every job is something humans are cut out for. This is a job AI should be taking off our plates.

11 points

But then who will give therapy to the AI?

9 points

That’s the beauty of it. Each new instance of the AI has no prior memories, so the sorry devil gets to relive its first day on the job forever.

4 points

We get a dedicated AI to monitor the wellbeing of the first AI. The moment it does something unexpected, we pull the plug.

Though we don’t know what effect that may have on AI 2, so we should probably get a third AI to…

12 points

Many places like that have protocols for how long your shifts can be and how long you can do it, with constant psych support while you do it in order to reduce and mitigate the impact of the material. Meta may have been pushing their workers too hard and cutting corners.

17 points

How much violent content is even out there that you couldn’t trivially block by collecting a list of content IDs? How much of it would you really need to watch in full to pass judgement?

I seriously don’t get why this is a problem in the first place. Every tiny nip-slip gets you instantly blocked on Facebook and Instagram; they always default to “block” without any closer inspection. They are content moderators, after all, not criminal investigators, so there shouldn’t be a need to watch every detail. So why are they watching enough violent videos to cause trauma, instead of just hitting the block button or letting the computer do the work?

26 points

Both lawyers agree that Meta’s policy of forcing employees to watch the entire video in order to explain all the reasons for censorship aggravates the trauma.

It’s in the article.

3 points

It’s trivial to circumvent automatic detection

3 points

The EU now has a rule that all reports of content must be checked and verified for illegal content like misinformation. They can’t automatically block that content because then people would weaponise reports. At best they can automatically block video and image hashes which have been previously verified as illegal, but these are trivial to circumvent. I think they’ve started using perceptual hashes but these are far from perfect.

I believe they use similar moderation for the US to proactively head off potentially similar legislation to the EU.

Something like 3 billion people actively use Facebook each month. There must be tens of millions of daily reports. I can only imagine the level of planning, staffing, and tools which are required to facilitate that.
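The hash-matching idea mentioned above can be sketched in a few lines. This is a toy average hash (aHash), one of the simplest perceptual-hash schemes; production systems (PhotoDNA, Meta’s open-source PDQ) are far more robust, and all the names and values here are purely illustrative, not anyone’s actual pipeline.

```python
# Toy perceptual (average) hash: illustrative only, assuming the image has
# already been downscaled to an 8x8 grayscale grid of 0-255 values.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grid: each bit is 1 if the
    corresponding pixel is brighter than the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(h1 ^ h2).count("1")

# A previously verified image and a slightly re-encoded copy hash identically
# here, because a uniform brightness shift doesn't change which pixels sit
# above the mean.
original = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 3) for p in row] for row in original]  # mild brightness shift
print(hamming_distance(average_hash(original), average_hash(tweaked)))  # prints 0
```

That robustness to mild re-encoding is exactly what a cryptographic hash lacks and why perceptual hashes are used for this, though, as the comment notes, a deliberate crop or overlay can still push the distance past any matching threshold.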

1 point

They need an AI to curate that kind of content then.

2 points

AI is far from perfect, and is unlikely to satisfy the DSA requirements.


Technology

!technology@lemmy.world
