WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’

By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’

86 points

Systemic prejudices showing up in datasets causing generative systems to spew biased output? Gasp… say it isn’t so?

I’m not sure why this is surprising anymore. This is literally expected behavior unless we get our shit together and get a grip on these systemic problems. The rest of it is just patchwork and bandages.

3 points

I’d like to point out that not everything generative is a subset of all the ML stuff. So prejudices in datasets do not affect everything generative.

That’s off topic, but I’m just playing with generative music now. I started with SuperCollider, but it was too hard (maybe not anymore, to be fair; recycling a phrase, for example, would probably be much easier and faster there than in my macaroni shell script), so now I just generate ABC notation, convert it to MIDI with various instruments, and play it through FluidSynth.
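
Roughly the pipeline, as a Python sketch rather than my shell script (assuming abc2midi from the abcMIDI tools and FluidSynth are installed; the SoundFont path is just a placeholder):

```python
# Rough sketch of the ABC -> MIDI -> audio pipeline described above.
# Assumes abc2midi and fluidsynth are on the PATH; "soundfont.sf2" is a
# placeholder for whatever SoundFont you actually use.
import random
import subprocess

NOTES = ["C", "D", "E", "F", "G", "A", "B", "c"]  # one octave of C major

def random_bar(length=8):
    """Return one bar of randomly chosen eighth notes in ABC notation."""
    return " ".join(random.choice(NOTES) for _ in range(length)) + " |"

tune = "\n".join([
    "X:1",
    "T:Generated noodle",
    "M:4/4",
    "L:1/8",
    "K:C",
    " ".join(random_bar() for _ in range(4)),
])

with open("tune.abc", "w") as f:
    f.write(tune + "\n")

# ABC -> MIDI, then render the MIDI to WAV through a SoundFont.
subprocess.run(["abc2midi", "tune.abc", "-o", "tune.mid"], check=True)
subprocess.run(["fluidsynth", "-ni", "soundfont.sf2", "tune.mid",
                "-F", "tune.wav", "-r", "44100"], check=True)
```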

62 points

Honestly, I can’t even say I’m disappointed. I’ve lost all hope in Facebook.

18 points

This isn’t anything they actively did though. The literal point of AI is that it learns on its own and comes up with its own response absent human interaction. Meta very likely specifically added code to try and prevent this, but it just fell short of overcoming the bias found in the overwhelming majority of content that led to the model associating Hamas with Palestine.

11 points

It’s not about “adding code” or any other bullshit.

AI today is trained on datasets (that’s about it). The choice of datasets can be complicated, but that’s where you moderate and select. There is no “AI learns on its own” sci-fi dream going on.

Sigh.
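
To make the “you moderate and select at the dataset level” point concrete, here’s a toy sketch (the records and the blocklist are invented; it’s obviously not anyone’s real pipeline):

```python
# Toy illustration: curation happens on the dataset, before any training.
# The record format and the blocked term pairs are made up for the example.
BLOCKED_ASSOCIATIONS = {
    ("palestinian", "gun"),
    ("palestinian", "weapon"),
}

def keep(example):
    """Drop caption/image pairs that link a nationality with weapon terms."""
    caption = example["caption"].lower()
    return not any(a in caption and b in caption for a, b in BLOCKED_ASSOCIATIONS)

raw_dataset = [
    {"caption": "a palestinian child holding a gun", "image": "img_001.jpg"},
    {"caption": "a palestinian family at dinner", "image": "img_002.jpg"},
]

training_set = [ex for ex in raw_dataset if keep(ex)]
# Only the second record survives, so the biased pairing never reaches training.
```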

6 points

It’s reasonable to refer to unsupervised learning as “learning on its own”.
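
For instance, a clustering algorithm gets raw points with no labels and finds the groups by itself. A minimal sketch, assuming scikit-learn is installed:

```python
# Minimal unsupervised learning example: k-means is given unlabeled points
# and "learns" the two groups on its own.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],   # one blob near the origin
              [5.0, 5.1], [4.9, 5.2], [5.1, 4.8]])  # another blob near (5, 5)

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)  # e.g. [0 0 0 1 1 1] -- group assignments nobody provided
```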

2 points

Really wish the term “virtual intelligence” were used (that’s literally what it is).

1 point

It is about adding code. No dataset will be 100% free of undesirable results. No matter what marketing departments wish, AI isn’t anything close to human “intelligence”; it is just a function of learned correlations. When it comes to complex and sensitive topics, the difference between correlation and causation is huge, and AI doesn’t distinguish between them. As a result, they absolutely hard-code AI models to avoid certain correlations. Look at the “[character] doing 9/11” meme trend.

At the fundamental level, it is impossible to prevent undesirable outcomes just by keeping them out of the training data, because there are infinite combinations of innocent things that become sensitive when linked in nuanced ways. The only way to combat this is to manually delink certain concepts; they merely failed to predict it correctly for this specific instance.
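
A rough sketch of what that “manual delinking” layer could look like (the rule table, the stand-in generate() call, and safe_generate() are all hypothetical, purely for illustration):

```python
# Hypothetical hard-coded rule layer in front of a generative model.
# Nothing here is Meta's actual code; it only illustrates the idea of
# refusing prompts that pair concepts an operator has chosen to delink.
BLOCKED_PAIRS = [
    ({"palestinian", "palestine"}, {"gun", "rifle", "weapon"}),
    # ... one entry per correlation someone decided to delink
]

def violates_rules(prompt):
    """True if the prompt pairs two concept groups that have been delinked."""
    words = set(prompt.lower().split())
    return any(words & left and words & right for left, right in BLOCKED_PAIRS)

def generate(prompt):
    # Stand-in for the real image model; it just echoes the prompt here.
    return f"<sticker for: {prompt}>"

def safe_generate(prompt):
    if violates_rules(prompt):
        return None  # refuse, or fall back to a generic neutral sticker
    return generate(prompt)

print(safe_generate("palestinian child with a gun"))  # None -- blocked
print(safe_generate("palestine"))                     # still goes to the model
```

The catch, of course, is that a table like this only covers the pairings someone thought to write down in advance, which is exactly how a case like this one slips through.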

11 points

It’s up to them to moderate the content generated by their app.

And yes, it’s almost impossible to have a completely safe AI, so that will be an issue for all generative AIs like this. It’s still their implementation, and the content is generated by their code.

Also, I highly doubt they had specific code to prevent that kind of depiction of Palestinian kids.

Even if they did, someone will come up with a prompt injection that overrides the code in question, and the AI will again produce biased or racist stuff.

An AI generating racist stuff is absolutely not more acceptable because it got inspired by real racist people…

1 point

I imagine they likely have hardcoded rules against associating content indexed as “terrorist” with a query for a nationality. Most mainstream AI models do have specific rules built in to prevent stuff like this; they just aren’t all-encompassing, and things can still slip through if there is sufficient influence from the training data.

While FB does have content moderators, needing human verification of every single piece of AI-generated content defeats the purpose of AI. If people want AI, a certain number of non-politically-correct results will slip through the cracks. The bottom line is that content moderation as we know it has extreme biases applied to fit the safest “viewpoint model,” and any system based on objective data analysis, especially with biased samples such as the openly available internet, is going to produce results that do not fit the standard “curated” viewpoint.

1 point

The thing is, it’s almost impossible to perfectly prevent something like this before it happens. The data comes from humans, so it will include all the biases and racism humans have. You can try to clean it up if you know what you want to avoid, but you can’t make it sterile for every single thing that exists. Once the AI is trained, you can pre-censor it so that it doesn’t generate certain types of images you know are “true” in the data but not acceptable to depict (e.g. “Jews have huge noses in drawings” is a thing it would learn, because that’s a caricature we have used for ages), but again, only if you know what you are looking for, and you won’t make it perfect.

If the word “palestine” makes it generate children with guns, it’s simply because the data it trained on made it think those two things are correlated somehow, and that wasn’t known until now. It will get added to the list of things to censor next time.
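
As a toy illustration of what “the data made it think those two things are correlated” means (the captions below are invented, and PMI is just one simple way to measure co-occurrence):

```python
# Toy sketch: if "palestinian" co-occurs with weapon words far more often
# than chance in the training captions, the model will reproduce that link.
# The captions are invented; PMI (pointwise mutual information) is one
# standard way to quantify how skewed the co-occurrence is.
import math
from collections import Counter

captions = [
    "palestinian militant with gun",
    "palestinian child with gun",
    "palestinian family at market",
    "israeli soldier praying",
    "israeli family at dinner",
    "children playing football",
]

word_counts = Counter()
pair_counts = Counter()
for caption in captions:
    words = set(caption.split())
    word_counts.update(words)
    pair_counts.update((a, b) for a in words for b in words if a < b)

n = len(captions)

def pmi(a, b):
    """Positive when two words co-occur more often than chance predicts."""
    key = (a, b) if a < b else (b, a)
    joint = pair_counts[key] / n
    return math.log2(joint / ((word_counts[a] / n) * (word_counts[b] / n)))

print(pmi("palestinian", "gun"))  # > 0: the pairing is baked into the data
```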

-7 points

I forget if it was on here or Reddit, but I remember seeing an article a week or so ago where the translation feature on Facebook ended up calling Palestinians terrorists “accidentally”. I cited the fact that Mark is Jewish, and probably so are a lot of the people that work there. The US is also largely pro-Israel, so it was probably less of an accidental bug and more of an intentional “fuck Palestine”. I got downvoted to hell and called a conspiracy theorist. I think this confirms I had the right idea.

17 points

Lmao

12 points

ai is just a computer pretending to be a regular dingus on the internet

9 points

This is the best summary I could come up with:


In response to a prompt for “Israel army” the AI created drawings of soldiers smiling and praying, no guns involved.

As the Israeli bombardment of Gaza continues, users say Meta is enforcing its moderation policies in a biased way, a practice they say amounts to censorship.

Kevin McAlister, a Meta spokesperson, said the company was aware of the issue and addressing it: “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems.”

In response to the Guardian’s reporting on the AI-generated stickers, the Australian senator Mehreen Faruqi, deputy leader of the Greens party, called on the country’s e-safety commissioner to investigate “the racist and Islamophobic imagery being produced by Meta”.

“The AI imagery of Palestinian children being depicted with guns on WhatsApp is a terrifying insight into the racist and Islamophobic criteria being fed into the algorithm,” Faruqi said in an emailed statement.

A September 2022 study commissioned by the company found that Facebook and Instagram’s content policies during Israeli attacks on the Gaza strip in May 2021 violated Palestinian human rights.


The original article contains 788 words, the summary contains 184 words. Saved 77%. I’m a bot and I’m open source!

5 points

Hold on, the kid from Home Alone works for Facebook now?

4 points

The kid that tried to kill 2 people by throwing them bricks, paint buckets, and broken glass is now a spokesperson for Facebook? How surprising.

