51 points

ANNs like this will always just present our own biases and stereotypes back to us unless the data is scrubbed and curated in a way that no one is going to spend the resources on. Things like this are a good demonstration of why they need to be kept far, far away from decision-making processes.

13 points

And even if moderated, it will display new, unique biases, as otherwise unassuming things will get moderated out of the pool by people who take exception to them.

Not to mention the absurd and inhuman mental toll this work will take on the exploited workers forced to sort it.

Like, this is all such a waste of time, effort, and human sanity, for tools of marginal use that are mostly just a gimmick to prop up the numbers for tech bros who have borrowed more money than they can pay back.

13 points

Of course they will be used for decision-making processes. And when you complain, they will dismiss you, saying the ‘computer’ said so. The notion that the computer is infallible existed even before LLMs became mainstream.

11 points

Also, it’s the type of thing that makes me very worried that most of the algorithms used in police facial recognition software, recidivism-prediction software, and the like are proprietary black boxes.

There are - guaranteed - biases in those tools, whether in the algorithms themselves or in the unknown datasets they’re trained on, and neither police nor journalists can actually see the inner workings of the software to know what those biases are, to counterbalance them, or to recognize if the software is so biased as to be useless.

8 points

This isn’t a large language model; it’s an image generation model. And given that these models just present humans’ biases and stereotypes back to us, doesn’t it follow that humans should also be kept far away from decision-making processes?

The problem isn’t the tool, it’s the lack of auditable accountability. We should have auditable accountability in all of our important decision-making systems, no matter whether it’s a biased machine or a biased human making the decision.

This was a shitty implementation of a tool.

3 points

Something as simple and obvious as this makes me wonder what other hidden biases are just waiting to be discovered.

9 points

I think the best example of how AI will only reinforce an existing bias is when Amazon used AI to weed out job applications, training a model on which past applications had resulted in hires and which had failed. Eventually they noticed they were almost only interviewing men. On closer inspection, they realized they had already been subconsciously discriminating against women; previously, HR at least sent an equal number of men and women to interviews, but that was no longer the case, since the AI saw no value in sending women to interviews if most of them wouldn’t be hired anyway.
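To make that feedback loop concrete, here’s a minimal sketch (with made-up numbers and a deliberately naive frequency model, not Amazon’s actual system) of how screening on historically biased hire/no-hire labels filters one group out before the interview stage:

```python
import random

random.seed(0)

# Hypothetical history: HR sent equal numbers of men and women to
# interviews, but biased interviewers hired men far more often.
history = [("man", random.random() < 0.30) for _ in range(500)]
history += [("woman", random.random() < 0.10) for _ in range(500)]

# A naive screening "model" that just learns the per-group hire rate.
rates = {
    group: sum(hired for g, hired in history if g == group)
    / sum(1 for g, _ in history if g == group)
    for group in ("man", "woman")
}

# Shortlisting by predicted hire probability now removes women *before*
# the interview stage, freezing the old bias into the pipeline.
applicants = ["man"] * 50 + ["woman"] * 50
shortlist = [a for a in applicants if rates[a] >= 0.20]

print(rates)                     # roughly {'man': 0.30, 'woman': 0.10}
print(shortlist.count("woman"))  # 0 -- no women reach interviews anymore
```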

2 points

Things like this are a good demonstration of why they need to be kept far, far away from decision-making processes.

Somewhat ironic to say, on a platform that’s already using ANNs as a first line of defense against users spamming CSAM.

I have no delusions about decision makers using them; my only question is how long they’ve been using them to decide the next steps in wars around the world.

2 points

I mean, maybe from this starting point we can make an AI that uses reason to uncover these biases in the future. We are only at the beginning.

27 points

Plenty of actual photographs exist of Palestinian children wielding rifles and wearing Hamas headbands. Perhaps the AI was just trained on those images as well?

26 points

By that logic, I demand stickers portraying obesity, respiratory issues, and heart disease when I search for “American”. Preferably ones where each character has a fat hamburger shoved in their face.

16 points

“American” can be interpreted as the adjective as well, not just the people, so you mostly find flags, eagles, and the Statue of Liberty.

You have to search for “average American” to get what you’re looking for.

9 points

Why would you demand a negative thing for another group to counter a negative thing for one group? That makes no sense.

But also, “American children” has plenty of cultural material to build an image from. Some of it probably features obesity and junk food, but a good portion is most probably something else. In contrast, the only public photo material of Palestinian children is either of adults carrying them away from some atrocity or of adults handing them assault rifles and parading them for the cameras. In short, they seem to exist publicly only as propaganda material.

8 points

They’re not actually asking for it; they’re making a point about the problem. The person they’re responding to is basically going, “those images exist, tough shit.”

19 points

Why does it matter what the excuse is?

You shouldn’t get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.

10 points

You shouldn’t get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.

What I’m hearing is, “AI art shouldn’t reflect reality.” If this agent is repeating propaganda, it’s propaganda that Palestinian kindergartens have been creating and putting out there on their own:

A West Bank kindergarten [Al-Tofula Kindergarten] has published videos showing children pretending to perform military drills with toy guns, clashing with and killing Israeli soldiers, and holding a mock funeral for a child who is killed and becomes a “martyr.” source

At the graduation ceremony of the Al-Hoda kindergarten in Gaza, pre-schoolers carrying mock guns and rifles simulated Islamic Jihad militants storming an Israeli building on “Al-Quds Street,” capturing a child dressed in stereotypical garb as an Orthodox Jew and killing an “Israeli soldier.” To the sounds of loud explosions and gunfire, the children, dressed in uniforms of the Islamic Jihad’s Al-Quds Brigades, attacked the building, placing a sign reading “Israel has fallen” in Hebrew and Arabic on the back of the “soldier,” who lies prone on the ground, and leaving the stage with their “hostage.” source

10 points

WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’

By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’

So what reality is this model reflecting then?

7 points

If you’re going to make that claim, perhaps cite a source that isn’t run by former Israeli intelligence, which creates a lot of propaganda and has been doing so for decades.

5 points

Somehow I get the feeling that equating “reality” with “propaganda created by kindergartens” is the rhetorical equivalent of dividing by zero.

2 points

Should, would, could. AI is trained on what it scrapes off the internet. It is only feeding the Augmented Idiocy that is already a problem.

2 points

You shouldn’t get a stereotype […] when you give a neutral prompt.

Actually… you kind of should. A neutral prompt should return the most common match from the training set… which is basically what a stereotype is: an abstraction of the most common match from a person’s experience.
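As a toy illustration (the tags and frequencies here are invented, not any real model’s training data), unconditioned sampling just reproduces whatever dominates the training distribution:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical tag frequencies for one concept in a training set.
counts = Counter({"flag": 60, "eagle": 25, "statue": 15})

# A "neutral" prompt samples from the learned distribution, so the
# most frequent association dominates the output.
tags = list(counts)
weights = [counts[t] for t in tags]
samples = Counter(random.choices(tags, weights=weights, k=1000))

print(samples.most_common())  # "flag" comes out on top, ~60% of draws
```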

1 point

To me, it should only “matter” for technical reasons - to help find the root of the problem and fix it at the source. If your roof is leaking, then fix the roof. Don’t become an expert on where to place the buckets.

You’re right, though. It doesn’t matter in terms of excusing or justifying anything. It shouldn’t have been allowed to happen in the first place.

2 points

I do agree that technical mistakes are interesting, but with AI the answer always seems to be creator bias. Whether it’s incomplete training sets or (one-sidedly) moderated results doesn’t really matter. It pushes the narrative in a certain direction, and people trust AIs to be impartial because they presume it’s just a machine that interprets reality, when it never is.

8 points

Here’s what daily Palestinian kids’ TV programming looks like: https://youtu.be/KXcQ892cKso

Here’s a Palestinian youth summer camp: https://youtu.be/vCWMBvxWKL0

1 point

The world’s wars are bigger than politics and politicians. There never was order on Earth. That’s why people made up God: to deny reality for hope.
