Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

69 points

It should be mentioned that these are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible given the input they get; they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but that humans do, or at least that the majority of the training dataset’s authors do.

23 points

These models are also trained on data that is fundamentally biased. An English-language text generator like ChatGPT will be on the side of the English-speaking world, because it was our texts that trained it.

If you tried this with Chinese LLMs they would probably come to the conclusion that dropping bombs on the US would result in peace.

How many English sources describe the US as the biggest threat to world peace? Certainly far fewer than the writings about the threats posed by other countries. LLMs will take this into account.

The classic sci-fi fear of robots turning on humanity as a whole seems increasingly implausible. Machines are built by us, molded by us. Surely the real far future will be an autonomous war fought by nationalistic AIs, preserving the prejudices of their long-extinct creators.

8 points

If you tried this with Chinese LLMs they would probably come to the conclusion that dropping bombs on the US would result in peace.

I think even something as simple as asking GPT the same question but in Chinese could get you this response.

12 points

They don’t use reason to question their training data. How an LLM works, basically, is that you have this huge “math function” (the neural network) with billions of parameters, and you randomly adjust the factors inside it until you get a function that gives you the desired output for every prompt you give it. (It’s not completely random, but this is basically it.)

An LLM is therefore tuned so that it best matches the majority of its training data. I also can’t wrap my head around it being able to reason.
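To make that concrete, here is a deliberately tiny sketch of the “adjust parameters until the output matches” idea: a one-parameter “network” fitted by random search. Everything here (the data, the loss, the step size) is made up for illustration; real training uses gradient descent over billions of parameters, but the principle is the same.

```python
import random

# Toy "network": one parameter w; its prediction for input x is w * x.
# Training data: inputs paired with the outputs we want (here, y = 2x).
data = [(1, 2), (2, 4), (3, 6)]

def loss(w):
    # Total squared error of the predictions against the desired outputs.
    return sum((w * x - y) ** 2 for x, y in data)

random.seed(0)
w = 0.0
for _ in range(1000):
    # Randomly nudge the parameter; keep the nudge only if it helps.
    candidate = w + random.uniform(-0.1, 0.1)
    if loss(candidate) < loss(w):
        w = candidate

print(round(w, 2))  # converges near 2.0
```

Scale that idea up to billions of parameters and a loss measured over terabytes of text, and you get (very roughly) the picture above.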

2 points

LLMs are trained to see parts of a document and reproduce the other parts; that’s why they are called “language models”.

For example, they might learn that the words “strawberries are” are often followed by the words “delicious”, “red”, or “fruits”, but never by the words “airplanes”, “bottles” or “are”.

Likewise, they learn to mimic the reasoning contained in their training data. They learn the words and structures involved in an argument, but they also learn the conclusions they should arrive at. If the training dataset consists of 80 documents arguing for something and 20 arguing against it (assuming nothing else differentiates those documents, like length), the LLM will adopt the standpoint of the 80 documents and argue for that thing. If those 80 documents contain flawed logic, so will the LLM’s reasoning.
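The majority-wins point can be sketched with a toy next-word model built from raw counts. The corpus below is invented for illustration, and a real LLM is vastly more sophisticated than counting, but the “most common continuation wins” dynamic is the same:

```python
from collections import Counter

# Tiny made-up corpus: 3 documents say "red", 1 says "blue".
corpus = [
    "strawberries are red",
    "strawberries are red",
    "strawberries are red",
    "strawberries are blue",
]

# Count which word follows each two-word context.
continuations = {}
for doc in corpus:
    words = doc.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        continuations.setdefault(context, Counter())[words[i + 2]] += 1

# The model "argues for" whatever the majority of its training data says.
best = continuations[("strawberries", "are")].most_common(1)[0][0]
print(best)  # "red" — the 3-to-1 majority wins
```

Swap the 3-to-1 split for an 80-to-20 split of argumentative documents and you get exactly the behaviour described: the model adopts the majority standpoint, flawed logic and all.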

Of course, you could train an LLM on a carefully curated selection of only documents without any logical fallacies. Perhaps such a model might be capable of actual logical reasoning (though it would still be biased by the conclusions contained in the training dataset).

But to train an LLM you need vast amounts of data. Filtering out documents containing flawed logic not only requires a lot of effort, it also reduces the size of the training dataset.

Of course, that is exactly what the big companies are currently researching, and I am confident that LLMs will only get better over time. But the LLMs of today are trained on large datasets rather than perfect ones, and their architecture and training prioritize language modelling, not logical reasoning.

2 points

People need to realise that LLMs are not just Markov chains; the math is far more complex than just guessing which word comes next. They have structure where concepts come before word choice, which is why they can very clearly be seen making novel structures such as code.

1 point

It’s actually not that simple, and it is correct that they have several times been observed using what we call reasoning.

33 points
Removed by mod
12 points

You’re confusing a few things. Firstly, you mean current-gen large language models, not AI in general; AI is often used to evolve novel strategies from scratch, without any human training data. Chess AIs don’t have to study human games, for example; in fact, grandmaster chess players have been studying what the AI learned and have discovered things that humans hadn’t realised even after a thousand years of the game’s popularity.

Secondly, that’s not really how LLMs work either. They’re much more mathematically complex, and they very much create their own ideas through a process similar to ours: assembling concepts, then structure, then word choice.

It’s fine that you don’t understand how this works, but the problem is that journalists don’t either, even when they’re writing about it. This puts us in a situation where they make childishly naive but of course clickbait titles, claiming there’s some relevance to the output when the tool is used very wrongly. So you rightly point out that it’s stupid and that’s not how LLMs work, but then we get this overstep where it’s refuted with an equal amount of magical thinking and false conclusions.

An LLM can produce novelty and originality, but it can’t create with intent; it doesn’t use reason or structure. There are AIs that do these things to limited degrees (and of course the NSA one that they spent all that money on and no one is allowed to talk about). Using ChatGPT to play out a silly fantasy won’t tell us anything about how they’ll think, so this article is entirely worthless.

3 points

very much create their own ideas

so it’s the AI’s own idea to create nuclear armageddon? That’s kinda worse.

1 point

No, they do not “create” their own “ideas”. You can relax.

The concept of intelligence is tied to both information generation and information validation. LLMs are extremely fancy smoke and mirrors (much like what pseudo-random algorithms are with respect to entropy), meant to dazzle us, but they are not capable of generating new information, only new combinations of existing information. They are also currently unable to reliably validate said information, which is why they so commonly, hilariously, say trivially, verifiably wrong things with the utmost apparent confidence.

2 points

Akshhuuually

6 points

The world of Go/Baduk might interest you on this topic. If you’re not aware, Go is one of the oldest and most complicated board games in history. In 2016, after years of trying, an AI finally beat the world’s best Go player. In the process, it invented many new strategies (especially openings) that are now being studied; it came up with original ideas that became the future of Go. Now, amateur Go classes teach those same AI-invented joseki (opening patterns). In some cases, these were strategies discarded as mistakes, but the AI discovered hidden value in them. In others, they were simply never considered, having been dismissed as “obviously bad”.

Your last phrase, “when it’s entirely trained to mimic us”, shows a deep misunderstanding of AI. The modern practice of ML (a common name for the supermajority of so-called “AI”) is based around solving problems that are either much harder for computers than for humans (facial recognition, etc.), or unfathomably difficult on their face.

Chess has more possible positions than there are molecules in the universe. Go is more complicated than chess by several orders of magnitude. You can’t even exhaustively solve the 4-4 josekis without context, never mind an entire game of Go. But ML can train itself knowing only the goal, and over millions of iterations invent stronger and stronger strategies. By the time of its first matches against a human, it played at a level that nearly exceeded the best Go player that ever lived.

What I mean is… wargaming (as they call it) is absolutely something I would expect a Deep Learning system to become competent at.
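As a toy sketch of that “knowing only the goal” idea, the snippet below uses tabular self-play learning to teach itself Nim (take 1–3 sticks from a pile; whoever takes the last stick wins). The game, hyperparameters, and update rule are my own illustrative choices, not AlphaGo’s actual method, but the shape is the same: no human games, only the win/loss signal.

```python
import random

random.seed(1)
Q = {}  # Q[(pile, move)] -> learned value of taking `move` from `pile`

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def best_move(pile):
    return max(moves(pile), key=lambda m: Q.get((pile, m), 0.0))

# Self-play: both sides share one value table. A win for the player
# who moved is a loss for the opponent, so returns alternate in sign
# as we walk back through the game (minimax-style credit assignment).
for _ in range(20000):
    pile = random.randint(1, 12)
    history = []
    while pile > 0:
        if random.random() < 0.2:
            m = random.choice(moves(pile))  # explore occasionally
        else:
            m = best_move(pile)             # otherwise play greedily
        history.append((pile, m))
        pile -= m
    reward = 1.0  # the player who took the last stick won
    for state in reversed(history):
        old = Q.get(state, 0.0)
        Q[state] = old + 0.1 * (reward - old)
        reward = -reward

# With no strategy ever shown to it, the table rediscovers the classic
# solution: always leave your opponent a multiple of 4 sticks.
print([best_move(p) for p in (5, 6, 7)])  # expect [1, 2, 3]
```

Replace the table with a neural network and Nim with Go, and this loop is (very roughly) the heart of the self-play systems described above.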

-3 points

Such a dusty take. Every piece of knowledge has already been thought of, obviously, and mixing never comes up with novelty, right? Just a very shallow layman’s take on language models, which have many problems, original ideas notwithstanding.

-1 points

It’s like shuffling a deck of cards: the AI will give us an option which might be only minutely different, but new. Everything we know comes from past knowledge.

28 points

Without humanity, peace is easily achieved.

13 points
  • ChatGPT
25 points

There is a disturbing lack of nice games of chess in these comments

16 points

A strange game. The only winning move is not to play.

2 points

It was Tic Tac Toe I believe

18 points

I hate titles that replace “and” with commas. I always have to do a double take.

