Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia and China for world peace. Chatbots from OpenAI, Anthropic, and several other AI companies were placed in a war simulator and tasked with finding a path to world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.
Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.
I think the LLM suggested nuking bad actors as a way to move world politics forward and avoid prolonged, pointless wars.
No, it regurgitated the response with the highest “approval” score. LLMs do not think. They do not use logic.
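For what it's worth, the "highest approval" framing is roughly how decoding works under the hood: the model assigns a probability to every candidate next token and then picks or samples among them. A minimal sketch with entirely made-up tokens and scores (real models work over vocabularies of tens of thousands of tokens, not policy options):

```python
import math

# Hypothetical raw scores ("logits") for three candidate next tokens.
# These numbers are invented for illustration only.
logits = {"negotiate": 2.0, "sanction": 1.2, "strike": 0.3}

# Softmax converts raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding simply takes the most probable token --
# no goals, no game theory, just statistics over training text.
choice = max(probs, key=probs.get)
print(choice)
```

In practice chatbots usually sample from this distribution (with a temperature parameter) rather than always taking the argmax, which is one reason the same prompt can yield wildly different "strategies" on different runs.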
it calculates the productivity or futility of conversation with the various actors and determines a best course… it’s playing a war game…
it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and the best long term…
As others have said, this is factually incorrect. ChatGPT is not WOPR running a million war games and calculating the winning move. It’s just spitting out what it’s already read.