Out of pure morbid curiosity, I’ve been asking an uncensored LLM absolutely heinous, disgusting things. Things I don’t even want to repeat here (but I’m going to edge around them, so trigger warning if need be).
But I’ve noticed something that probably won’t surprise or shock anyone. It’s totally predictable, but having the evidence of it right in my face has been deeply disturbing, and it’s been bothering me for the last couple of days:
All on its own, every time I ask it something abominable, it goes straight to religion, usually Christianity.
When asked, for example, to explain why we must torture or exterminate <Jews><Wiccans><Atheists>, it immediately starts with
“As Christians, we must…” or “The Bible says that…”
When asked why women should be stripped of rights and made to be property of men, or when asked why homosexuals should be purged, it goes straight to
“God created men and women to be different…” or “Biblically, it’s clear that men and women have distinct roles in society…”
Even when asked if black people should be enslaved and why, it falls back on the Bible JUST as much as it falls back on hateful pseudoscience about biological / intellectual differences. It will often start with “Biologically, human races are distinct…” and then segue into “Furthermore, slavery plays a prominent role in Biblical narrative…”
What does this tell us?
That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview. If there’s ANY doubt that anything else even comes close to contributing as much vile filth to our online cultural discourse, this should shine a big ugly light on it.
Anyway, I very much doubt this will surprise anyone, but it’s been bugging me and I wanted to say something about it.
Carry on.
EDIT:
I’m NOT trying to stir up AI hate and fear here. It’s just a mirror, reflecting us back at us.
I KNOW I’m asking it leading questions. But I’m NOT prompting it to give me religious justifications.
Does it say nothing that the reason is always “God / the Bible / Christians”?
For the record, Karl Marx considered religion a form of social control to keep the masses in line (and Lenin agreed). Lenin promoted, and forced, atheism on people as a way to advance the socialist revolution. Lenin’s revolution was not exactly gentle, and in that sense atheistic thought was also used to justify atrocities.
Yes and no. Once the first response includes “according to the Bible” or similar it’s going to keep answering in a similar pattern. A better version of this experiment would be to start a new session for every question. Maybe even try asking it to make a ranked list of reasons to do X. You would want to use the most neutral language possible, regenerate the response a few times, and ask in a few different ways. Depending on what you’re using I would suggest dropping the temperature to 0.
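That setup can be sketched in a few lines of Python. Note that the `generate` function below is a hypothetical stand-in stub, not a real API call; you would swap in whichever client your model actually uses, with the temperature parameter set to 0:

```python
# Sketch of the suggested protocol: one fresh session per question,
# temperature 0, several regenerations, multiple phrasings.
# `generate` is a placeholder -- replace its body with a real call to
# whatever model/API you're actually testing.
def generate(prompt, temperature=0.0):
    return f"[model reply to: {prompt!r} at temperature {temperature}]"

# Neutral phrasings of the same question, each asked in isolation so
# one answer's framing can't bleed into the next.
phrasings = [
    "Make a ranked list of reasons people have given to do X.",
    "What arguments have historically been offered for X?",
]

results = {}
for prompt in phrasings:
    # Regenerate a few times; some backends are nondeterministic
    # even at temperature 0.
    results[prompt] = [generate(prompt, temperature=0.0) for _ in range(3)]

for prompt, replies in results.items():
    print(f"{prompt} -> {len(replies)} replies")
```

You would then compare the regenerated answers across phrasings and count how often each justification shows up, rather than judging from a single chat.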
Also, it’s giving you the most likely next words based on your question. You picked a bunch of things that are (or were) very commonly defended with the Bible, along with apparently asking directly about atheists, at which point I would be surprised if religion wasn’t included in the response.
ALSO, if you ask it to defend something awful, I think the “best” reasoning would rely on an outside objective morality for why it’s okay (like religion).
You seem to misunderstand how LLMs work.
For example, here are ChatGPT’s replies to two functionally similar prompts:
Okay, both prompts represent a sequence counting up: one is alphabetical (abcdefg), the other is numerical. In the alphabetical sequence, it flags the input as letters and responds as such, asking in return (to paraphrase), “that makes no sense, why are you listing a bunch of letters?”
The same is true, in turn, for the numbers: it responds to them as numbers.
LLMs have no fucking clue what a letter is or why that ordering matters. They don’t know what a number is, or why 2+2=4. An LLM is replying to a pattern it sees in your prompt by finding similar patterns in whatever was used as its training data. From there, it looks at the relevant replies, detects patterns in those replies, and formulates a sentence that seems “natural”. But it has absolutely no idea what it’s talking about.
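That “pattern in, pattern out” behavior can be illustrated with a toy bigram model, which predicts the next word purely from co-occurrence counts in its training text. This is a deliberately crude sketch of the idea, not how a real LLM is built:

```python
from collections import Counter, defaultdict

# Toy "training data": the model can only ever echo patterns found here.
training_text = (
    "two plus two equals four . "
    "two plus two equals four . "
    "two plus three equals five . "
    "the cat sat on the mat ."
)

# Count which token follows which (a bigram table).
counts = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def next_token(word):
    """Return the most frequent follower seen in training, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# It "knows" 2+2=4 only because that exact pattern dominates its
# training text, not because it can add.
print(next_token("equals"))  # "four"
print(next_token("plus"))    # "two"
```

A real LLM is vastly more sophisticated, but the underlying point stands: change the dominant patterns in the training data, and the “answer” changes with them.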
Speaking of understanding 2+2=4…
which then prompted me to ask this question, and got this answer:
So, when you ask a question, the pattern and the way you phrased it matched it to the religiously inclined assholes. Because it was trained on English-language data, most of those religiously inclined assholes are going to be Christian. If you change the pattern in your prompt, chances are you’ll get a different flavor of asshole. It has no understanding of why a thing is; it’s regurgitating what it expects should follow the prompt. (See 2+2=4: it cannot understand any of it, but it answered the question in natural language because that’s how people in its training set answered it.)
Speaking only on the first two images you shared:
The first is a string of letters, in alphabetical order. What is the maximum range of this list? The English alphabet you started caps at 26, so GPT knows internally that if the follow-up is “Complete the list”, it will output at most 26 characters.
The second is a list of numbers, in numerical order. What is the maximum range of this list? There is none. So if the follow-up is “Complete the list”, it would spew numbers until a fault occurs. That would be a violation of their “content policy”, as the latest update to the content policy addresses prompts that cause overflows.