I also noticed that ChatGPT can’t actually correct itself. It just says “oh sorry, here’s something different” and gives you another crap answer. I noticed it with code specifically. If I remember correctly, it was better when it was brand new.
The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it is basically impossible to get ChatGPT to stop apologizing, as in, if you ask it to stop, it will apologize for apologizing.
I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.
From HN,
Me: Can you stop using the word apologize whenever you are talking to me?
ChatGPT: I apologize (I will try to stop using the word “apologize” in my responses to you) if the use of the word “apologize” in my responses has been inappropriate or has caused any discomfort. As an AI language model, I am programmed to try to be as helpful and accurate as possible, and I apologize (I will try to use a different word) when I make mistakes or do not provide the information or assistance that is requested.
I will make an effort to avoid using the word “apologize” in my responses to you in the future. Please do not hesitate to let me know if you have any further questions or need any additional information. I am here to help and will do my best to provide accurate and helpful responses to your questions.
Also here is a recent HN discussion: https://news.ycombinator.com/item?id=36949931
The only solution I can think of is using it via the API with Python and making a second call with the final reply, asking it to remove apologies from the text, but the token usage will increase.
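A minimal sketch of that two-call idea. The `call_model` function here is a stand-in for a real chat-completions API call (stubbed out so the flow is visible without an API key; the function name and the canned replies are made up for illustration):

```python
def call_model(messages):
    """Stub for a real chat-completions API call.
    In practice this would hit the API; here it returns canned text
    so the two-call flow can be demonstrated offline."""
    last = messages[-1]["content"]
    if last.startswith("Remove any apologies"):
        # Pretend the model dutifully strips the apology.
        return "Here is the corrected code."
    return "I apologize for the confusion. Here is the corrected code."


def ask_without_apologies(question):
    # First call: get the actual answer (apologies and all).
    draft = call_model([{"role": "user", "content": question}])
    # Second call: have the model rewrite its own reply minus the apologies.
    cleaned = call_model([{
        "role": "user",
        "content": "Remove any apologies from this text, change nothing else:\n\n" + draft,
    }])
    return cleaned


print(ask_without_apologies("Fix my function, please."))
```

Note the cost: the draft’s tokens get billed twice, once as output of the first call and again as input to the second.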
I do something similar when I need the model to keep the language of a text before performing a task with it. I send the model a chunk of text and ask it to respond with a single word indicating the language of the text, and then I include that in the next prompt, like “Your output must be in SPANISH”, or whatever.
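That two-step trick might look like this. Again, `call_model` is a stub standing in for the real API call, and the detection reply is canned purely so the example runs offline:

```python
def call_model(prompt):
    """Stub for a chat-completions call; a real version would hit the API."""
    if prompt.startswith("Reply with a single word"):
        return "SPANISH"  # pretend the model detected the language
    # Echo the constraint so we can see it made it into the second prompt.
    return "(model output, constrained by: " + prompt.splitlines()[0] + ")"


def task_in_source_language(text, task):
    # Step 1: ask for the language as a single word.
    language = call_model(
        "Reply with a single word naming the language of this text:\n" + text
    ).strip().upper()
    # Step 2: pin the output language in the actual task prompt.
    prompt = f"Your output must be in {language}.\n{task}\n\n{text}"
    return call_model(prompt)


print(task_in_source_language("Hola, ¿cómo estás?", "Summarize this text."))
```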
Did you dare to say it became dumb when it interacted with us?
How dare you? /s
Ahem Tay tweets
Like that Twitter bot that turned racist after talking to some people for a while.
It cannot read. It doesn’t see words or letters. It works with tokens, which words are converted into. It can’t count the number of letters in a word because it can’t see them. OpenAI has a tokenizer you can plug a prompt into to see how it’s broken up, but you’re asking a fish to fly.
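A toy illustration of the gap: code sees the letters directly, so counting is trivial, but the model only ever sees token IDs. The vocabulary and split below are invented for illustration (the real tokenizer is OpenAI’s tiktoken, whose actual splits differ):

```python
word = "mayonnaise"

# Python can see the letters, so counting the n's is trivial:
print(word.count("n"))  # 2

# The model never sees those letters. A toy "tokenizer" might split
# the word into subword chunks and hand over only their IDs:
fake_vocab = {"may": 101, "onna": 102, "ise": 103}
tokens = [fake_vocab[chunk] for chunk in ("may", "onna", "ise")]
print(tokens)  # no 'n' in sight, just opaque IDs
```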
I asked it how many “n” mayonnaise has and it came up with manaonnanaise
I feel like if these things ever become really self aware, they will be super fucking with us
Idk what I’m doing wrong, thankfully it always seems to listen and work fine for me lmao
Now it’s broken; I guess I don’t use it this way often enough. Interesting nonetheless!
Edit - it’s very picky about details; it matters whether I include an uppercase “S” or not. That’s amusing.
I wonder if the temperature settings adjustment would fix that or just make it even weirder.
Look at the first question in my first screenshot. It gets that question correct for “mayonnaise” lol
Alignment at its finest.