In my case, there are 95 packages that depend on zlib, so removing it is absolutely the last thing you want to do. Fortunately, though, GPT also suggested refreshing the GPG keys, which did solve the update problem I was having.
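For anyone who wants to sanity-check a suggestion like that themselves, here's roughly what that looks like; a sketch assuming an Arch-style setup with pacman (which is where the key-refresh fix applies; adjust for apt/dnf):

```sh
# See how much depends on zlib before even considering removal;
# the "Required By" field is the scary part
pacman -Qi zlib

# Full reverse-dependency tree, if pacman-contrib is installed
pactree -r zlib

# The fix that actually worked: refresh the signing keys instead of nuking zlib
sudo pacman-key --refresh-keys
```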
You gotta be careful with that psycho!
Not copy-pasting random commands you're not 100% sure about is basic terminal literacy.
Online forums can give bad advice, but this is just next level bad. GPT truly has no remorse.
It’s a language model; I still don’t understand why people expect it to always give correct answers. You asked for some code, it gave you some code. I don’t see what the problem is: it worked as it should, and it’s astonishing that current technology can do this.
I also don’t like the term “Artificial Intelligence”; we should call these things LLMs, or ML for Machine Learning.
It gives a lot of plainly wrong answers, including in fields where one would expect it to excel (basic physics, for instance).
LLMs are a specific implementation of ML, which is a field of AI. It’s all still AI.
Meh, we call people intelligent and they give wrong answers confidently too. It’s not AI in the traditional sense, but AI has now come to mean LLM for non-tech-literate users. Language evolves. We don’t need to fight it.
Well, LLMs base their answers on content scraped from the web and those same online forums.
If you blindly follow whatever it tells you, you deserve whatever happens to you and your computer.
Filed under: “LLMs are designed to make convincing sentences. Language models should not be used as knowledge models.”
I wish I got a dollar every time someone shared their surprise at something an LLM said that was factually incorrect. I wouldn’t need to work a day.
People expect a language model to be really good at other things besides language.
If you’re writing an email where you need to express a particular thought or a feeling, ask some LLM what would be a good way to say it. Even though the suggestions are pretty useful, they may still require some editing.
This use case and asking for information are completely different things. It can stylize some input perfectly fine. It just can’t be a source of accurate information. It is trained to generate text that sounds plausible.
There are already ways to get around that, even though they aren’t perfect. You can give it the source of truth and ask it to answer using only information found there. Even then, you should check the accuracy of its responses.
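Something like this, as a rough sketch against the OpenAI chat completions endpoint (the model name, the document, and the question are all placeholders I made up):

```sh
# Grounded prompting: hand the model the source of truth and constrain it to that.
# Everything in the request body below is an illustrative placeholder.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system",
       "content": "Answer using ONLY the document below. If the answer is not there, say so.\n\nDOCUMENT:\n<paste the source of truth here>"},
      {"role": "user", "content": "What does the --refresh-keys flag do?"}
    ]
  }'
```

This narrows the failure mode but doesn’t eliminate it, which is why you still spot-check the answer against the document.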
Oh, yeah, it has a habit of pretending to know things. For example, I work with a lot of proprietary software that has little public documentation, and when I ask GPT about it, it will absolutely pretend to know it and give nonsensical advice.
GPT is riding the highest peak of the Dunning-Kruger curve. It has no idea how little it really knows, so it just says whatever comes first. We’re still pretty far from having an AI capable of thinking before speaking.
Sounds like it’s already capable of replacing middle managers though ^(/s)