cross-posted from: https://zerobytes.monster/post/1072393
The original post: /r/nottheonion by /u/The_Ethics_Officer on 2024-05-25 00:48:15.
Friend of mine was helping me set up a doorbell camera today and although it was wireless, it had the connections for a wired camera as well. We weren’t sure which colored wires go where so we both looked it up on our phones. I was using startpage and he used google. The AI generated google answer was the complete opposite of the first couple of results from startpage (google without the AI). We could’ve risked a short.
The colors of the wires in electrical systems and such are only for esthetics. Arrange them the way you think is the prettiest. If you’re lucky you’ll also get fancy sparkles!
Honestly if you're in the UK that's basically true. We do have standards, but for some reason there are different ones for appliances and structures, and they also keep changing, and not everyone updates their manufacturing when they do. In college we were told to just look it up because you can't memorize them all, and some of them use very similar colours but in a different order.
Technically yes but if you need to re-terminate the end of a cable in the future without seeing the other end of it, it’s nice to be able to assume it was wired the standard way.
If I found an ethernet cable wired in the wrong sequence, I would re-terminate both ends immediately, because otherwise it will be an annoyance in the future.
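To illustrate why the assumption matters: the two TIA-568 pinouts (T568A and T568B) use the same eight colours in a different order, so a quick check against both tells you whether a cable was terminated to a standard at all. A rough sketch (the `identify` helper is made up for illustration; the colour sequences themselves are the real standards):

```python
# T568A and T568B pin orders, pin 1 through 8, per TIA-568.
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def identify(pinout):
    """Return which standard a pin 1-8 colour sequence matches, if any."""
    if pinout == T568A:
        return "T568A"
    if pinout == T568B:
        return "T568B"
    return "non-standard -- re-terminate it"

print(identify(T568B))  # -> T568B
```

Both ends matching the *same* standard is what makes blind re-termination safe later.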
Oh, it’s worse than that.
Google’s “AI” results feed you things from 10-year-old Reddit posts that are subtly (but sometimes not so subtly) bullshit.
Whatever they’re using to curate training data is evidently pretty awful at detecting shitposts.
Most of the curation and fine-tuning is done in low-income African countries, so this is hardly surprising. They're cheap labour, but you can't expect them to reliably detect sarcasm or notice mistakes in specialized fields. They basically give a thumbs up whenever the AI sounds convincing. Of course that includes instances where it's confidently wrong, and that appears to be most of the time with this model.
It’s not a training data issue; look up Retrieval Augmented Generation. It’s basically fetching stuff off the web at answer time and taking it as gospel.
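Roughly what that looks like, as a toy sketch (everything here is made up for illustration, not Google's actual pipeline): the retriever fetches whatever page scores best for the query, and the "model" stitches an answer out of it, shitpost or not.

```python
# Toy RAG loop: the answer is only as good as whatever document the
# retriever happens to pull. Hypothetical names throughout.

def retrieve(query, corpus):
    # Naive keyword-overlap "search" as a stand-in for a real web index.
    words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(words & set(doc.lower().split())))

def generate(query, context):
    # Stand-in for the LLM: it just parrots the retrieved context.
    return f"Based on what I found: {context}"

corpus = [
    "Wire colors are purely aesthetic, arrange them however looks prettiest.",  # a shitpost
    "Doorbell transformers are typically low-voltage AC.",
]
answer = generate("which doorbell wire colors go where",
                  retrieve("doorbell wire colors", corpus))
print(answer)
```

If the top-ranked page is a joke, the joke comes out the other end with a confident tone attached; no amount of training-data curation fixes that.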
c/literallytheonion
Even though GPT is far from perfect, this does show how far behind Google still is on AI.
Edit: GPT-4o's answer to exactly the same question, which also explains why Google failed:
“The Onion” humorously refers to itself as “America’s Finest News Source.” It’s a satirical news organization known for its parody articles on international, national, and local news, mocking traditional news media and public figures. Despite its comedic nature, it has gained a reputation for its sharp wit and clever commentary on contemporary issues.
There's so many fake screenshots of this going around, I'm not believing another one.