Looks like it’s learned that adding “according to Quora” makes it look more authoritative. Maybe with a few more weeks of training it’ll figure out how to make fake citations of sources that are actually trustworthy.
Just wait until it starts taking stuff from 4chan, twitch, and twitter. Things are going to become so much more interesting.
Google signing a contract with 4chan for data training is actually so stupid I don’t think it’ll ever happen.
4chan is almost certainly blacklisted from basically everything AI, given the site's content and history of intentionally destroying chatbots/earlier 'AI's.
But at the same time they paid reddit millions to train on “authoritative” posts like that one from “fuckSmith” that suggested adding glue to pizza
As @Karyoplasma@discuss.tchncs.de pointed out, this is an actual answer on Quora so at least it got that right
For some reason I don’t have AI search on my account, but I still get the same answer:
What’s the point in having an AI run the search and present the found answer for you, when you just ran the search yourself and get the AI’s findings presented?
At this point AI helpers are just a layer that hides the details from the original search. It’s useless for this. AI is wonderful for lots of stuff, but this just isn’t it. I used to laugh when people used the Google search box to find Google so they could search in Google, but that is exactly what AI is doing for us now.
Plus the insane power consumption for such a marginally useful feature. Especially given that it’s on by default for everyone using Google (as I understand it)
It’s almost like the feature is not ready but they need to show off to their investors anyway. At the cost of user experience and the environment.
At least with ChatGPT you have to consciously go to their website and use it, rather than it being the first result of a fucking internet search.
More eyes on your website means fewer on other websites, making your adverts more valuable.
And when it doesn’t work, it doesn’t matter, because you run the advertising on the other websites too. Bonus: you can penalise rankings for websites that don’t use your advertising network.
Was having a related conversation with an employee this morning (I manage a software engineering organization). He asked an LLM how to separate the parts of a date in Excel, and got a pretty good explanation of how do it with the text to columns wizard, and also how to use a formula to get each part. He was happy because he felt it would have taken him much longer to figure it out himself.
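For anyone curious, the formula approach he was likely shown is just Excel’s built-in date functions (assuming the date sits in cell A1; adjust the reference to match your sheet):

```
=YEAR(A1)    returns the year component
=MONTH(A1)   returns the month number (1–12)
=DAY(A1)     returns the day of the month
```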
I was saying I thought that was a good use of an LLM - it’s going to give a tailored answer - but my worry is that people will do less scrubbing of an answer coming from an AI than one they saw on a forum. I said we should think of it like a tailored Google search.
For comparison, I googled “Excel formula separate parts of a date” and one of the top results was a forum discussion that had the exact solutions the LLM gave, using the same examples. On the one hand, to get it from the forum you had to wade through all the wrong answers and discussions. On the other hand, that discussion puts the answer in the context of a bunch of others that are off the mark, and I think that makes people less likely to assume it’s correct.
In any case, it’s still just synthesizing from or regurgitating training data.
So many fruits in the berrum family, can’t believe they even had to google that question…
“If it’s on the internet it must be true” implemented in a billion dollar project.
Not sure what would frighten me more: the fact that this is training data, or that it was hallucinated
Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information…well, seems like the blame falls exactly at the feet of the AI.
Should an LLM try to distinguish satire? Half of lemmy users can’t even do that
Wow they really did it.
They put the um in the coconut and shake it all up
There goes my making shit up job.
Now LLMs are even taking the jobs of professional trolls! What’s gonna be next? The scambots losing their jobs to LLMs?!