I stumbled upon the Gemini page by accident, so I figured, let’s give it a try.
I asked him in Czech if he could also generate pictures. He said sure, and gave me examples of what to ask him.
So I asked him, again in Czech, to generate a cat drinking a beer at a party.
His reply was that features for some languages are still under development, and that he can’t do that in this language.
So I asked him in English.
I can’t create images for you yet, but I can still find images from the web.
OK, so I asked if he could find me the picture on the web, then.
I’m sorry, but I can’t provide images of a cat drinking beer. Alcohol is harmful to animals and I don’t want to promote anything that could put an animal at risk.
Great, now I have to argue with my search engine, which is giving me lessons on morality and deciding what is and isn’t acceptable. I told him to get bent, that this was the worst first impression I’ve ever had with any LLM, and that I’m never using that shit again. If this was integrated into Google Search (which I haven’t used for years, having stuck with Kagi), and now replaces Google Assistant…
Good, that’s what people get for sticking with Google. It brings me joy to see Google dig its own grave with such success.
For anyone else who felt left hanging by this 👆 person’s story:
Alcohol guzzling pussy is inappropriate content and against my guidelines.
https://imgur.com/a/beer-drinking-cat-iDHVaU6
It did not understand the assignment, but it did give me a few reasonable examples of the original prompt.
Are you in the Czech Republic, speaking Czech, and presumably not using a VPN?
Wow, that is pretty damning. I hope Google adds all this stuff back in as it replaces Assistant, but it’s Google, so I guess they won’t. I replaced Assistant with Gemini a while back, but I only use it for super basic stuff like setting timers, so I didn’t realise it was this bad.
They did the same shit with Google Now: rolled it into Assistant, but it was nowhere near as useful, IMO. Now we get yet another downgrade, swapping Assistant for Gemini.
As I like to say, there’s nobody Google hates more than the people who love and use their products.
That brief moment in time when we had dirt-cheap Nexus phones, Google Now, and Inbox was peak Google. Just five years later, it was all gone.
When Google asked if I wanted to try Gemini, I gave it a try, and the first time I asked it to navigate home, something I use Assistant for almost daily, it said it can’t access this feature but we can chat about navigating home instead - fuck that!
Even though I switched back to Assistant, it’s still getting dumber and losing functionality - yesterday I asked it to add something to my grocery list (in Keep) and it put it on the wrong list, told me the list I wanted doesn’t exist, then asked if I wanted to create the list, and then told me it can’t create it because it already exists.
I’ve talked to more logical toddlers!
Go go enshittification!
I don’t think it’s even enshittification (it probably costs more to run than Assistant); it’s just Google desperate to find a use for its new AI.
If you have a Pixel, just put GrapheneOS on it and you won’t ever have to deal with Google’s proprietary bullshit again.
I can also just use stock Android and assume things work. Sometimes y’all miss the forest for the trees.
Nope. I just do my banking stuff on my computer instead.
You might have some luck if you use Google Play services, but banking apps often check whether you’re on a custom ROM and bail if you are.
I wouldn’t make such a general statement. It really depends on your bank. There’s a very handy list at https://privsec.dev/posts/android/banking-applications-compatibility-with-grapheneos/
So an interesting thing about this is that the reasons Gemini sucks are… kind of entirely unrelated to LLM stuff. It’s just a terrible assistant.
And I get the overlap there; it’s probably hard to keep an LLM reined in enough to let it have access to a bunch of the stuff that Assistant did, maybe. But still, why Gemini is unable to take notes seems entirely unrelated to any AI crap; that’s probably the top thing a chatbot should be great at. In fact, for things like that, which are just about integrating a set of actions in an app, the LLM should just be the text parser. Assistant was already doing enough machine learning stuff to handle text commands, so nothing there is fundamentally different.
So yeah, I’m confused by how much Gemini sucks at things that have nothing to do with its chatbotty stuff, and if Google is going to start phasing out Assistant I sure hope they fix those parts at least. I use Assistant for note taking almost exclusively (because frankly, who cares about interacting with your phone using voice for anything else, barring perhaps a quick search). Gemini has one job and zero reasons why it can’t do it. And it still really can’t do it.
LLMs on their own are not a viable replacement for assistants, because you need a working assistant core to integrate with other services. An LLM layer on top of the assistant for better handling of natural-language prompts is what I imagined would happen. What Gemini is doing seems ridiculous, but I guess that’s Google developing multiple competing products again.
1. Convert voice to text.
2. Pre-parse the text against a library of known voice commands. If there’s a match, execute it, pass the confirmation along, and jump to 6.
3. If there’s no valid command, pass the text to the LLM.
4. Have the LLM heavily trained on the commands, emitting API output for them; if none apply, generate a normal response.
5. Check the response for API outputs, handle them appropriately and send the confirmation forward; otherwise pass the output on.
6. Convert to voice.
The LLM part obviously also needs all kinds of sanitization on both sides, like they do now, but exact commands should preempt the LLM entirely if you’re insisting on using one - something like the sketch below.
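A minimal sketch of that dispatch order, in Python. Everything in it is made up for illustration - the COMMANDS table, the "CALL:" prefix for API actions, and the stubbed speech-to-text, text-to-speech, and LLM functions are stand-ins, not any real Assistant or Gemini API - but it shows the point: exact commands short-circuit the LLM entirely.

```python
# Hypothetical table of exact voice commands (step 2). A real assistant would
# have many more, plus slot-filling, but the lookup is a plain string match.
COMMANDS = {
    "navigate home": lambda: "Starting navigation home.",
    "set a timer for five minutes": lambda: "Timer set for five minutes.",
}

def speech_to_text(audio: bytes) -> str:
    return audio.decode("utf-8")        # stand-in for a real STT engine

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")         # stand-in for a real TTS engine

def run_llm(prompt: str) -> str:
    # Stand-in for the LLM call; a real model would be trained/prompted to
    # emit "CALL:<service>:<args>" when an API action is appropriate.
    return f"Sorry, I don't have a command for: {prompt}"

def dispatch_api_call(call: str) -> str:
    # Hypothetical: map structured LLM output onto real service calls
    # (notes, shopping lists, navigation) and return a confirmation string.
    return f"Done: {call}"

def handle_utterance(audio: bytes) -> bytes:
    text = speech_to_text(audio).strip().lower()      # step 1

    # Step 2: exact, known commands never reach the LLM at all.
    for phrase, action in COMMANDS.items():
        if text.startswith(phrase):
            return text_to_speech(action())           # straight to step 6

    # Steps 3-5: fall back to the LLM, then check whether its reply is an
    # API action or just conversational text.
    reply = run_llm(text)
    if reply.startswith("CALL:"):
        reply = dispatch_api_call(reply[len("CALL:"):])
    return text_to_speech(reply)                      # step 6

print(handle_utterance(b"Navigate home"))                 # hits the command table, no LLM
print(handle_utterance(b"Add milk to my grocery list"))   # falls through to the LLM
```

The point of doing it that way is that step 2 is a deterministic string match, so the commands people already rely on keep working exactly as they did before the LLM was involved.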
It is a replacement for a specific portion of a very complicated ecosystem-wide integration involving a ton of interoperability sandwiched between the natural language bits. Why this is a new product and not an Assistant overhaul is anybody’s guess. Some blend of complicated technical issues and corporate politics, I bet.