29 points

Great, but it's restrictive in only letting you use OpenAI and Google. I'm already hosting oobabooga text generation; let me use that.

19 points

I believe that's because those two APIs support function calling; open source support is still coming along.
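For anyone unfamiliar, this is roughly what "function calling" means here, sketched with the openai Python client; the tool name and schema below are invented for illustration:

```python
# Minimal sketch of OpenAI-style function calling (the "tools" API).
# Assumes the openai Python package v1.x; the tool name and schema are made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Turn on the living room lights"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "light_turn_on",  # hypothetical tool exposed to the model
            "description": "Turn on a light in the house",
            "parameters": {
                "type": "object",
                "properties": {"area": {"type": "string"}},
                "required": ["area"],
            },
        },
    }],
)

# If the model decides to call the tool, the call shows up here instead of plain text.
print(response.choices[0].message.tool_calls)
```

An assistant integration needs the model to emit structured calls like this reliably, which is why only those two providers made the cut for now.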

9 points

Ah, that makes sense. That's when I'd start using it myself: self-hosted models and audio.

3 points

Mistral Instruct v0.3 added function calling, but I don't know if its implementation is the same/compatible. Also, it's fairly new, so hopefully we'll get there soon. :)

2 points

I saw a few others, but the ones I looked at were basically instruct layers where you'd need to add your own parser. I didn't find anything (in my 3 minutes of searching) that offers an OpenAI chat completions endpoint, which is probably the main blocker.
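For context, an OpenAI-compatible endpoint matters because it lets the same client code be re-pointed at a local server; this is just a sketch, and the base_url, API key, and model name are made up for illustration:

```python
# Sketch of what an OpenAI-compatible local endpoint buys you: the same client,
# just re-pointed. The base_url and model name are assumptions about a local
# server exposing a /v1 chat completions route.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local server
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="mistral",  # whatever model the local server actually serves
    messages=[{"role": "user", "content": "Is the back door locked?"}],
)
print(response.choices[0].message.content)
```

Without that compatibility layer (plus tool-call parsing), every self-hosted backend needs its own custom glue.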

10 points

Okay, but when can we use the weather forecast on our dashboards? The functionality was retired with no replacement.

8 points

The free data source was cut off; there are several replacements of varying quality depending on region. The met.no one is good for me.
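If you want to see what that data looks like, here's a rough sketch of pulling a forecast straight from MET Norway's public locationforecast API (the data source behind the met.no integration). The coordinates and User-Agent string are placeholders; the API does require an identifying User-Agent.

```python
# Rough sketch: fetch a forecast from MET Norway's public API.
import requests

resp = requests.get(
    "https://api.met.no/weatherapi/locationforecast/2.0/compact",
    params={"lat": 52.52, "lon": 13.40},  # placeholder coordinates
    headers={"User-Agent": "my-dashboard/0.1 contact@example.com"},
    timeout=10,
)
resp.raise_for_status()

# The first timeseries entry is the "now-ish" conditions; later entries are the forecast.
first = resp.json()["properties"]["timeseries"][0]
print(first["time"], first["data"]["instant"]["details"]["air_temperature"])
```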

1 point

It's still working for me, though I remember I needed to change the entity ID and an automation a while back.
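If anyone wants to sanity-check their own setup after an entity ID change, a quick sketch of inspecting the weather entity over the Home Assistant REST API; the URL, token, and entity ID (weather.forecast_home is the met.no default name) are assumptions about your instance:

```python
# Quick sketch: read a weather entity's current state via the HA REST API.
import requests

resp = requests.get(
    "http://homeassistant.local:8123/api/states/weather.forecast_home",
    headers={"Authorization": "Bearer YOUR_LONG_LIVED_ACCESS_TOKEN"},
    timeout=10,
)
resp.raise_for_status()

data = resp.json()
print(data["state"])               # e.g. "partlycloudy"
print(data["attributes"].keys())   # temperature, humidity, etc.
```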

1 point

What are you referring to when you say "when can we use the weather forecast on our dashboards?"

I've probably got the simplest, most out-of-the-box dashboard setup you can imagine, and I've got forecast data showing with automations that run against it. What am I missing?

1 point

Ok. Now it's definitely time to migrate my instance to something more powerful than my Raspberry Pi.

-1 points

Oh cool, implementing mediocre algorithms. What could possibly go wrong?

7 points

Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open-source LLMs are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We're collaborating with NVIDIA to get this working; they showed a prototype last week.
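For anyone curious, plain chat against a local Ollama server already works today; this is just a rough sketch (host, port, and model name are assumptions about your setup), and note there is no tools parameter here, which is the missing piece:

```python
# Minimal sketch: plain chat with a local Ollama server, no tool calling.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # whatever model you've pulled locally
        "messages": [{"role": "user", "content": "What's a good bedtime for the kids?"}],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```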

Are all Ollama-supported algos mediocre? Which ones would be better?

