7 points

Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We’re collaborating with NVIDIA to get this working – they showed a prototype last week.

Are all the Ollama-supported models mediocre at tool calling? Which ones would be better?
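For context on what "tool calling" actually involves, here is a rough sketch of a tool-call request against a local Ollama server. It assumes Ollama's /api/chat endpoint at the default localhost:11434 address; the turn_on_light tool and the llama3.1 model name are placeholders for illustration, not anything Home Assistant itself exposes. A model has to be trained or fine-tuned to emit the structured tool_calls reliably, which is what the quoted passage means by "not tuned for tool calling".

```python
# Minimal sketch of a tool-calling request to a local Ollama server.
# Assumptions: Ollama running on the default port, a tool-capable model
# pulled locally (e.g. llama3.1), and a hypothetical turn_on_light tool.
import json
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "turn_on_light",  # placeholder tool, not a real HA service name
        "description": "Turn on a light in a given area",
        "parameters": {
            "type": "object",
            "properties": {
                "area": {"type": "string", "description": "Room name, e.g. 'kitchen'"},
            },
            "required": ["area"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Turn on the kitchen light"}],
        "tools": tools,
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()

# A model tuned for tool calling answers with structured tool_calls
# rather than (or alongside) plain text.
message = resp.json()["message"]
for call in message.get("tool_calls", []):
    fn = call["function"]
    print(fn["name"], json.dumps(fn["arguments"]))
```

Presumably the Home Assistant side then maps such a call onto an actual service like light.turn_on; if a model can't produce well-formed calls, the voice assistant can chat but can't reliably act on devices.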

