Deckweiss
Deckweiss@lemmy.world
They support AMD as well.
https://ollama.com/blog/amd-preview
Also check out this thread:
https://github.com/ollama/ollama/issues/1590
It seems like you can run llama.cpp directly on Intel Arc through Vulkan, but there are still some hurdles for ollama.
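For anyone who wants to try the llama.cpp route, a rough sketch of a Vulkan build looks something like this. Treat the flag name and binary name as assumptions from memory (older versions used `LLAMA_VULKAN` and a `main` binary), and the model path is just a placeholder - check the repo's build docs for your version:

```sh
# Build llama.cpp with the Vulkan backend (requires Vulkan SDK/drivers installed).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run a GGUF model with layers offloaded to the GPU (-ngl = number of GPU layers).
./build/bin/llama-cli -m ./models/your-model.gguf -ngl 99 -p "Hello"
```

If the Arc GPU shows up as a Vulkan device (e.g. via `vulkaninfo`), llama.cpp should pick it up; ollama support is where the hurdles in the linked issue come in.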
That means their metrics suck.
Because I definitely gain a lot as a programmer, even though it doesn’t necessarily translate into measurable profit for my company.
I spend less of my brain on grindy boring shit and more on crafting creative solutions to interesting problems, which in turn makes me quite happy - a HUGE benefit.