List of icons/services suggested:
- Calibre
- Jitsi
- Kiwix
- Monero (Node)
- Nextcloud
- Pi-hole
- Ollama (should at least be able to run TinyLlama 1.1B)
- OpenMediaVault
- Syncthing
- VLC Media Player Media Server
8GB or 4GB of VRAM?
Yeah, you should get kobold.cpp's ROCm fork working if you can manage it; otherwise use their Vulkan build.
Llama 8B at a shorter context is probably a good fit for your machine: it can fit entirely on an 8GB GPU at a shorter context, or at least be partially offloaded if it's a 4GB one.
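To give a rough idea of what "shorter context / partial offload" means in practice, here's a minimal sketch using the llama-cpp-python bindings (koboldcpp sits on the same llama.cpp backend, so the knobs are equivalent). The model path, layer count, and context size below are placeholder assumptions you'd tune to your actual VRAM, not recommendations:

```python
# Minimal sketch: partially offloading a quantized Llama 8B GGUF to a small GPU.
# Assumes llama-cpp-python is installed with a GPU-enabled build (Vulkan/ROCm/CUDA)
# and that a quantized GGUF has already been downloaded; path and numbers are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,        # shorter context keeps the KV cache small
    n_gpu_layers=20,   # on a 4GB card, offload only some layers; -1 offloads all (8GB case)
)

out = llm("Q: What is Syncthing used for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

If generation is slow or you hit out-of-memory errors, lowering `n_gpu_layers` or `n_ctx` is the usual first adjustment.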
I wouldn't recommend Deepseek for your machine. It's a better fit for older CPUs: it's not as smart as Llama 8B, and it's bigger than Llama 8B, but it runs super fast on CPU because it's an MoE and only activates a fraction of its parameters per token.