I have a laptop with a Ryzen 7 5700U and 16 GB of RAM, running Fedora 38 Linux.
I’m looking to run a local uncensored LLM, and I’d like to know the best model and software to run it with.
I’m currently running KoboldAI with Erebus 2.7B. It’s okay in terms of speed, but I’m wondering if there’s anything better out there. If possible, I would prefer something that is not web-UI based, to lower the overhead.
I’m not very well versed in all the lingo yet, so please keep it simple.
Thanks!

5 points
Take a look at GPT4All; it’s very user friendly.

4 points

I like KoboldCpp. It is easy to set up and runs well on modest resources.
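A minimal sketch of what getting started looks like, assuming you have cloned KoboldCpp and a Python environment; the model URL and flag names reflect recent versions and TheBloke’s naming convention on Hugging Face, so check `--help` and the repo page for your setup:

```shell
# Download a Q4_K_M quantization of a 13B model from Hugging Face
# (file name/URL is an example of TheBloke's naming convention):
wget https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF/resolve/main/mythomax-l2-13b.Q4_K_M.gguf

# Launch KoboldCpp with the model; it serves both a web UI and a
# local API on the given port (flags may differ between versions):
python koboldcpp.py --model mythomax-l2-13b.Q4_K_M.gguf \
    --threads 8 --contextsize 4096 --port 5001
```

You can also ignore the web UI entirely and talk to the local API endpoint from scripts, which keeps overhead low.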

With something like that, you should be able to fit a much larger and better model into your RAM if you use the quantized versions. Look for models in GGUF format on Hugging Face; Q4_K_M is a good compromise between size and quality.
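To see why quantization matters on 16 GB of RAM, here’s a back-of-the-envelope estimate. The ~4.85 bits-per-weight figure for Q4_K_M is an approximation (actual file sizes vary by architecture), but it shows that even a 13B model fits comfortably:

```python
# Rough RAM/file-size estimate for a quantized GGUF model.
# Q4_K_M averages roughly 4.85 bits per weight (approximation;
# full fp16 weights would be 16 bits per weight by comparison).

def gguf_size_gb(n_params_billion: float, bits_per_weight: float = 4.85) -> float:
    """Approximate file size (and RAM needed to load it) in GiB."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for n in (7, 13):
    print(f"{n}B @ Q4_K_M ≈ {gguf_size_gb(n):.1f} GiB")
```

By the same estimate, an unquantized fp16 13B model would need ~24 GiB, which is why quantization is what makes 13B-class models practical on a 16 GB laptop.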

Which model is best depends on your exact use case. I like MythoMax-L2-13B or Llama2-13B-Tiefighter for roleplay, and Mistral 7B (Dolphin 2.1 Mistral 7B) or Toppy-M for more factual tasks. All of those are uncensored.

3 points

Hope you had some success. Don’t hesitate to ask if you have further questions.

1 point

As an alternative, you could look at distributed/shared inference. There’s https://horde.koboldai.net/ (which you probably know about), and petals.dev.

I haven’t tested them myself, though…


LocalLLaMA

!localllama@sh.itjust.works


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

Community stats: 76 monthly active users · 219 posts · 830 comments