I have experience running servers, but I'd like to know if this is feasible. I just need a private LLM running that's roughly comparable to GPT-3.5.

10 points

It’s doable. Stick to the 7b models and it should work for the most part, but don’t expect anything remotely approaching what might be called reasonable performance. It’s going to be slow. But it can work.

To get a somewhat usable experience you kinda need an Nvidia graphics card or an AI accelerator.

4 points

Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or AUTOMATIC1111 for Stable Diffusion. In terms of usability it's definitely much closer to Nvidia than it is to AMD.
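For reference, llama.cpp is easy to try through its Python bindings. Here's a minimal sketch, assuming you've installed llama-cpp-python with a build that supports your GPU's backend (SYCL or Vulkan in Arc's case) and have already downloaded a 7B GGUF model; the model path below is a placeholder:

```python
# Minimal llama.cpp sketch via the llama-cpp-python bindings.
# Assumes the package was built against a backend your GPU supports
# (e.g. SYCL or Vulkan for Intel Arc); the model path is a placeholder
# for whatever 7B GGUF file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder
    n_gpu_layers=-1,  # offload all layers to the GPU; set 0 for CPU-only
    n_ctx=4096,       # context window
)

out = llm("Q: What is self-hosting? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```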

-6 points

Would you suggest the K9 instead of the K8?

-28 points

I need it to make academic works pass anti-AI detection systems. What do you recommend for that? It's for business, so I need reasonably good performance, but nothing extravagant…

I believe commercial LLMs apply some kind of watermark when you use AI for grammar and general fixes, so I just need a private LLM to make these works undetectable.

12 points

> I believe commercial LLMs apply some kind of watermark when you use AI for grammar and general fixes, so I just need a private LLM to make these works undetectable.

That’s not how it works, sorry.

-11 points

I was talking about this with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document, no word changes at all (which you could easily add manually), and the anti-AI system still flagged it as 99% AI-made. I don't know how to explain that. Maybe the text was AI-generated to begin with, or there's a watermark somewhere, a pattern or something.

Edit: is your point that there's no way to fool the anti-AI systems with a private LLM?

10 points

Maybe just write the academic works yourself, then they should pass.

-11 points

My friend used to employ several people for that, but they started using AI to do less work, so he decided to do it himself with AI instead of paying someone else to do the same.

1 point

Something with a GPU that’s good for LLMs would be best.

8 points

They’re Ryzen processors with “AI” accelerators, so an LLM can definitely run on one of those. Other options are available, like lower-powered ARM chipsets (RK3588-based boards) with accelerators that might have half the performance but are far cheaper to run; that should be enough for a basic LLM.

3 points

I don’t know of any project that already supports that AI processor. You’d still be using the CPU and GPU at the moment.

-8 points

The K8 is Ryzen, the K9 Intel. Money isn’t a problem; it’s not an expense, it’s an investment, and I need it for business. Which of these two models would you recommend for a reasonably good LLM and Stable Diffusion?

I’m looking for the most cost-effective solution.

5 points

Look into Ollama. It shouldn’t be an issue if you stick to 7B parameter models.
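Once you've pulled a model from the CLI (e.g. `ollama pull mistral`), Ollama serves a REST API on localhost. A minimal sketch of calling it from Python, assuming the default port and a 7B model you've already pulled:

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes `ollama serve` is running on the default port (11434) and that
# a 7B model (here "mistral") has already been pulled with `ollama pull`.
import json
import urllib.request

payload = json.dumps({
    "model": "mistral",                # any 7B model you've pulled
    "prompt": "Why self-host an LLM?",
    "stream": False,                   # one JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```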

-5 points

Yeah, I did see something related to what you mentioned and I was quite interested. What about quantized models?

3 points

Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14B parameter model down to 4-bit integer quantization, it’ll still generally outperform a 16-bit floating-point 7B parameter equivalent.
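The memory math is why: weight footprint is roughly parameter count times bytes per parameter. A back-of-the-envelope sketch (weights only; the KV cache, activations, and quantization metadata add overhead on top, so treat these as lower bounds):

```python
# Back-of-the-envelope weight-memory estimate: parameters x bytes each.
# Ignores KV cache, activations, and quantization metadata, so real
# usage will be somewhat higher than these figures.
def weight_gib(params_billions: float, bits_per_param: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30  # convert bytes to GiB

print(f"7B  @ fp16 : {weight_gib(7, 16):.1f} GiB")  # ~13.0 GiB
print(f"14B @ 4-bit: {weight_gib(14, 4):.1f} GiB")  # ~6.5 GiB
print(f"7B  @ 4-bit: {weight_gib(7, 4):.1f} GiB")   # ~3.3 GiB
```

So the 4-bit 14B model needs about half the memory of the fp16 7B one while carrying twice the parameters.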

-7 points

Interesting information, mate. I’m reading up on the subject. Thanks for the help 👍👍

2 points

I don’t have any experience with them, honestly, so I can’t help you there.

-5 points

Appreciate you 👍👍

