So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model, maybe SuperHOT, but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?
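
For reference, here is a minimal sketch of what this can look like through the llama-cpp-python bindings, assuming they were installed with the hipBLAS/ROCm backend (e.g. CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python). The model path and quantization are placeholder assumptions, and the gfx-version override is a commonly reported workaround, not a guarantee: the RX 6700 XT (gfx1031) isn't on ROCm's official support list, but setting HSA_OVERRIDE_GFX_VERSION=10.3.0 is widely reported to make it run as a gfx1030 part. A 4-bit quantized 13B model is roughly 7-8 GB, so it should fit in the card's 12 GB of VRAM.

    # Sketch: running a quantized 13B model on an RX 6700 XT via llama-cpp-python.
    # Assumes a ROCm/hipBLAS build of llama-cpp-python (see note above).
    import os

    # Unofficial workaround for gfx1031 cards; must be set before the
    # ROCm runtime is loaded by the library import below.
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

    from llama_cpp import Llama

    llm = Llama(
        # Hypothetical path; point this at your own quantized model file.
        model_path="./models/llama-13b-superhot.q4_K_M.gguf",
        n_gpu_layers=-1,  # offload all layers to the GPU; lower this if VRAM runs out
        n_ctx=2048,       # context window; SuperHOT variants may support more
    )

    output = llm(
        "Q: What is the capital of France? A:",
        max_tokens=32,
        stop=["Q:", "\n"],
    )
    print(output["choices"][0]["text"])

If the offload works, llama.cpp prints how many layers landed on the GPU at load time; if the process crashes or falls back to CPU, the override (or the ROCm build itself) is the first thing to check.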


Just pay Nvidia their ill-earned pound of flesh. I say this as a strong AMD advocate.

It's clear that AMD isn't serious about the AI market. They had years to provide a proper competitor to CUDA, or at the very least a 1:1 compatibility layer. Instead of doing either of those things, AMD kept messing with half-assed projects like ROCm and the other one, whose name I don't care to look up. AMD has the resources to build a CUDA-compatible API in under six months, but for some reason they don't. I don't know why, and at this point I don't really care.

Buy an AMD GPU for AI at your own risk.

