Heyho, I’m currently on an RTX 3070 but want to upgrade to an RX 7900 XT.

I see that AMD installers are available, but is it all smooth sailing? How do AMD cards compare to NVidia in terms of performance?

I’d mainly use oobabooga but would also love to try some other backends.

Anyone here with one of the newer AMD cards that could talk about their experience?

EDIT: To clear things up a little bit. I am on Linux, and I’d say I am quite experienced with it. I know how to handle a card swap and I know where to get my drivers from. I know about the gaming performance difference between NVidia and AMD. Those are the main reasons I want to switch to AMD. Now I just want to hear from someone who ALSO has Linux + AMD what their experience with Oobabooga and Automatic1111 is when using ROCm, for example.
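
For reference, the kind of post-swap sanity check I have in mind, assuming a ROCm build of PyTorch is installed (ROCm exposes AMD GPUs through the regular torch.cuda API, so the usual calls apply; the device name in the comment is illustrative):

```python
# Quick sanity check for a ROCm PyTorch build (assumes the ROCm wheel of
# torch is installed; ROCm reuses the torch.cuda API for AMD GPUs).
import torch

print(torch.cuda.is_available())      # True if the AMD GPU is visible to ROCm
print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 7900 XT" (illustrative)
print(torch.version.hip)              # HIP version string on ROCm builds, None on CUDA
```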

That’s outside the scope of this post and not its goal.

I don’t want to start troubleshooting my NVidia Stable Diffusion setup in an LLM post about AMD :D Thanks for trying to help, but this isn’t the right place to do that.

Fair enough, but if your baseline for comparison is wrong, then you can’t make a good assessment of the capabilities of different GPUs. And it’s possible that you don’t actually need a new GPU or more VRAM anyway, if your goal is to generate 1024x1024 images in Stable Diffusion and run a 13B LLM, both of which I can do with 8 GB of VRAM.
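
To be concrete, the usual VRAM-saving knobs in Hugging Face diffusers look roughly like this (a sketch; the model ID and prompt are placeholders, not my exact settings):

```python
# Sketch of common VRAM-saving options in diffusers (model ID is illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model ID
    torch_dtype=torch.float16,          # half precision halves weight memory
)
pipe.enable_attention_slicing()         # compute attention in chunks to lower peak VRAM
pipe.enable_model_cpu_offload()         # park idle submodules in system RAM (needs accelerate)

image = pipe("a lighthouse at dusk", height=1024, width=1024).images[0]
image.save("out.png")
```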

This is correct, yes. But I want a new GPU because I want to get away from NVidia…

I CAN use 13B models and I can create 1024x1024 images, but not without issues: I have to make sure nothing else uses VRAM, and I still run out of memory quite often.

I want to make it more stable, and open the door to using bigger models or making bigger images.
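
On the bigger-models side, one common way to fit a 13B model into less VRAM is 4-bit quantization via transformers + bitsandbytes. A rough sketch (the model name is a placeholder, and bitsandbytes has historically been CUDA-first, so ROCm support would need checking):

```python
# Sketch: loading a 13B model in 4-bit to cut VRAM use. Model name is a
# placeholder; bitsandbytes has historically targeted CUDA, so verify ROCm
# support before relying on this path on an AMD card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-13b-hf"  # illustrative only
quant = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",                  # spill layers to CPU RAM if VRAM runs out
)

inputs = tok("Hello,", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```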

Yes, that makes more sense. I was initially concerned that you were looking to buy a new GPU with more VRAM solely because you couldn’t do something that you should already be able to do, that this would be an unnecessary spend of money and/or wouldn’t actually fix the problem, and that you’d be somewhat mad at yourself if you found out afterwards that “oh, I just needed to change this setting”.
