What are the hardware requirements to run SDXL?
In particular, how much VRAM is required?
This assumes A1111 and not using `--lowvram` or `--medvram`.
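For context, those flags are passed to A1111 through its launch script. A minimal sketch (the flag names are real A1111 options; the file shown is the standard `webui-user.sh` used on Linux, assumed here as the example):

```shell
# webui-user.sh — A1111 launch options for low-VRAM cards.
# --medvram keeps only parts of the model on the GPU at a time (slower, less VRAM).
# --lowvram is more aggressive still, for cards with very little VRAM.
export COMMANDLINE_ARGS="--medvram"
# or, on very constrained cards:
# export COMMANDLINE_ARGS="--lowvram"
```

On Windows the same string goes into `set COMMANDLINE_ARGS=` in `webui-user.bat`.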
I don’t have A1111, but in ComfyUI, using a shared workflow that does base and then refiner, SDXL 0.9 was using 12GB of VRAM and 22GB of RAM in Ubuntu for me. Doing images of ~1024x1024. GPU: AMD RX 6800.
Also have a 6800 XT, 32GB RAM. SDXL 1.0 runs with A1111, but I can only generate images using `--medvram`. This is on Windows, admittedly.
3070 8GB VRAM, 16GB RAM.
In Comfy: about 16-17 sec to do everything at 1024x1024 with 20 steps and 5 refiner steps.
A1111: about the same, using a batch of 4 and then batch img2img for refining. Just more clicks without extensions, etc., as I get the same it/s between Comfy and A1111 with `--medvram`.
I am using a 3070 8GB FTW and 32GB DDR5… I have yet to get SDXL to generate without an error. I assumed it was my hardware.
Nah, auto1111 seriously struggles with SDXL for me. But Comfy manages to do it without issue.
I tried ComfyUI last night, but, unless I am just missing something, I couldn’t get past the workflow screen. Thanks for the tip, I will keep tinkering with things.
I tried InvokeAI, but was having the same problem. I got the safetensors file directly from Stability AI’s Hugging Face page, so I have to be doing something wrong… 3 different UIs and I couldn’t get any to work. I am losing my technical aptitude :)
And I’m trying to make SDXL work on my 1660 Ti laptop, lol. ComfyUI runs it at about 1:30 min per pic; A1111 can’t even load the VAE. However, yesterday I saw an update on the SD Hugging Face page saying they changed SDXL 1.0 to the 0.9 VAE; it seems there was an issue with their provided 1.0 VAE.
It’s hard to give precise figures, because there are always tricks to getting a little more or less, but from my (admittedly limited) testing SDXL is significantly more demanding, and 10+GB of VRAM is probably going to be the minimum to run it. I don’t remember exactly what I was doing, but I run on an RTX A4500 card, and I managed to max out the 20GB of VRAM with just one SDXL process, where I can normally run a LoRA training and 512x768-size images at the same time.
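A rough back-of-envelope check makes the 10+GB figure plausible. This sketch uses approximate parameter counts (my assumption: base UNet ~2.6B, the two text encoders ~0.8B combined, VAE ~0.08B) at fp16 precision, counting weights only:

```python
# Back-of-envelope VRAM estimate for SDXL at fp16, weights only.
# Component parameter counts below are approximations, not official figures.
BYTES_PER_PARAM_FP16 = 2

components = {
    "unet": 2.6e9,           # assumed ~2.6B params for the base UNet
    "text_encoders": 0.8e9,  # assumed ~0.8B combined for both encoders
    "vae": 0.08e9,           # assumed ~80M params
}

weights_gb = sum(components.values()) * BYTES_PER_PARAM_FP16 / 1e9
print(f"weights alone: ~{weights_gb:.1f} GB")
```

That lands around 7GB before any activations, attention buffers, or batch overhead, so exceeding 8GB cards and needing 10+GB in practice lines up with the observations in this thread.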
I have a 6600 XT (so 8GB of VRAM) and had no luck with A1111 or Vladmandic’s fork; they would crash. ComfyUI worked with no fiddling.