What are the hardware requirements to run SDXL?

In particular, how much VRAM is required?

This is assuming A1111 and not using --lowvram or --medvram.
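For anyone unfamiliar with those flags: in A1111 they go on the `COMMANDLINE_ARGS` line of `webui-user.sh` (or `webui-user.bat` on Windows). A minimal sketch, assuming a default install:

```shell
# webui-user.sh -- A1111 reads these arguments on launch.
# --medvram trades some speed for lower VRAM use;
# --lowvram is the more aggressive (and slower) variant.
export COMMANDLINE_ARGS="--medvram"
```

The question above is specifically about running *without* either flag, i.e. leaving `COMMANDLINE_ARGS` empty.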

2 points

And I'm trying to make SDXL work on my 1660 Ti laptop, lol. ComfyUI runs it at about 1:30 min per pic; A1111 can't even load the VAE. However, yesterday I saw an update on Stability's Hugging Face page saying they changed SDXL 1.0 to the 0.9 VAE; it seems there was an issue with their provided 1.0 VAE.

1 point

I can run it on my 3080 10 GB card, but it's ridiculously slow. I HAVE to use --medvram or I get out-of-memory errors and NaN errors. And I mean ridiculously slow: loading the model takes a few minutes, and generating an image requires me to minimize the browser window or Stable Diffusion just stalls. Switching to the refiner isn't even an option because it takes so long to switch between models.

This is on a 5930K, 32 GB RAM, and a 3080 10G, trying to generate 1024x1024 images.

However, with ComfyUI it runs just fine, the PC doesn't struggle, and it generates the images in about 40 seconds at 50 steps base, 10 refiner.

2 points

I have a 6600 XT (so 8 GB of VRAM) and had no luck with A1111 or Vladmandic's port; they would crash. ComfyUI worked with no fiddling.

5 points

3070 8 GB VRAM, 16 GB RAM.

In Comfy, about 16-17 sec to do everything at 1024x1024 with 20 steps and 5 refiner steps.

A1111 is about the same, using a batch of 4 and then batch img2img for refining. Just more clicks without extensions, etc., since I'm getting the same it/s between Comfy and A1111 with --medvram.

3 points

I am using a 3070 8GB FTW and 32 GB DDR5, and I have yet to get SDXL to generate without an error. I assumed it was my hardware.

1 point

Nah, auto1111 seriously struggles with SDXL for me. But comfy manages to do it without issue.

1 point

I tried ComfyUI last night, but unless I am just missing something, I couldn't get past the workflow screen. Thanks for the tip, I will keep tinkering with things.

I tried InvokeAI but was having the same problem. I got the safetensors directly from Stability AI's Hugging Face page, so I have to be doing something wrong… three different UIs and I couldn't get any of them to work. I am losing my technical aptitude :)

2 points

It's hard to give precise figures, because there are always tricks to getting a little more or less, but from my (admittedly limited) testing SDXL is significantly more demanding, and 10+ GB of VRAM is probably going to be the minimum to run it. I don't remember exactly what I was doing, but I run an RTX A4500 card, and I managed to max out its 20 GB of VRAM with just one SDXL process, whereas I can normally run a LoRA training and 512x768 image generation at the same time.
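A quick back-of-envelope check is consistent with that: at fp16 the base UNet's weights alone take several GB before activations, text encoders, or the refiner are counted. A sketch, assuming the approximate parameter counts Stability published (the refiner UNet size here is an assumption):

```python
# Rough VRAM estimate for holding model weights at fp16 (2 bytes per parameter).
# Parameter counts are approximate public figures, not measured values.
BYTES_PER_PARAM_FP16 = 2

def weights_gb(params: float) -> float:
    """Gigabytes needed just to hold the weights in fp16."""
    return params * BYTES_PER_PARAM_FP16 / 2**30

base_unet_gb = weights_gb(2.6e9)     # SDXL base UNet, ~2.6B params
refiner_unet_gb = weights_gb(2.3e9)  # refiner UNet, roughly similar size (assumption)

print(f"base UNet weights:  {base_unet_gb:.1f} GB")
print(f"base + refiner:     {base_unet_gb + refiner_unet_gb:.1f} GB")
```

Weights alone land near 5 GB for the base UNet, so once you add VAE, text encoders, and activations at 1024x1024, a 10 GB floor sounds about right.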


Stable Diffusion

!stable_diffusion@lemmy.dbzer0.com
