Stable Diffusion on a 4090. I find DDIM sampling far superior to the default euler_a, and it only takes fractionally longer. I typically start with a 768x512 base image. If it is good, I img2img it to 1.5x the size at a denoising strength between 0.4 and 0.6. If the results are still good, I upscale to 2x with the SD Upscale script (which splits the image into tiles) at a denoising strength between 0.2 and 0.3.
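If you'd rather script that base → img2img chain instead of clicking through the webui, here's a rough sketch using Hugging Face's diffusers library. The model ID, prompt, and strength value are placeholders, not the exact setup above, and the tiled SD Upscale pass is a webui script, so it isn't reproduced here:

```python
# Rough sketch of the base -> img2img steps with diffusers.
# Assumes a CUDA GPU; model ID and prompt are placeholders.
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    DDIMScheduler,
)

model_id = "runwayml/stable-diffusion-v1-5"  # or a local checkpoint path

# Base pass: 768x512 with the DDIM sampler swapped in.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
base = pipe(
    "portrait photo of a woman",
    width=768, height=512,
    num_inference_steps=40,
).images[0]

# img2img pass: resize to 1.5x, then re-diffuse at strength 0.4-0.6.
# Reusing the same components avoids loading the model twice.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
refined = img2img(
    "portrait photo of a woman",
    image=base.resize((1152, 768)),  # 1.5x of 768x512
    strength=0.5,
    num_inference_steps=40,
).images[0]
refined.save("refined.png")
```

Resizing first and then re-diffusing at moderate strength is what lets the img2img pass invent new detail at the larger size instead of just interpolating pixels.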
At any point, if the image sucks, I can go back a step and adjust.
Prompts tend to matter. It was surprising that switching “woman” to “lesbian” for this prompt immediately made the characters objectively less attractive - like visibly older. Adding “ugly” to the negative prompt then corrected that problem while still maintaining subtle things like the haircut.
Is that a local installation then? Also, and I’m sorry for the really basic question, but do you do any additional training on the system? I ask because I have messed around with SD (or similar systems) and never got anything close to this with the same prompts used by others. I’d be curious about training the system better so it could get more realistic skin details; for example, nipples always look kind of “basic” even when there is a lot of face detail.
It is a local installation. No worries on the questions; that’s how we learn. I don’t currently do any custom training on my models - this is Deliberate V2 straight from civitai.com. If you are using the standard base model, that might be part of your problem.
Try adding “imperfect skin” to the positive prompt, make sure you are using DDIM instead of euler_a, and use 40+ steps.
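For anyone scripting it, those settings plus the “ugly” negative prompt mentioned upthread translate to something like this in diffusers (illustrative only; the model ID and prompt text are placeholders):

```python
# Illustrative diffusers call with the suggested settings: DDIM sampler,
# 40 steps, "imperfect skin" in the prompt, "ugly" in the negative prompt.
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="portrait photo of a woman, imperfect skin",
    negative_prompt="ugly",    # steers the sampler away from these traits
    num_inference_steps=40,    # 40+ steps as suggested
    width=768, height=512,
).images[0]
image.save("portrait.png")
```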