3 points

llama.cpp quantizes the heck out of language models, which lets consumer CPUs run them. My laptop can run most 7B or 13B LLMs at 4-bit quantization (and people are pushing quantization even further, to 2 or even 1.5 bits!)
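To make the idea concrete, here's a minimal sketch of block-wise 4-bit quantization, the basic trick behind llama.cpp's Q4 formats (heavily simplified: the real formats such as Q4_K_M pack the nibbles and add super-block scales, so treat this as an illustration, not llama.cpp's actual code):

```python
def quantize_4bit(weights, block_size=32):
    """Map floats to 4-bit integers (0..15) with one scale/offset per block."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        lo, hi = min(block), max(block)
        scale = (hi - lo) / 15 or 1.0          # 4 bits -> 16 levels
        q = [round((w - lo) / scale) for w in block]
        blocks.append((lo, scale, q))
    return blocks

def dequantize_4bit(blocks):
    """Reconstruct approximate floats from the quantized blocks."""
    out = []
    for lo, scale, q in blocks:
        out.extend(lo + scale * v for v in q)
    return out

weights = [0.1 * i - 1.6 for i in range(64)]   # toy weight tensor
restored = dequantize_4bit(quantize_4bit(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# per-weight error stays within half a quantization step of its block
```

Each weight now costs 4 bits plus a small per-block overhead instead of 16, which is where the 4x shrink over fp16 comes from.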

The same will happen with Stable Diffusion. Most SD models still ship at fp16 and will soon be going lower. I expect we'll all be running SDXL or larger models on our laptop CPUs without breaking a sweat at the 4-bit level.

1 point

What I dislike about lower quantization is the quality degradation. In my limited experience, 7B models are dumb (I've only tested Q4_K_M GGUF quants) and need to be given proper context before a constructive conversation can move forward (whether chat or instruct).

If this issue can be circumvented in lower quantization, I’m all in.

In the context of SD, going below fp16 would only make things faster at the cost of quality, and I personally like to go in depth with my prompts. For simpler prompts, sure, even Lightning and Turbo are good in that regard.

1 point

You can’t shrink a model to 1/8 of its size and expect it to run at the same quality. But quantization lets me move from a cloud GPU to my laptop’s crappy CPU/iGPU, so I’m OK with that tradeoff.
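The back-of-envelope numbers make the tradeoff clear (weights only; this ignores KV cache and activation memory, so real usage is somewhat higher):

```python
# Approximate weight storage for a 7B-parameter model at various bit widths.
PARAMS = 7_000_000_000

def size_gib(bits_per_weight):
    return PARAMS * bits_per_weight / 8 / 2**30

fp16 = size_gib(16)   # ~13.0 GiB: out of reach for most laptop RAM
q4   = size_gib(4)    # ~3.3 GiB: fits on an ordinary laptop
q2   = size_gib(2)    # ~1.6 GiB: the "1/8 the size" case vs. fp16
```

Going from fp16 to 2-bit is exactly the 8x shrink mentioned above; 4-bit is a 4x shrink and is the sweet spot most people run today.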
