We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model with an average score above 80.
I’m afraid to even ask for the minimum specs on this thing, open source models have gotten so big lately
CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.). To run Qwen-72B-Chat in bf16/fp16, at least 144GB of GPU memory is required (e.g., 2xA100-80G or 5xV100-32G). To run it in int4, at least 48GB of GPU memory is required (e.g., 1xA100-80G or 2xV100-32G).
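For anyone curious what those two paths actually look like in code, here's a rough sketch using Hugging Face transformers (untested; the int4 route here assumes bitsandbytes, which is just one way to do it):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen-72B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# bf16: the weights alone need ~144GB, so device_map="auto" shards them
# across however many GPUs you have.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# int4 (~48GB): quantize on load via bitsandbytes instead.
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
    trust_remote_code=True,
)
```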
It’s derived from Qwen-72B, so same specs. The Q2 quant clocks in at only ~30GB.
I think I read somewhere that you’ll basically need 130 GB of RAM to load this model. You could probably get some used server hardware for less than $600 to run this.
Oh if only it were so simple lmao, you need ~130GB of VRAM, aka the graphics card RAM. So you would need about 9 consumer grade 16GB graphics cards and you’ll probably need Nvidia because of fucking CUDA so we’re talking about thousands of dollars. Probably approaching 10k
Ofc you can get cards with more VRAM per card, but not in the consumer segment so even more $$$$$$
I’m pretty sure you can load the model using RAM like another poster said. Here’s a used server under $600 that could theoretically run it: ebay.
Afaik you can substitute VRAM with RAM at the cost of speed. Not exactly sure how that speed loss correlates to the sheer size of these models, though. I have to imagine it would run insanely slow on a CPU.
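If you want to try the split approach, llama.cpp lets you offload only as many layers as fit in VRAM and keep the rest in system RAM. Minimal sketch with the llama-cpp-python bindings (untested; the filename is hypothetical):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="smaug-72b-v0.1.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=40,  # layers offloaded to VRAM; 0 = pure CPU, -1 = all
    n_ctx=4096,       # context window; larger needs more memory
)

out = llm("Q: How much VRAM does a 72B model need? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Fair warning: pure CPU on a model this size tends to be a low-single-digit tokens-per-second affair at best, so "insanely slow" is about right.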
Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need 130GB of VRAM on your GPUs.
So can I run it on my Radeon RX 5700? I overclocked it some and am running it as a 5700 XT, if that helps.
Every billion parameters needs about 2 GB of VRAM if using bfloat16 representation: 16 bits per parameter, 8 bits per byte -> 2 bytes per parameter.
1 billion parameters ~ 2 billion bytes ~ 2 GB.
From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
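Same arithmetic in a few lines of Python, for the weights only (KV cache and activations come on top of this):

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    # params * bits / 8 = bytes; / 1e9 = decimal GB
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4, 2.5):
    print(f"{bits:>4} bits/param -> ~{weights_gb(72, bits):.1f} GB")
# 16 -> 144.0, 8 -> 72.0, 4 -> 36.0, 2.5 -> 22.5
```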
https://huggingface.co/senseable/Smaug-72B-v0.1-gguf/tree/main
About 44GB and 50GB for the Q4 and Q5 quants. You’d need quite a bit extra on top of that to fully use the 32k context length.
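The "extra" is mostly KV cache, which grows linearly with context. Back-of-envelope sketch below, with assumed dimensions for a 72B-class model (80 layers, 8192 hidden; I haven't checked the actual config):

```python
def kv_cache_gb(n_layers=80, hidden=8192, context=32768, bytes_per_elem=2):
    # Two cached tensors per layer (K and V), each [context, hidden], fp16
    return 2 * n_layers * hidden * context * bytes_per_elem / 1e9

print(f"~{kv_cache_gb():.0f} GB at the full 32k context")  # ~86 GB
```

Models using grouped-query attention shrink this a lot, and some runtimes can quantize the cache too, but either way long context is not free.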
It’s been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. Just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it’s fine. We’ll see if that works out in practice, I guess.
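The core trick is simpler than it sounds: store each block of weights as low-bit integers plus one float scale per block. Toy version of the idea (plain absmax rounding, not the actual Q4_K or 2.5-bit schemes):

```python
import numpy as np

def quantize_q4(w: np.ndarray, block: int = 32):
    # One float scale per block of 32 weights; ints live in [-7, 7]
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Inference kernels do this block-by-block on the fly, so the matmul
    # itself still runs in a native float type
    return (q * scale).astype(np.float32)

w = np.random.randn(4, 32).astype(np.float32)
q, s = quantize_q4(w)
print("max abs error:", np.abs(dequantize(q, s) - w.reshape(-1, 32)).max())
```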
I’m more experienced with graphics than ML, but wouldn’t that cause a significant increase in computation time, since those aren’t native types for arithmetic? Maybe that’s not a big problem?
If you have a link for the paper I’d like to check it out.