Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by Hugging Face. It is the first open-source model to achieve an average score of more than 80.

45 points

I’m afraid to even ask for the minimum specs on this thing, open source models have gotten so big lately

49 points

Every billion parameters needs about 2 GB of VRAM if you're using the bfloat16 representation: 16 bits per parameter, 8 bits per byte, so 2 bytes per parameter.

1 billion parameters ≈ 2 billion bytes ≈ 2 GB.

From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
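
For anyone who wants to sanity-check that, here's the arithmetic as a quick Python snippet (the parameter count comes straight from the model name, everything else is just the reasoning above):

```python
# Rough VRAM estimate for the weights alone, at 2 bytes per parameter (bfloat16).
params = 72e9               # 72B parameters, per the model name
bytes_per_param = 2         # bfloat16 = 16 bits = 2 bytes
print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~144 GB
```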

47 points

Ok but will this run on my TI-83? It’s a + model.

17 points

Only if it’s silver.

3 points

No. But put this clustering software I wrote in TI-BASIC on 40 million of them? Still no.

2 points

Ooooo gotta upgrade to the 86 my dude

11 points

It’s been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. I just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it’s fine. We’ll see if that works out in practice, I guess.
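
As a rough sketch, here's that same weights-only arithmetic at a few bit widths (the 2.5-bit figure is the one from the paper mentioned above, the rest are common quantization levels):

```python
# Weight memory for a 72B-parameter model at various bits per parameter.
params = 72e9
for bits in (16, 8, 5, 4, 2.5):
    print(f"{bits:>4} bits/param -> ~{params * bits / 8 / 1e9:.0f} GB")
# 16 -> ~144 GB, 8 -> ~72 GB, 5 -> ~45 GB, 4 -> ~36 GB, 2.5 -> ~22 GB
```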

2 points

I’m more experienced with graphics than ML, but wouldn’t that cause a significant increase in computation time, since those aren’t native types for arithmetic? Maybe that’s not a big problem?

If you have a link for the paper I’d like to check it out.

1 point

Though with quantisation you can get it down to around 30 GB of VRAM or less.

1 point

Llama 2 70B with 8-bit quantization takes around 80 GB of VRAM, if I remember correctly; I tested it a while ago. (That tracks: 70B parameters at one byte each is ~70 GB for the weights alone, plus overhead.)

1 point

Any idea what the Q8 requirements would be? Or Q4 or Q5?

2 points

https://huggingface.co/senseable/Smaug-72B-v0.1-gguf/tree/main

About 44 GB and 50 GB for Q4 and Q5, respectively. You’d need quite a bit extra on top of that to make full use of the 32k context length.
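
The extra is mostly the KV cache. Here's a back-of-the-envelope sketch of why long context costs so much, assuming an 80-layer model with hidden size 8192, full multi-head attention, batch size 1, and an fp16 cache (assumed numbers, not taken from the linked repo):

```python
# KV cache = 2 (K and V) x layers x hidden x context x bytes per element.
# Grouped-query attention, if the model uses it, shrinks this considerably.
layers, hidden, ctx, bytes_el = 80, 8192, 32768, 2
kv_gb = 2 * layers * hidden * ctx * bytes_el / 1e9
print(f"~{kv_gb:.0f} GB for a full {ctx}-token KV cache")  # ~86 GB
```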

18 points

CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.). To run Qwen-72B-Chat in bf16/fp16, at least 144 GB of GPU memory is required (e.g., 2x A100-80G or 5x V100-32G). To run it in int4, at least 48 GB of GPU memory is required (e.g., 1x A100-80G or 2x V100-32G).

It’s derived from Qwen-72B, so same specs. Q2 clocks in at only ~30 GB.
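
For reference, a minimal sketch of what int4 loading looks like with Hugging Face transformers plus bitsandbytes; the hub id and device map here are assumptions on my part, not something from the quote above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "abacusai/Smaug-72B-v0.1"  # hypothetical hub id
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",  # shard layers across whatever GPUs are visible
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```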

11 points

Just a data center or two. Easy peasy dirt cheapy.

5 points

I think I read somewhere that you’ll basically need 130 GB of RAM to load this model. You could probably get some used server hardware for less than $600 to run this.

16 points

Oh, if only it were so simple, lmao. You need ~130 GB of VRAM, a.k.a. the graphics card’s RAM. So you would need about nine consumer-grade 16 GB graphics cards, and you’ll probably need Nvidia because of fucking CUDA, so we’re talking thousands of dollars. Probably approaching $10k.

Of course you can get cards with more VRAM per card, but not in the consumer segment, so even more $$$$$$.

9 points

AFAIK you can substitute VRAM with system RAM at the cost of speed. Not exactly sure how that speed loss scales with the sheer size of these models, though; I have to imagine it would run insanely slowly on a CPU.
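
Something like the sketch below, using Hugging Face transformers with an accelerate-style device map; the hub id and the memory caps are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM

# Layers that don't fit under the GPU cap are placed in system RAM instead;
# anything living there runs far slower than layers resident on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    "abacusai/Smaug-72B-v0.1",                 # hypothetical hub id
    device_map="auto",
    max_memory={0: "16GiB", "cpu": "130GiB"},  # GPU 0 cap + system RAM cap
    torch_dtype="auto",
)
```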

0 points

I’m pretty sure you can load the model using RAM, like another poster said. Here’s a used server under $600 that could theoretically run it: ebay.

10 points

Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need 130 GB of VRAM on your GPUs.

6 points

So can I run it on my Radeon RX 5700? I overclocked it some and am running it as a 5700 XT, if that helps.

3 points

Around 48 GB of VRAM if you want to run it in 4-bit.
