23 points

What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.

40 points

Typically you need about 1GB of graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B-parameter model. Ouch.

Edit: you can try quantizing it. This reduces the memory required per parameter to 4 bits, 2 bits, or even 1 bit. As you reduce the size, the model’s performance can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
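
For a rough sketch of that arithmetic (actual usage runs higher once you add the KV cache and runtime overhead):

```python
# Back-of-the-envelope memory estimate: parameter count x bytes per
# parameter. Real usage is higher (KV cache, activations, overhead).
PARAMS_BILLIONS = 405

for bits in (16, 8, 4, 2, 1):
    gb = PARAMS_BILLIONS * bits / 8  # 1e9 params at bits/8 bytes each ≈ GB
    print(f"{bits:>2}-bit: ~{gb:,.0f} GB")
```

At 1 bit that works out to roughly 51GB, which is where the “under 64GB” figure comes from.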

13 points

Or you could run it on CPU and system RAM, at a much slower rate.

11 points

Yeah, uh, let me just put in my 512GB RAM stick…

2 points

Finally! My dumb-dumb 1TB RAM server (4x E5-4640 + 32x 32GB DDR3 ECC) can shine.

8 points

At work we have a small cluster totalling around 4TB of RAM.

It has 4 cooling units, a cubic metre of PSUs, and it must take up something like 30 m² of space.

4 points

When the 8-bit quants hit, you could probably lease a 128GB system on RunPod.

3 points

Can you run this in a distributed manner, like with Kubernetes and lots of smaller machines?

2 points

According to Hugging Face, you can run a 34B model using at most 22.4GB of RAM. That fits on an RTX 3090 Ti.

1 point

You mean my 4090 isn’t good enough 🤣😂

1 point

Hmm, I probably have that much distributed across my network… maybe I should look into some way of distributing it across multiple GPUs.

Frak, just counted and I only have 270GB installed. Approx 40GB more if I install some of the deprecated cards in any spare PCIe slots I can find.

12 points

405B ain’t running local unless you’ve got a proper enterprise-grade setup lol

I think 70B is possible, but I haven’t found anyone confirming it yet.

Also would like to know the specs from whoever did it.

8 points

I’ve run quantized 70B models on CPU with 32 gigs, but it is very slow.
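
For the curious, a minimal llama-cpp-python sketch of a CPU-only run like that (the model filename is hypothetical; any GGUF quant small enough for your RAM works the same way):

```python
from llama_cpp import Llama

# CPU-only inference: n_gpu_layers=0 keeps all the weights in system RAM.
llm = Llama(
    model_path="llama-3-70b-instruct.Q2_K.gguf",  # hypothetical filename
    n_gpu_layers=0,
    n_threads=8,    # match your physical core count
    n_ctx=2048,
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

It runs, just very slowly.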

3 points

I’m gonna add some RAM in the hope that I can split the original 70B between GPU and RAM. The 8B is great as it is.

Looks like it should be possible; not sure how much of a performance hit offloading to RAM will cause. FAFO.
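
Rough math for the split, as a sketch (all figures illustrative: a ~40GB Q4 quant of a 70B model with 80 transformer layers, and a 24GB card):

```python
# Estimate how many transformer layers fit in VRAM. Assumptions are
# illustrative: ~40GB Q4 quant of a 70B model, 80 layers, 24GB GPU.
model_gb, n_layers, vram_gb = 40, 80, 24

gb_per_layer = model_gb / n_layers               # ~0.5 GB per layer
gpu_layers = int(vram_gb * 0.9 / gb_per_layer)   # keep ~10% VRAM headroom
print(f"offload ~{min(gpu_layers, n_layers)} of {n_layers} layers to the GPU")
```

Whatever doesn’t fit stays in system RAM; in llama.cpp that split is the n_gpu_layers (-ngl) setting.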

3 points

I have a home server with 140 gigs of RAM; it was surprisingly cheap. It’s an HP Z6 with a Xeon Gold 6146 processor.

I found a seller who was selling it with a low-spec Silver chip and 16 gigs of RAM for like 250 bucks.

Found the processor upgrade for about $120 and spent another $150 on 128GB of second-hand ECC DDR4.

I think the total cost was something like $700 after throwing in a couple of 8TB hard drives.

I’ve also put an Nvidia 4070 in it, which I got doing some horse trading.

How close am I on specs to being able to run the 70B version?

2 points

I regularly run Llama 3 70B unquantized on two P40s and CPU at like 7 tokens/s. It’s usable but not very fast.

1 point

So there’s no way a 24GB card and 64GB of RAM can run this thing?

6 points

As a general rule of thumb, you need about 1GB per 1B parameters, so you’re looking at about 405GB for the full size of the model.

Quantization can compress it down to 1/2 or 1/4 of that, but “makes it stupider” as a result.
