What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.
Typically you need about 1 GB of graphics RAM per billion parameters (i.e. one byte per parameter, which assumes 8-bit weights; the native FP16 weights need twice that). This is a 405B-parameter model. Ouch.
Edit: you can try quantizing it, which cuts the memory per parameter to 4 bits, 2 bits, or even 1 bit. As you shrink it, the model’s quality can suffer. So in the extreme case you might be able to run this in under 64 GB of graphics RAM.
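For anyone who wants the back-of-envelope math, here’s a quick Python sketch (the 20% overhead factor is my own rough guess for KV cache and runtime buffers, not a published figure):

```python
def model_mem_gb(params_billions: float, bits_per_param: float,
                 overhead: float = 1.2) -> float:
    """Rough footprint: weights plus ~20% for KV cache, activations,
    and runtime buffers (a guess; varies a lot by setup)."""
    weight_bytes = params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

for bits in (16, 8, 4, 2, 1):
    print(f"405B @ {bits:>2}-bit: ~{model_mem_gb(405, bits):.0f} GB")
```

That prints roughly 972, 486, 243, 122, and 61 GB respectively, so the 1-bit case does land just under 64 GB.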
Hmm, I probably have that much distributed across my network… maybe I should look into some way of distributing it across multiple GPUs.
Frak, just counted and I only have 270 GB installed. Approx 40 GB more if I install some of the deprecated cards in any spare PCIe slots I can find.
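For the multi-GPU idea, a minimal sketch with Hugging Face transformers + accelerate looks like this (the model id is an assumption on my part, so check the Hub; device_map="auto" only shards across GPUs in a single box, so spreading it across a whole network would need something like llama.cpp’s RPC backend instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; check the actual repo name on the Hub.
model_id = "meta-llama/Meta-Llama-3.1-405B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory vs FP32; still ~810 GB for 405B
    device_map="auto",          # shard layers across all visible GPUs,
                                # overflowing to CPU RAM when they fill up
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```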
405B ain’t running local unless you’ve got a proper enterprise-grade setup lol
I think 70B is possible but I haven’t found anyone confirming it yet.
Also, I’d like to know the specs of whoever did it.
I have a home server with 140 GB of RAM; it was surprisingly cheap. It’s an HP Z6 with a Xeon Gold 6146 processor.
I found a seller offering it with a low-spec Silver and 16 GB of RAM for about $250.
Found the processor upgrade for about $120 and spent another $150 on 128 GB of second-hand ECC DDR4.
I think the total cost was something like $700 after throwing in a couple of 8 TB hard drives.
I’ve also installed an Nvidia 4070 in it, which I got through some horse trading.
How close are my specs to being able to run the 70B version?
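To save anyone the arithmetic, here’s the rough weight math for 70B against that box (140 GB of RAM plus the 4070’s ~12 GB of VRAM); the offload comment is my own guess:

```python
# Billions of params * (bits / 8) bytes per param ~= GB of weights.
for bits, name in ((16, "FP16"), (8, "Q8"), (4, "Q4")):
    print(f"70B {name:>4}: ~{70 * bits / 8:.0f} GB of weights")

# FP16 (~140 GB) is right at the RAM ceiling once KV cache is added;
# Q8 (~70 GB) and Q4 (~35 GB) fit comfortably for CPU inference,
# with roughly 12 GB worth of layers offloadable to the 4070.
```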
I regularly run Llama 3 70B unquantized on two P40s and CPU at like 7 tokens/s. It’s usable but not very fast.
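For reference, that kind of partial GPU offload is straightforward with the llama-cpp-python bindings; here’s a minimal sketch (the model path and layer count are placeholders for your own setup):

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers land on the GPUs;
# the rest run on CPU. Raise it until VRAM is nearly full.
llm = Llama(
    model_path="./llama-3-70b-f16.gguf",  # placeholder path
    n_gpu_layers=30,                      # placeholder; depends on your VRAM
    n_ctx=4096,
)

print(llm("Q: Why is the sky blue? A:", max_tokens=64)["choices"][0]["text"])
```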