brucethemoose
One can’t offload “usable” LLMs without tons of memory bandwidth and plenty of RAM. It’s just not physically possible.
You can run small models like Phi pretty quickly, but I don't think people will be satisfied with that for Copilot, even as basic autocomplete.
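To make that concrete, here's a back-of-envelope sketch in Python (my own illustrative numbers, assuming ~4-bit quantization and approximate published bandwidths, not benchmarks). Decoding reads every weight once per generated token, so generation speed is capped at memory bandwidth divided by model size:

    # Upper bound on decode speed for a memory-bandwidth-bound LLM:
    # every weight is read once per generated token.
    def max_tokens_per_sec(bandwidth_gb_s, params_billions, bytes_per_param):
        model_gb = params_billions * bytes_per_param
        return bandwidth_gb_s / model_gb

    # ~90 GB/s: dual-channel DDR5 IGP; ~400 GB/s: Apple M3 Max (approx. specs)
    for name, bw in [("IGP, ~90 GB/s", 90), ("M3 Max, ~400 GB/s", 400)]:
        small = max_tokens_per_sec(bw, 2.7, 0.5)  # Phi-2-class at 4-bit
        big = max_tokens_per_sec(bw, 70, 0.5)     # 70B-class at 4-bit
        print(f"{name}: Phi-class ~{small:.0f} tok/s, 70B-class ~{big:.1f} tok/s")

A ~90 GB/s IGP pushes a Phi-class model at a usable clip but crawls through a 70B-class model at ~2.5 tok/s, and that's before compute and KV-cache overhead.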
Roughly 2x the memory bandwidth of Intel's current IGPs is the threshold where offloading becomes viable, IMO. And that's exactly what AMD/Apple are producing.
The localllama crowd is supremely unimpressed with Intel, not just because of software issues but because Intel just doesn't have beefy enough designs the way Apple does (and AMD soon will). Even the latest chips aren't fast enough for a "smart" model, and the A770 doesn't have enough VRAM to be worth the trouble.
They made some good contributions to runtimes, but seeing how they fired a bunch of engineers, I’m not sure that will continue.
Bernie beating Trump is a nice fantasy, but it wouldn’t happen either.
This election was an information war, and the Democrats fought it like they were in another century. And they're going to lose again if they keep thinking that sensible policy, however revolutionary and in touch, and hopeful TV ads are how you win elections.
They need their own form of MAGA. And they need to flood and poison the internet and airwaves just like the Republicans did.
Fight fire with fire.
Dive into social media and be shameless. Get dirty. Lie. Pay influencers. Promote conspiracies. Push meme policies.
What are Republicans gonna do? Demonize Democrats more? That will just get them more eyeballs, which is exactly how Trump won.
The actual answer is that the world should have banned targeted advertising/engagement optimization so that assholes wouldn't trend so much, but we are way past that.
If they wanna abandon discrete GPUs… OK.
But they need graphics. They should make M Pro/Max-ish integrated GPUs with wide memory buses, like AMD is already planning to do, instead of topping out at bottom-end configs.
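Quick sketch of why bus width matters (approximate public specs; the SKU labels are just for illustration): peak bandwidth is bus width times transfer rate.

    # Peak DRAM bandwidth = (bus width in bytes) * (transfers per second)
    def peak_gb_s(bus_bits, mt_s):
        return bus_bits / 8 * mt_s / 1000  # MT/s -> GB/s

    print(peak_gb_s(128, 5600))  # dual-channel DDR5-5600 (typical IGP): 89.6
    print(peak_gb_s(256, 6400))  # 256-bit LPDDR5X (M-Pro-ish):         204.8
    print(peak_gb_s(512, 6400))  # 512-bit LPDDR5X (M-Max-ish):         409.6

That 2-4x bandwidth jump is exactly what separates a bottom-end config from something that can actually feed a big IGP.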
They could turn around and sell them as GPU-accelerated servers too, like the market is begging for right now.