Pumpkin Escobar
Is this the new “Simpsons already did it”?
Cunk already did it…
(3:40 if you want to get right to it) https://www.youtube.com/watch?v=UoSUx1xyj1E
Yale Z-Wave locks work well and last a long time between battery changes, and they can run off rechargeables. You can add them to Home Assistant, and they work with the Siri and Alexa integrations through Home Assistant.
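If it helps, here's a rough sketch of what controlling one of those locks through Home Assistant's REST API looks like once it's paired; the host, token, and entity name below are placeholders, not real values from my setup:

```python
# Minimal sketch: driving a Z-Wave lock that has been added to Home Assistant,
# via Home Assistant's REST API. Host, token, and entity_id are placeholders;
# the lock shows up as a lock.* entity once the Z-Wave integration has paired it.
import requests

HA_URL = "http://homeassistant.local:8123"    # assumed Home Assistant address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"        # created under your HA user profile
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def set_lock(entity_id: str, lock: bool) -> None:
    """Call the lock.lock / lock.unlock service for the given entity."""
    service = "lock" if lock else "unlock"
    resp = requests.post(
        f"{HA_URL}/api/services/lock/{service}",
        headers=HEADERS,
        json={"entity_id": entity_id},
        timeout=10,
    )
    resp.raise_for_status()

# Example: lock a hypothetical Yale front-door entity
set_lock("lock.front_door", lock=True)
```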
Had some Schlage locks that ran through batteries way too fast.
When they’re not recording your desktop in an unencrypted database for AI, boot-looping your computer with bad patches or showing ads in your start menu, they’re disabling your account for calling family to see if they’re still alive. Damn.
Take ollama, for instance: either the whole model fits in VRAM and compute is done on the GPU, or it runs in system RAM and compute is done on the CPU. Running models on the CPU is horribly slow; you won't want to do it for large models.
LM Studio and others let you run part of the model on the GPU and part on the CPU, splitting the memory requirements, but it's still pretty slow.
Even the smaller 7B-parameter models run pretty slowly on the CPU, and the huge models are orders of magnitude slower.
So technically more system RAM will let you run some larger models, but you'll quickly figure out you just don't want to.
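For anyone curious what that GPU/CPU split looks like in practice, here's a minimal sketch using llama-cpp-python (bindings for llama.cpp, the engine both ollama and LM Studio build on); the model path and layer count are just placeholders:

```python
# Partial GPU offload with llama-cpp-python: n_gpu_layers controls how many
# transformer layers go to VRAM, with the rest staying in system RAM on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # any local GGUF model file
    n_gpu_layers=20,   # offload 20 layers to VRAM; 0 = CPU only, -1 = everything on GPU
    n_ctx=2048,        # context window size
)

out = llm("Explain Z-Wave in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```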
Boeing made $76B in revenue in 2023. This is slightly more than one day's revenue for them (~$210M/day), or a bit more than 10 days' profit for them (~$21M/day). They will keep doing what they're doing, but increase their spending on a PR campaign to improve their public image.
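Quick sanity check on those per-day numbers (only the $76B and $21M/day figures come from above; the rest is just arithmetic):

```python
# Back-of-the-envelope check of the per-day figures quoted above.
annual_revenue = 76e9                    # Boeing's 2023 revenue, as quoted
print(f"revenue/day ≈ ${annual_revenue / 365 / 1e6:.0f}M")    # ≈ $208M, i.e. the ~$210M/day figure

quoted_daily_profit = 21e6               # the $21M/day figure quoted above
print(f"implied annual profit ≈ ${quoted_daily_profit * 365 / 1e9:.1f}B")  # ≈ $7.7B/yr implied
```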
Respect, but…
FWIW, they didn't merge it; they closed the PR without merging. Link to the line that still exists on master.
The recent comments are from the announcement of the Ladybird browser project, which was forked from some of the browser code in SerenityOS; I guess people are digging into who wrote the code.
Not arguing that the new comments on the PR are good/bad or anything, just a bit of context.