Spedwell
At least there is a big (ish?) player in the Chromium-sphere pushing back against this.
The more browsers that don’t support this initially, the slower adoption by websites will be. If enough of the browser market share remains incompatible, and if we’re lucky, maybe this technology won’t stick.
CS:GO cases pulled in $1 billion of revenue in 2023. The Steam store brought in $8.5 billion that same year, of which Valve keeps roughly a 30% cut. That’s Valve’s cut of all sales traffic on Steam vs. in-game loot crates on a single title.
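Back-of-the-envelope on those figures (assuming Valve’s standard ~30% revenue share applies across those store sales; actual terms vary with volume):

    \[ 0.30 \times \$8.5\,\text{B} \approx \$2.6\,\text{B}\ \text{(Valve's cut of the whole store)} \quad \text{vs.} \quad \$1\,\text{B}\ \text{(CS:GO cases alone)} \]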
Loot boxes pull insane numbers. And yes, they exploit children and problem gamblers. Love to see so many Valve fans downvote you :/
Two Vonnegut novels—God Bless You, Mr. Rosewater and Player Piano—fundamentally shifted the way I view the world.
The novels primarily deal with the economy, automation, and human welfare. When I was young I defaulted to a laissez-faire economic mindset, and basically assumed automation and technology would always improve our quality of life. I was very much in the Ayn Rand club on economic and moral issues. These books were ultimately what made me reflect and consider the other “spiritual” (in the sense Vonnegut uses the term) aspects of human welfare. Vonnegut was my introduction to humanist thought, and I owe the vast majority of my personal moral development to the influence of these two books.
Wow, what a dishearteningly predictable attack.
I have studied computer architecture and hardware security at the graduate level—though I am far from an expert. That said, any student in the classroom could have laid out the theoretical weaknesses in a “data memory-dependent prefetcher”.
My gut says (based on my own experience having conversations like this) the engineers knew there was an “information leak” but management did not take it seriously. It’s hard to convince someone without a cryptographic background why you need to {redesign/add a workaround/use a lower-performance design} because of “leaks”. If you can’t demonstrate an attack, they will assume the issue isn’t exploitable.
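For anyone who hasn’t seen the classroom version of that weakness, here is a minimal C sketch of why a DMP quietly invalidates the usual constant-time argument. Everything here is illustrative—the names and layout are my own, not any real codebase or exploit:

    /* Textbook constant-time code: the ADDRESS stream is fixed and the
     * secret VALUES never feed an array index or a branch condition. */
    #include <stdint.h>
    #include <stddef.h>

    uint64_t secret[512]; /* e.g., key material inside a crypto library */

    uint64_t fold_secret(void)
    {
        uint64_t acc = 0;
        for (size_t i = 0; i < 512; i++)
            acc ^= secret[i];
        return acc;
    }

    /* On a machine with a data memory-dependent prefetcher, that analysis
     * is incomplete: the prefetcher inspects the VALUES being loaded, and
     * if secret[i] happens to look like a valid pointer it may dereference
     * it and pull that cache line in. Cache state now depends on secret
     * data. An attacker who can craft inputs so that intermediate values
     * look like chosen pointers, then time memory accesses afterward, gets
     * a "does my guess appear in the secret?" oracle, with no branch or
     * secret-indexed load anywhere in the code. */

That is the whole gap between a “leak” on a whiteboard and a demo management will take seriously: the oracle is easy to describe, but the timing machinery around it takes real effort to build.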
We should already be at that point. We have already seen LLMs’ potential to inadvertently backdoor your code and to help you violate copyright law (I suppose we do need to wait and see what the courts rule, but I’ll be rooting for the open-source authors).
If you use LLMs in your professional work, you’re crazy. I would never be comfortable opening myself up to the legal and security liabilities of AI tools.