Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
That anti-David-Gerard Wikipedia nontroversy from a while back has made it to Elon Musk’s twitter feed: https://xcancel.com/elonmusk/status/1849862303614894223
I wish I was cool enough to have the world’s worst people get so mad at me that they…make fan art and put it up on their website. What’s Elon going for here?
If their aim is to make DG look bad ass, they’re doing a good job.
david gerard refuses to respond to my allegations that he wears an awesome trenchcoat and uses magic to trap his opponents in a realm where everything is made of the pages of a failed novella, and I think that says a lot
oh boy it’s gonna be funny/painful to see musk’s biggest fans try to litigate this on Wikipedia with all the banal nothing and weird stalking that trace’s article consists of
trace is on a quillette podcast about it too, prob where Musk got it
no new outbreak on Wikipedia itself
my condolences on the likely death threats. I’ve been there before and it sucks
Only now? It is amazing how disconnected Musk is from that part of SV culture. Amazingly, he even sucks at the thing he should have a home-field advantage at.
I had a mini identity crisis when I realized I’m more aware of techno-fascist writing than Elon Musk of all people.
Maybe not super amazing, he probably avoids/firewalls out most of these people because they’d constantly be hitting him up for money
Update on LLM reviewer situation:
PM is down to let us pitch them our argument. Good news: PM seems like a cool person, is open minded, and is being pretty frank about the forces at work here. Bad news: taking action on this will open a whole can of worms, so any proof has to be ironclad. After conferring with our local grant wizards, the battle plan is to crank out a 15 minute pitch consisting of:
- a 2 min elevator pitch of our tech, highlighting what the reviews mangled
- intro to LLMs for people who know what glycosylation is
- intro to semiotics for the same
- show how transformer architectures transform symbols into symbols to produce text-shaped objects without actual intent, ideas, or context (and why “automated AI detection” is also bullshit).
- show a few examples of plausible-at-first-glance gen-AI slop (the nonexistent Turkish fortress, mouse dck, etc)
- highlight how our weird reviews (both good and bad) fit exactly into this bin (absolutely misinterpreting a table, inventing a bacterial species we didn’t use and talking shit about it, miscounting our team members, etc)
We’ll be leaning on the Stochastic Parrots paper pretty hard, because it’s a good entry into the field on the skeptical side and is just well constructed in general. I’m also on the hunt for a simplified diagram of how LLMs convert tokens to arrays to tokens, from the original transformer literature. Unfortunately, so much of the literature is obscurantist on purpose, and I want to avoid falling into the “It can’t be that stupid” trap. Any pointers in that direction are most welcome!
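In case it helps the pitch: the tokens-to-arrays-to-tokens loop can be shown in a few lines of plain Python. Everything below (the vocab, the vectors, the averaging standing in for attention) is made up for illustration; a real model learns billions of weights, but the shape of the pipeline is the same — look up numbers, do arithmetic on arrays, pick the highest-scoring token. No meaning or intent anywhere in the loop.

```python
# Toy sketch of the tokens -> arrays -> tokens pipeline an LLM runs.
# Vocab and vectors are invented for illustration only.

vocab = ["the", "cat", "sat", "on", "mat"]
token_to_id = {tok: i for i, tok in enumerate(vocab)}

# "Embedding": each token id maps to a vector of numbers (here, length 3).
embeddings = [
    [0.1, 0.2, 0.3],  # the
    [0.9, 0.1, 0.4],  # cat
    [0.2, 0.8, 0.5],  # sat
    [0.3, 0.3, 0.1],  # on
    [0.7, 0.6, 0.2],  # mat
]

def next_token(context):
    """Words -> ids -> vectors -> scores -> word. Just arithmetic."""
    ids = [token_to_id[t] for t in context]
    # Crudely "mix" the context by averaging its vectors
    # (real attention is a weighted version of this idea).
    avg = [sum(embeddings[i][d] for i in ids) / len(ids) for d in range(3)]
    # Score every vocab token against the mixed context (dot product).
    scores = [sum(a * b for a, b in zip(avg, emb)) for emb in embeddings]
    return vocab[scores.index(max(scores))]

print(next_token(["the", "cat"]))
```

The point for a non-ML audience: the output is text-shaped because the arithmetic was fit to text, not because anything in the loop knows what a fortress or a bacterium is.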
Wish us luck, heh!
good luck! it sounds like you’re coming in remarkably well-prepared, so unless they’re gonna go fingers-in-ears (and it sounds like the PM’s better than that), you’re at least likely to make an impact
Unfortunately, so much of the literature is obscurantist on purpose
between this and all the SEO on OpenAI’s marketing horseshit and breathlessly parroted press releases, it’s exhausting to find good sources for how any of this stuff actually works in reality. shit, I’ve had old primary sources on things like Sora get buried after OpenAI’s promises didn’t pan out. I’m hoping you can find what you need — our back archives might have a few links if you haven’t searched through here yet.
Kinda nervous about it, not gonna lie. Really appreciate the positive vibes!
Edit: And thanks so much for keeping this community alive!
Actual message I got while renewing my insurance plan last night. Thank you for adding a shitty chat bot which will give me false information about my life and death decisions, bravo.
This tool solely exists so that you can ask it questions and get assistance, but also we disavow any responsibility for the answers to the questions we just told you to ask it. Has this kind of clause been held up in court anywhere? Like, I’m sure it has but it seems like the same logic would be ridiculous in any other context. Like, consider the fraught legal history of the anarchist cookbook.
has the era of active sabotage of the autoplag inputs begun? let’s hope so
Considering Glaze and Nightshade have been around for a while, and I talked about sabotaging scrapers back in July, arguably, it already has.
Hell, I ran across a much smaller scale case of this a couple days ago:
Not sure how effective it is, but if Elon’s stealing your data for his autoplag no matter what, you might as well try to force-feed it as much poison as you can.
It’s almost completely ineffective, sorry. It’s certainly not as effective as exfiltrating weights via neighborly means.
On Glaze and Nightshade, my prior rant hasn’t yet been invalidated and there’s no upcoming mathematics which tilt the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which test inputs to see how well the tags line up; your example might be rejected as insufficiently normal-cat-like.
I think that “force-feeding” is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that “forced” input is destined to be discarded or retagged.
yeah this is the thing I’ve been thinking a lot about
fucking reCaptcha is literally mass-weaponising users for data filtration, and there is no good counter besides just not using reCaptcha (which is something one can’t easily pull off without things like regulatory action, massive reputational problems that make people gtfo, etc)
I have similar worries about cloudflare being such a massive chokepoint and using that position to enable “ai bot filter” services. feels extremely monopolistic, but ianal and I’m not entirely sure what the case grounds/structure on that would be (if any)
the only other viable strategy at the moment is fully breaking contact with any potential bad traffic systems, and that’s extremely fucking dire because that’s yet another nail in the coffin of the increasingly less open internet
I saw people say they would add 10%-opacity layers of the photo of Musk with Epstein’s accomplice (whose name I forgot for a second and am too lazy to look up). Would be nice if there was a tool to do so automatically. (Not that I post on twitter anymore.)
tbh that sounds like a pretty easy script to write! Too bad I am not near a computer rn
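For what it’s worth, the core of that script is just per-pixel alpha blending. Here’s a minimal sketch of the arithmetic in pure Python (the 2-pixel “images” are stand-ins; in practice you’d load real files with a library like Pillow instead):

```python
# Sketch of a 10%-opaque overlay as per-pixel "over" compositing.
# The tiny pixel lists below are placeholders for real image data.

OPACITY = 0.10  # 10% overlay, 90% original

def blend_pixel(base, overlay, alpha=OPACITY):
    """Blend one RGB pixel: result = base*(1-alpha) + overlay*alpha."""
    return tuple(round(b * (1 - alpha) + o * alpha) for b, o in zip(base, overlay))

def blend_image(base_pixels, overlay_pixels, alpha=OPACITY):
    """Blend two equal-sized flat lists of RGB tuples."""
    return [blend_pixel(b, o, alpha) for b, o in zip(base_pixels, overlay_pixels)]

# White base, black overlay: each channel becomes 255*0.9 + 0*0.1 ≈ 230.
base = [(255, 255, 255), (255, 255, 255)]
overlay = [(0, 0, 0), (0, 0, 0)]
print(blend_image(base, overlay))
```

With Pillow installed, the whole thing collapses to `Image.blend(base, overlay, 0.10)` on two same-sized RGB images, plus a save call.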
It would be funny if someone was literally beating up servers with a wooden shoe.
I thought they were gonna do that themselves by feeding on their own outputs littered all over the www. Maybe they can use some help.
Update on the character.ai lawsuit:
Gizmodo just reported on the story - in addition to the suicide that kicked this litigation off, they’ve also discovered an hour-long screen recording where a test account (self-reported as thirteen years old) gets sexted relentlessly by the site’s chatbots.
So, in addition to driving one specific teen to suicide, character.ai is also facing accusations that their bots are sexually harassing children.
I have something that I want to post to MoreWrite and this is very convenient for my story