Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
this will probably become a NotAwfulTech post after I explore a bit more, but here’s a quick follow-up to my post last stubsack looking for language learning apps:
the open source apps for the learning system I want to use do exist! that system is essentially an automation around reading an interesting text in Spanish (or any other language), marking and translating terms and phrases with a translation dictionary, and generating flash cards/training materials for those marked terms and phrases. there’s no good name for the apps that implement this idea as a whole so I’m gonna call them the LWT family for reasons that will become clear.
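the workflow described above can be sketched in a few lines of Python. to be clear, this is a toy illustration of the idea, not the actual code or API of LWT, Lute, or LinguaCafe; the function name, dictionary format, and card layout are all made up:

```python
# Toy sketch of the LWT-family workflow: take a text, a set of terms the
# reader has marked, and a translation dictionary, and emit flashcard
# tuples ready for spaced-repetition training. Names are illustrative.

def make_flashcards(text, marked_terms, dictionary):
    """Return (term, translation, example_sentence) cards for marked terms."""
    cards = []
    for sentence in text.split("."):
        for term in marked_terms:
            if term in sentence:
                # keep the sentence the term appeared in as training context
                cards.append((term, dictionary.get(term, "?"), sentence.strip()))
    return cards

text = "El gato duerme. La casa es grande."
dictionary = {"gato": "cat", "casa": "house"}
cards = make_flashcards(text, ["gato", "casa"], dictionary)
# each card pairs the marked term with its translation and source sentence
```

the real apps layer a lot on top of this (term status tracking, multi-word phrases, per-language parsing), but the core loop really is this simple.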
briefly, the LWT family apps I’ve discovered so far are:
- LWT (Learning With Texts) is the original open source system that implemented the learning system I described above (though LWT itself originated as an open source clone of LingQ with some ideas from other learning systems). the Hugo Fara fork is the most recently-maintained version of LWT, but it’s generally considered finished (and extraordinarily difficult to modify) software. I need to look into LWT more since it’s still in active use; I believe it uses an Anki exporter for spaced repetition training. it doesn’t seem to have a mobile UI, which might be a dealbreaker since I’ll probably be doing a lot of learning from my phone
- Lute (Learning Using Texts) is a modernized LWT remake. this one is being developed for stability, so it’s missing features but the ones that exist are reputedly pretty solid. it does have a workable mobile UI, but it lacks any training framework at all (it may have an extremely early Anki plugin to generate flash cards)
- LinguaCafe is a completely reworked LWT with a modern UI. it’s got a bunch of features, but it’s a bit janky overall. this is the one I’m using and liking so far! installing it is a fucking nightmare (you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native) but the UI’s very modern, it works well on mobile (other than jank), and it has its own spaced repetition training framework as well as (currently essentially useless) Anki export. it supports a variety of freely available translation dictionaries (which it keeps in its own storage so they’re local and very fast) and utterly optional DeepL support I haven’t felt the need to enable. in spite of my nitpicks, I really am enjoying this one so far (but I’m only a couple days in)
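for anyone wondering what that install ritual looks like in practice, the general shape is a compose file plus host-side permission surgery on the bind mounts. this is an illustrative sketch only, with made-up image names and paths; check LinguaCafe’s own repo for the real file:

```yaml
# Illustrative shape only -- not LinguaCafe's actual compose file.
services:
  webserver:
    image: linguacafe/linguacafe-webserver:latest   # hypothetical image name
    ports:
      - "9191:80"
    volumes:
      # bind mounts whose host-side permissions you have to open wide
      # (e.g. chmod -R 777 on the host) before the app will start
      - ./logs:/var/www/html/storage/logs
      - ./database:/var/lib/mysql
```

the “docker not podman” requirement apparently comes from the app assuming docker’s specific UID/permission behaviour on those mounts.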
you have to use their docker-compose file only, with docker not podman, and absolutely slaughter the permissions on your bind mounts, and no you can’t fire it up native
yeah I have no idea what any of these words mean
speaking of
one of my endeavours over the last few days (though heavily split into pieces between migraines and other downtime) was figuring out how to segment containers into vlan splits (bc reasons), and how to do this on podman
the docs will (by omission or directly) lie to you so much. the execution boundaries of root vs rootless cause absolutely hilarious failure modes. things that are required for operation are Recommended packages (in the apt/dpkg sense)
utter and complete clownshow bullshit. it does my head in to think how much human time has been wasted on falling arse-over-face to get in on this shit purely after docker ran a multi-year vc-funded pr campaign. and even more to see, at every fucking interaction with this shit, just how absolutely infantile the implementations of any of the ideas and tooling are
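for the curious, the setup being fought with above roughly means attaching containers to per-vlan macvlan networks. a sketch, with interface names, vlan IDs, and subnets entirely made up, and assuming the host-side vlan subinterface already exists (this is exactly the kind of thing that rootless podman can’t do, hence the root-vs-rootless failure modes):

```shell
# Sketch only: put containers on an existing host vlan subinterface
# via a macvlan network. eth0.30, the subnet, and all names are
# illustrative. Rootless podman generally cannot do this; run as root.
sudo podman network create --driver macvlan \
    --subnet 10.30.0.0/24 --gateway 10.30.0.1 \
    -o parent=eth0.30 vlan30

# attach a container to that vlan with a fixed address
sudo podman run -d --network vlan30 --ip 10.30.0.50 --name web \
    docker.io/library/nginx
```

whether any given docs page admits which of these steps require root is, per the above, a coin flip.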
Another front in the war of the sexes has opened, and men are on the back foot!
HN: Women are using ChatGPT to catch men lying about their height
The counterattacks are planned!
“They” all lie:
I personally take my job as Photo Detective very seriously, a trait I only acquired from too many dates with people who did not look as good as their photos in real life.
Double standards!!
I’m pretty sure if men would use ChatGPT to catch women lying about their body this conversation would be in a completely different tone.
For some reason The Internet decided that 6 feet was an arbitrary limit, under which men could be just ignored.
Imagine men deciding that any woman with smaller than (insert random body measurement we can’t affect) could just be filtered out?
Both sides!
Women will hate finding out that it can guess their age and weight! It can even guess their socioeconomic group, and if they dye their hair.
Great, guys can use it too. We’ll see how these chicks react…
it can guess their age and weight! It can even guess their socioeconomic group, and if they dye their hair.
But can it detect Inexplicable Cimmerian Vibes? Can it guess the haplogroup?
She knows you swiped left. She slams the table with both her hands. The formica cracks beneath her mighty fists as she shouts oaths in the name of Crom.
Post from July, tweet from today:
It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream
There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”
Embarrassing to be this uninformed about such a high-profile issue, no less one that you’re choosing to write about derisively.
Google’s Search Dominance Leaves Sites Little Choice on AI Scraping (no archive - archive.ph appears to have died)
Because Google literally can’t stop being evil even when the world’s eyes are on it
yall might want to take notice of this thing https://discuss.tchncs.de/post/20460779
https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2024-08-14/Recent_research
STORM: AI agents role-play as “Wikipedia editors” and “experts” to create Wikipedia-like articles, a more sophisticated effort than previous auto-generation systems
ai slop in extruded text form, now longer and worse! and burns extra square kilometers of rainforest
we propose the STORM paradigm for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking
oh come the fuck on
The authors hail from Monica S. Lam’s group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, “the first few-shot LLM-based chatbot that almost never hallucinates” – a paper that received the Wikimedia Foundation’s “Research Award of the Year” some weeks ago).
from the same minds as STOTRMPQA comes: we constructed this LLM so it won’t generate a response unless similar text appears in the Wikipedia corpus and now it almost never entirely fucks up. award-winning!