OrangeSlice
It’s not “leftist” necessarily, but all leftists should inform themselves about the Russian Revolution IMO, which is covered by season 10 (the final and currently ongoing season) of the Revolutions podcast (on Spotify).
Mike Duncan isn’t explicitly leftist or anything, but he does his homework and portrays things in a remarkably neutral way. Whether or not you’re a big stan of the USSR and what came after, the Russian Revolution was the most successful attempt at overthrowing capitalism (to an extent), and any future movement should learn from and analyze all aspects of what happened in those years.
Citations Needed (on Spotify) is my other favorite, with their in-depth media criticism (if you like Chomsky’s stuff, you’ll probably like what they have to say).
The whole point is that it’s federated too. The devs haven’t really been adamant about much to do with the software, except “we’re not getting rid of the hard-coded slur filter because we really don’t want to see the open fascists who would care about that using it.” I don’t fully agree or disagree with that, but they don’t really have much else to say about the community at large. Des has repeatedly stated that he wants there to be a healthy number of “mainstream”/“liberal” instances whose content he has nothing to do with.
I think what sites have been running into is that it’s difficult to tell what is and isn’t AI-generated, so enforcing a ban is difficult. Some would say it’s better to have an AI-generated response out in the open, where it can be verified and prioritized appropriately based on user feedback. If there’s a human-written response that’s higher quality, then that should win anyway, right? (Idk tbh)
This also makes me wonder how these models are going to be trained in the future. What happens when, say, half of the training data is the output of previous models? How do you steer/align future models and prevent compounding errors and bias? Strange times ahead.
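The compounding-bias worry above can be sketched with a toy simulation (purely illustrative, not how any real LLM pipeline works; `next_generation` and the `sharpen` knob are invented here): each “model generation” is trained on samples of the previous generation’s output, with a slight bias toward its most frequent tokens, and the diversity of the corpus shrinks over time.

```python
import random
from collections import Counter

def next_generation(corpus, size, sharpen=2.0, rng=None):
    """Sample a new training corpus from the empirical distribution of the
    previous one, with probabilities sharpened (count**sharpen) to mimic a
    model that slightly favors its own most likely outputs."""
    rng = rng or random.Random(0)
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] ** sharpen for t in tokens]
    return rng.choices(tokens, weights=weights, k=size)

rng = random.Random(42)
# "Human" data: 1000 draws over ~100 distinct tokens.
corpus = [rng.randrange(100) for _ in range(1000)]
diversity = [len(set(corpus))]
for _ in range(5):
    corpus = next_generation(corpus, 1000, rng=rng)
    diversity.append(len(set(corpus)))
print(diversity)  # distinct-token count shrinks across generations
```

Rare tokens drop out of each resampled corpus and can never come back, so errors and biases compound — the same feedback-loop shape people worry about when model output leaks back into training data.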
Between this and the “deep fake” tech I’m kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that’s kind of a hot take
I suggest that you edit the post title to just be “what’s everyone reading?”
It’s like on TikTok when someone starts off with “I did not expect that to blow up.”
It is possible that deletions will not propagate to other servers if they are running a forked version of the software.
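A toy model of why that happens (this is not Lemmy’s actual code or the ActivityPub wire format; `Server` and `honors_deletes` are invented for illustration): the origin server broadcasts a delete activity to its peers, but a fork that simply ignores deletes keeps its cached copy forever.

```python
# Toy sketch of federated deletion: each server applies activities it
# receives, but a forked server may choose not to honor deletes.
class Server:
    def __init__(self, honors_deletes=True):
        self.posts = {}
        self.honors_deletes = honors_deletes

    def receive(self, activity):
        if activity["type"] == "create":
            self.posts[activity["id"]] = activity["body"]
        elif activity["type"] == "delete" and self.honors_deletes:
            self.posts.pop(activity["id"], None)

stock = Server()
fork = Server(honors_deletes=False)
for s in (stock, fork):
    s.receive({"type": "create", "id": 1, "body": "hello"})
    s.receive({"type": "delete", "id": 1})
print(1 in stock.posts, 1 in fork.posts)  # False True
```

The origin server can only ask peers to delete; it has no way to force a remote server running modified software to comply.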