I have the application process enabled for people joining my instance, and I’ve gotten about 20 bots trying to join today, after nobody tried to join for 5 days. I can tell because the applications are generic messages: I put in a question asking what 2+3 is, and none of them have answered it at all, they just send a generic message.
Be careful out there, for all you small instance admins.
Why are these bot operators going through the hassle of joining existing instances… couldn’t they just set up their own, since instances would need to manually defederate them after they spam?
I wonder how difficult it would be to take a Formspree-style approach to combat the bots, using a hidden form field
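For anyone unfamiliar, the Formspree-style trick is a honeypot: the signup form includes a field hidden via CSS (e.g. an input named "website" with display:none) that humans never see, but naive form-filling bots populate every field. A minimal sketch of the server-side check, with the field name being just an example:

```python
# Honeypot check: reject applications where the CSS-hidden "website"
# field came back non-empty. Humans leave it blank; naive bots fill it.

def is_probably_bot(form: dict) -> bool:
    """Return True if the hidden honeypot field was filled in."""
    return bool(form.get("website", "").strip())

# is_probably_bot({"username": "alice", "website": ""})          -> False
# is_probably_bot({"username": "bot42", "website": "spam.test"}) -> True
```

It won’t stop a bot that parses the CSS, but it’s nearly free to add and filters out the dumbest crawlers.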
My guess would be because it is more difficult for other instances to deal with instances that have a combination of bots and actual users.
Because you can’t make thousands of spambots on your own instance: as you noted, it’d take about 5 minutes to defederate it and thus remove all the bots.
You want to put a handful on every server you can, because then your bots have to be manually rooted out by individual admins, or the federation between instances gets so broken there’s no value in the platform.
And to stand up more instances, you have to bear the cost of running the servers yourself, which isn’t prohibitive, but it’s more than using bots via stolen/infected proxies (and shit like Hola that gives you a “free VPN” at the cost of your computer becoming an exit node they then resell).
Also, I’m suspicious that it’s not ‘spam bots’ in the traditional sense since what’s the point of making thousands of bots but then barely using them to spam anyone? My tinfoil hat makes me think this is a little more complicated, though I have zero evidence other than my native paranoia.
> Also, I’m suspicious that it’s not ‘spam bots’ in the traditional sense since what’s the point of making thousands of bots but then barely using them to spam anyone?
This is Twitter and web-forum spam 101: you establish a bunch of accounts while there are very few controls, then you start burning them over time, since you get maybe one shot to mass-spam with each of them before they get banned.
It’s always about following the money for spammers/malware/etc. authors: there’s (usually) a commercial incentive they’re pushing towards.
The bot is evolving and adapting to countermeasures and becoming “smarter” which means some human somewhere is investing time and effort in doing this, which means there’s some incentive.
That said, I doubt it’s strictly commercial: the Lemmy user base is really small and probably not worth much, since if you’re here you’re almost certainly not on the part of the bell curve that falls for the usual spambot monetization schemes (double-your-money, fake reviews, affiliate links, astroturfing).
I’d wager it’s more about the ability to be disruptive than the ability to extract money from the users you can target, so like, your average 16-year-old internet trolls.
… How many comments would each of 5M bot accounts need to make to overflow an i32 db key … I also think it looks as if someone is testing disruptive stuff. It may be kids playing, or it may be the chatbot army in preparation.
I’m not a Postgres expert, but a quick look at the pgsql limits shows the default integer key type is signed 32-bit, so the ceiling is about 2.1 billion (4 billion would be the unsigned case).
Soooo 5 million users would need to make… 430 posts? ish? I mean, certainly doable if nobody caught that it was happening until it was well underway.
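For what it’s worth, a signed 32-bit key (Rust’s i32, Postgres’s default integer) tops out at 2^31 − 1 ≈ 2.1 billion, so the back-of-the-envelope works out to roughly 430 posts per bot:

```python
# Max value of a signed 32-bit key (Postgres `integer` / Rust `i32`).
I32_MAX = 2**31 - 1          # 2_147_483_647

bots = 5_000_000             # hypothetical bot-account count from the thread
posts_per_bot = I32_MAX // bots
print(posts_per_bot)         # -> 429, so ~430 posts each exhausts the keyspace
```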
> Why are these bot operators going through the hassle of joining existing instances
I wonder if there’s already a “the bots are from Reddit” conspiracy :D
I really see no point in these actions. I mean, seriously, why would you want to just harm something open?
Detecting and blocking whole instances with many bots is somewhat trivial. Detecting and blocking some number of bots in an instance with 10k users, with an ever-growing number of human users, is much harder.
- It’s 5
Oh cool, we’re back to early-2000s solutions to forum sign-up bots…
Can’t wait for all the direct message spam to follow.
A small LLM will easily crack that anyway, so applications are useless. /s
I think a reasonable approach would be to include little JavaScript mini-games. “Score 50 or higher!” with no instructions provided.
edit: using a server-side rendered canvas/logic, so no cheating. Damn, this is probably a million-dollar idea.
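Half-joking aside, the “server-side logic, so no cheating” part could work roughly like this: the server runs the game and computes the score itself, then hands the client a signed pass token only when the threshold is met, so a forged “score=9999” request has nothing to attach to. A hypothetical sketch (the names and HMAC scheme are illustrative assumptions, not anything Lemmy implements):

```python
# Hypothetical sketch: server-side scoring with a signed signup pass.
import hashlib
import hmac
import secrets
from typing import Optional

SECRET = secrets.token_bytes(32)  # per-deployment signing key (assumption)

def issue_pass(session_id: str, score: int, threshold: int = 50) -> Optional[str]:
    """Called only by the server-side game loop after IT computed the score."""
    if score < threshold:
        return None  # didn't beat the game, no signup pass
    msg = f"{session_id}:{score}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_pass(session_id: str, score: int, token: str) -> bool:
    """Signup endpoint verifies the token before accepting an application."""
    expected = issue_pass(session_id, score)
    return expected is not None and hmac.compare_digest(expected, token)
```

Since the client never sees SECRET, it can replay its own token but can’t mint one for a score it didn’t earn.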
Can you share some of the generic messages in the applications so we can compare?
Here are mine; according to the admin chat, others have gotten similar ones.
However, these bots will adapt, as you’d expect LLMs to do, so the messages will change depending on the registration text.
That’s incredibly helpful, thank you. Do you have email verification enabled on your instance?
I had it turned off today as a test, but I just enabled it (registrations were disabled over the past week or so). I guess I’ll see tomorrow if it makes a difference.