The steady stream of people telling me that the Santa moderation bot is going to delete anyone who gets downvoted or disagrees with the group continues unabated.

Here’s an olive branch: you’ve got a point. It’s just a black box, and I juggle the parameters of some secret process to ban people who got some downvotes; I can understand how that comes across as toxic. I might or might not be lying about taking careful time to look over its judgements and make sure the impact is more positive than negative, but at the end of the day, it doesn’t matter. You still have to trust my intentions and trust the bot to make good decisions, and handing that over to an automated system rarely works out well.

To me, delegating the moderation of the community to the segment of that community that’s trusted and consistently upvoted by the rest of us is better than giving it to a handful of people who wield unilateral power according to arbitrary rules. The question is simply whether this algorithm is actually doing that delegation effectively, or whether it’s just banhammering anyone who gets a couple of downvotes. When I look at the bot’s judgements, I like them most of the time, and I’m confident it’s doing the first thing almost all of the time.
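To give a rough idea of what I mean by delegation, here is a toy sketch of one way a trust rank could be computed from the vote graph, in the spirit of PageRank: trust flows from users who are already trusted toward the people they upvote, and downvotes push it back down. This is purely an illustration; the data layout, damping constant, and update rule are simplified stand-ins, not the bot’s actual code.

```python
from collections import defaultdict

def trust_ranks(users, votes, iterations=20, damping=0.85):
    """Toy, PageRank-style trust computation over votes.

    `votes` is a list of (voter, target, value) tuples with value +1 or -1,
    and every voter and target is assumed to appear in `users`. Each voter
    spreads their current trust evenly over the votes they cast, so being
    upvoted by trusted users raises your rank more than being upvoted by
    strangers. All names and constants here are illustrative.
    """
    out_degree = defaultdict(int)
    for voter, _, _ in votes:
        out_degree[voter] += 1

    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new_rank = {u: (1.0 - damping) / len(users) for u in users}
        for voter, target, value in votes:
            # Only positive trust propagates; a distrusted voter's votes count for nothing.
            share = max(rank[voter], 0.0) / out_degree[voter]
            new_rank[target] += damping * value * share
        rank = new_rank
    return rank

# Users whose rank ends up below some threshold would not be permitted to post.
```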

In private talks with other moderators, I’ve gone into a lot of detail about specific users and gone back and forth about judgements. I also do a ton of checking behind the scenes. I don’t want to do that publicly. It would be deeply informative to post a list of the “top ten” and “bottom ten” users and go into detail about why the low-ranked users ended up where they are, but that’s probably not a good idea.

What I would like to do is share that information on some level, so that people can see what’s going on instead of just taking my word that everything’s fine. It’s tough, because I can’t break down every level of detail without invading all kinds of people’s privacy. That said, I do think there’s a way to open up the process so people can see, and give input on, what’s going on.

One happy medium would be to have the bot post a spot-check automatically about once a week: it could pick one random user who’s right on the borderline and post a couple of the worst comments they made. That borderline is what I’m usually working with when I mess around with the parameters. Some comments are clearly toxic and have no business anywhere. Some are clearly free speech, and even if they’re getting downvotes, they deserve to be heard. And some sit on the borderline between the two. My goal is to set the parameters so that the borderline rank value for a ban matches up with the users who are on that borderline.
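To make that concrete, here is a toy sketch of what the weekly spot-check could look like. The threshold, margin, and field names are placeholders for whatever the bot actually uses; the point is just the shape of the idea.

```python
import random

# Placeholder values: a ban takes effect when a user's rank drops below
# BAN_THRESHOLD, and "borderline" means the rank sits within a small margin
# of that line, on either side.
BAN_THRESHOLD = 0.0
BORDERLINE_MARGIN = 0.05

def weekly_spot_check(users, comments_by_user):
    """Pick one random borderline user and pull out their two worst comments."""
    borderline = [u for u in users
                  if abs(u["rank"] - BAN_THRESHOLD) <= BORDERLINE_MARGIN]
    if not borderline:
        return None
    user = random.choice(borderline)
    # "Worst" here just means lowest-scored, so readers can judge for
    # themselves whether the borderline call looks reasonable.
    worst = sorted(comments_by_user[user["id"]], key=lambda c: c["score"])[:2]
    return {"user": user["id"], "rank": user["rank"], "worst_comments": worst}
```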

I can see upsides and downsides to posting that publicly. What do people think, though? What would you want to see in order to form an opinion of this whole approach?


I played around with possibilities for a while, and did some more fixing and tweaking of the algorithm and visualization tools. Here’s one way I think it could work. Once a week, the bot could post a breakdown of three random users who are permitted to post, and three random users who aren’t permitted to post. Right now, that breakdown would be:

Permitted to post: [anonymized chart for three randomly selected users]

Not permitted to post: [anonymized chart for three randomly selected users]

That means anyone who wants to can check up on how it’s making its decisions. In addition, anyone who wants an explanation of their own user’s standing can ask, and I can provide that.
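For anyone wondering how the weekly pick would work mechanically, it’s nothing fancier than sampling a few users from each side of the line. Here is a toy sketch, with the `banned` flag and dict layout standing in for whatever the bot actually stores.

```python
import random

def weekly_breakdown(users, sample_size=3):
    """Sample users from each side of the ban line for the weekly post."""
    permitted = [u for u in users if not u["banned"]]
    not_permitted = [u for u in users if u["banned"]]
    return {
        "permitted": random.sample(permitted, min(sample_size, len(permitted))),
        "not_permitted": random.sample(not_permitted, min(sample_size, len(not_permitted))),
    }
```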

Those charts are anonymized. I’ll send the users in question to some of the admins to see what they think. I think it’s okay to post a few users, as long as it’s random and not repetitive. I don’t think it would come across as singling anyone out or making them uncomfortable, but I’m curious what other people think.

It might be fairly easy to de-anonymize users: not everyone posts in every thread, and identifying someone from which threads they post in and roughly what kind of response their posts got isn’t impossible.

On the other hand, it doesn’t reveal information we’ve decided should be treated as special, like who is voting on the comments and posts. When posting a controversial Twitter screenshot of a non-public figure, it’s internet etiquette and good form to blur the person’s name, even though the tweet can be found via text search. That raises the effort needed to attack the user a little, but it also communicates through action that trolling is discouraged, which I think is the most effective deterrent.

The measures you’re taking seem to be in line with that internet etiquette. Especially considering the relatively small exposure your project is getting (at this point it seems to be just us talking in this thread, for example), the precautions you have in place should be enough. You might revisit them if you get complaints of harassment or when your project develops a much larger audience.

