MBFC BTFO: https://mediabiasdetector.seas.upenn.edu/
The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum…
Assuming this isn’t just evidence that the methodology sucks or that the sample is crap because they picked a single right-wing crank site to serve as the functional outgroup, it seems to point to a distinct lack of liveliness. The debates are all over lurid speculation about the diets and religious practices of immigrant communities.
The discourse gap between the two groups has narrowed so much that it wouldn’t surprise me if that’s how it looks on a chart. They’re no longer arguing about whether they should or shouldn’t do things, merely how.
I don’t think that’s the problem. The problem is that an AI can’t tell truth from falsehood, or notice when things are being omitted, over-emphasized, etc. The only thing it can actually evaluate is tone, and the flat, factual affect that all news reporting tends to use is gonna read as unbiased. It’d only register as biased if they started throwing out insults, piled on positive or negative adjectives, or leaned on other kinds of semantically evident bias. You’d basically need an AGI to actually evaluate how biased an article is. Not to mention that attempting to quantify that bias assumes there even is a ground truth to compare against, which might be true for the natural sciences but is almost always false for social reality.
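To make the tone point concrete, here’s a minimal sketch: a made-up bag-of-words tone scorer run on two invented snippets (this is not how the Penn detector actually works, just an illustration of what tone-only scoring can and can’t see). Both snippets use flat, factual affect, so they score identically, even though one quietly drops the exculpatory fact.

```python
# Hypothetical tone-only "bias" scorer. The lexicon and both article
# snippets are invented for illustration.

POSITIVE = {"heroic", "thriving", "generous", "vibrant", "welcome"}
NEGATIVE = {"invasion", "crisis", "dangerous", "swarm", "threat"}

def tone_score(text: str) -> float:
    """Return (positive - negative) word counts per 100 tokens."""
    tokens = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 100.0 * (pos - neg) / len(tokens)

# Both snippets are written in neutral, factual register; the second
# omits the independent review, but a tone scorer can't see omission.
article_a = (
    "The city council voted 7-2 to expand the refugee resettlement program. "
    "An independent review found no link between the program and crime rates."
)
article_b = (
    "The city council voted 7-2 to expand the refugee resettlement program. "
    "Residents at the meeting raised questions about crime rates."
)

print(tone_score(article_a))  # 0.0 -> reads as "unbiased"
print(tone_score(article_b))  # 0.0 -> also "unbiased", despite the omission
```

Real classifiers are fancier than a word list, but the failure mode is the same: selection and omission leave no semantic fingerprint for the model to score.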