Digital privacy and security engineers at the University of Wisconsin–Madison have found that the artificial intelligence-based systems that TikTok and Instagram use to extract personal and demographic data from user images can misclassify aspects of the images. This could lead to mistakes in age verification systems or introduce other errors and biases into platforms that use these types of systems for digital services.
Led by Kassem Fawaz, an associate professor of electrical and computer engineering at UW–Madison, the researchers studied the two platforms’ mobile apps to understand what types of information their machine learning vision models collect about users from their photographs — and importantly, whether the models accurately recognize demographic differences and age.
@tardigrada yeah, I’m getting closer and closer to the point where these MFs won’t leave my browser
I know we all want to enjoy things and not live in fear like rats, but keeping your pics online these days is not a good move. Though I guess if they’re already out there, it’s probably too late anyway?
What, I’m shocked, the tech bros didn’t think about sociological issues before “innovating”?
The model often classified people ages 0 to 2 as being between 12 and 18 years old.
I guess they’re just not training on baby pictures, then? This seems like it should be the easiest distinction to make.
Doesn’t seem like there’s any information on the purpose of this analysis. Google Photos has been doing face recognition and other classification for a long time, and it’s genuinely useful because it lets you sort your photo collection by person. It also categorizes pet photos and does a halfway-decent job of distinguishing one pet from another. I’d genuinely appreciate similar functionality in the open-source photo apps I use. This seems like a natural fit for Instagram. Not sure about TikTok, but honestly, I’m too old and ornery to understand how people actually use TikTok.