115 points

Using AI to flag footage for review by a person seems like a good time-saving practice. I would bet that without some kind of automation like this, a lot of footage would just go unreviewed. This is far better than waiting for someone to lodge a complaint first, since you could conceivably identify problem behaviors and fix them before someone gets hurt.

The use of AI-based solutions to examine body-cam footage, however, is getting pushback from police unions, which are pressuring departments not to make the findings public in order to shield potentially problematic officers.

According to the article, the unions are against this because they want to shield badly behaving officers. That tells me the AI review is working!

26 points

I bet if they made all footage publicly available, watchdog-style groups would be reviewing the shit out of it. But yeah, AI might help too, maybe.

12 points

While I agree wholeheartedly, that is unrealistic because of the law. You can’t reveal certain suspects’ identities because, for certain crimes like pedophilia, people will attempt to execute the suspect before anyone knows whether or not they actually did it.

6 points

I mean, police footage would be privacy-invading as hell for victims and even just bystanders.

1 point

A charge being filed against someone is already public record in the majority of areas in the United States, as well as any court records resulting from those charges.

12 points

Exactly, and this also contradicts the “few bad apples” defense. If there were only a few bad apples, the police unions should be bending over backwards to eradicate them sooner rather than later, to protect the many good apples, not to mention improve the long-suffering reputation of the police.

Instead, they’re doing the exact opposite, making it clear to anyone paying attention that it’s mostly, if not entirely, bad apples.

12 points

You’ve got it backwards.

The phrase is “a few bad apples spoil the bunch”. It means everyone around the bad apples is also bad, because they’re all around them and do nothing about it. It’s not a defense; it’s literally describing what your comment says.

9 points

I think that poster is right in this context. The phrase gets abbreviated and used as a defense that there are just “a few bad apples”, and then they drop or ignore the rest of the phrase.

10 points

The whole police-and-public-accountability thing kinda makes sense, but I don’t think that means we should be pushing AI just because the “bad guys” don’t like it.

AI is full of holes and unknowns, and relying on it to do stuff like this sets a dangerous precedent IMO. You absolutely need someone reviewing its output, yes. But those reviewers aren’t going to catch everything either, and starting with this means it will come to be leaned on and will replace thorough review by people.

I think something low-stakes and unachievable without the tools might make sense - like AIs reading through game chat or Twitter posts to identify issues where it’s impossible to have someone read everything, and if some slip by, oh well, it’s a post on the internet.

But with police behavior? Those are people with the authority to ruin people’s lives or kill them. I do NOT trust AI to catch every problematic behavior and this stuff ABSOLUTELY should be done by people. I’d be okay with it as an aid, in theory, but once it’s doing any “aiding” it’s also approving some behavior. It can’t really be telling anyone where TO look without implying where NOT to look, and that gives it some authority, even as an “aid”. If it’s not making decisions, it’s not saving anyone any time.

Idk, I’m all for the public accountability and stuff like that here, but having AI make decisions around the behavior of people with so much fucking power is horrifying to me.


An AI art website I use illustrates your point perfectly with its attempt at automatic content filtering. Tons of innocent images get flagged, meanwhile problem content often gets through and has to be whacked manually. Relying on AI to catch everything, without false positives, is a recipe for disaster.

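To make that tradeoff concrete, here’s a minimal sketch in Python with made-up numbers (the `confusion_counts` helper and every score and label below are hypothetical, not taken from any real moderation system). Lowering the flagging threshold drives the misses toward zero but swamps reviewers with false positives, and raising it does the reverse:

```python
def confusion_counts(scores, labels, threshold):
    """Count outcomes for a flag-if-score-exceeds-threshold filter."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    return tp, fp, fn, tn

# Hypothetical model scores and ground truth (True = actual problem content).
scores = [0.95, 0.80, 0.75, 0.40, 0.30, 0.20, 0.90, 0.10]
labels = [True, False, True, True, False, False, False, False]

for threshold in (0.25, 0.50, 0.85):
    tp, fp, fn, tn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold:.2f}  "
          f"false positives={fp} (innocent items flagged)  "
          f"false negatives={fn} (problem items missed)")
```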
1 point

Still better than what we have now, where the footage usually isn’t reviewed at all.

2 points

Man, you said everything I wanted to in less than half the words. Shoulda just linked to your comment lol

97 points

Don’t they always tell us

‘you have nothing to worry about if you have nothing to hide’

Uh huh… how’s that feel now, government employee?

43 points

Yep.

Y’all like surveillance so much, so let’s put all government employees under a camera all the time. Of all the places I find cameras offensive, that one not so much.

-27 points

I sure hope you get your daily dose of enjoying other people’s misery, watching the substitute teacher crying in the teachers’ lounge.

4 points

Cameras in a teachers’ lounge would be ridiculous, but in principle cameras in classrooms make a lot of sense. Teachers are public officials who exercise power over others, and as such they need to be accountable for their actions. Cameras only seem mean because teachers are treated so badly in other ways.

26 points

“A police state keeps everyone safe”

“No wait, not like that”

50 points

If the police unions don’t like it, then it’s certainly going to be a positive step towards public safety.

33 points

I have a sneaking suspicion that if police in places like America start using AI to review bodycam footage, they’ll just “pay” someone to train their AI so that it always says the officer was in the right when killing innocent civilians, and the footage never gets flagged. That, or they’ll do something equally shady and suspicious.

23 points

These algorithms already have a comical bias towards the folks contracting their use.

Case in point, the UK Home Office recently contracted with an AI firm to rapidly parse through large backlogs of digital information.

The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:

  • an algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed;

  • a facial recognition tool used by the Metropolitan police which has been found to make more mistakes recognising black faces than white ones under certain settings;

  • an algorithm used by the Home Office to flag up sham marriages which has been disproportionately selecting people of certain nationalities.

Monopoly was a lie. You’re never going to get that Bank Error In Your Favor. It doesn’t happen. The House (or the Home Office, in this case) always wins when these digital tools are employed, because the money for the tool is predicated on these agencies clipping benefits and extorting additional fines from the public at large.

3 points

Bank errors in your favour do happen, or at least they did - one happened to me maybe twenty-five years ago. I was broke and went to the bank to pay in my last £30-something of cash to cover an outgoing bill. I stopped at the cash machine outside my bank to check that my balance was now sufficient, and found that the cashier had added an extra four zeros to the figure I’d deposited. I was rich! I was also in my early 20s and not thinking too clearly, I guess, because my immediate response was to rush home for my passport, intending to go abroad, open an account to transfer the funds into, and never come back. I checked my balance again at another machine closer to home, and the bank had already caught and corrected their mistake. It took them maybe thirty minutes.

After a bit it occurred to me that I was lucky really, because I didn’t know what the fuck I was doing; the funds would have been traced very easily and I’d have been in deep shit.

But yeah, it’s anecdotal, but shit like that did happen. I assume it’s rarer these days, as fewer humans are involved in the system and fewer people use cash.

24 points

Happy or unhappy, I feel like body-cam footage is too important a form of evidence to have it reviewed by AI.

47 points

AI can’t be the last word on what gets marked as misconduct, etc. However, using it as a screening tool for potentially problematic moments in a law enforcement officer’s encounters would be useful. Screening those hours upon hours of video is an enormous task, and probably prohibitively expensive for humans to work through alone.

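For what it’s worth, the screening being described could be as simple as scoring fixed-length chunks of footage and queuing the highest-scoring ones for a human. A minimal sketch, assuming a hypothetical `score_segment` model (no real bodycam product or API is implied):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    video_id: str
    start_s: int    # segment start, in seconds
    end_s: int      # segment end, in seconds
    score: float    # model's estimate that a human should take a look

def score_segment(video_id: str, start_s: int, end_s: int) -> float:
    """Hypothetical stand-in for whatever model scores the footage."""
    raise NotImplementedError

def triage(video_id: str, duration_s: int, window_s: int = 60,
           review_threshold: float = 0.5) -> list[Segment]:
    """Split footage into fixed windows, score each, and return the
    ones worth a reviewer's time, highest score first. Everything
    below the threshold still exists and can be audited later; it
    just isn't prioritized."""
    flagged = []
    for start in range(0, duration_s, window_s):
        end = min(start + window_s, duration_s)
        s = score_segment(video_id, start, end)
        if s >= review_threshold:
            flagged.append(Segment(video_id, start, end, s))
    return sorted(flagged, key=lambda seg: seg.score, reverse=True)
```

The point of the sort is exactly the cost argument above: human review time goes to the segments most likely to matter instead of being spread uniformly across hours of nothing.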
6 points

You’d need to be certain the false negative rate is near zero, though, or important events could be missed, and with AI it’s nearly impossible to say that with certainty.

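To be fair, “false negative rate near zero” is at least something you can calibrate for: on footage humans have already labeled, pick the highest flagging threshold that keeps the miss rate under a target. A minimal sketch with a hypothetical helper and target (and the commenter’s caveat stands: a rate measured on past footage says nothing certain about future footage):

```python
def pick_threshold(scores, labels, max_fnr=0.01):
    """Return the highest flag-if-score-at-or-above-threshold cutoff
    whose false-negative rate on this labeled validation set is at
    most max_fnr. `labels` are True for segments a human marked as
    needing review."""
    positive_scores = sorted(s for s, y in zip(scores, labels) if y)
    if not positive_scores:
        raise ValueError("validation set has no positive examples")
    # Allowing k misses means the cutoff can sit no higher than the
    # (k+1)-th lowest score among the true positives.
    allowed_misses = int(max_fnr * len(positive_scores))
    return positive_scores[allowed_misses]
```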
26 points

Important events are already being missed.

9 points

So we should just drop the good in pursuit of the perfect?

AI is just an additional tool, to be applied based on its efficacy. The better the tooling gets, the more we can trust its results. But no one…

no one

…is expecting AI to be perfect, or to be trusted 100%.

18 points

Maybe if it’s just being used to flag potential areas of interest for review by a human? I’m open to the idea as long as there’s definite accountability and care.

Which, returning to the real world, we know is a fat chance.

10 points

It’s just flagging for human review. The dataset is too large for humans alone, and it can be made more objective than human review. As soon as I hear that anything upsets police unions, I know it’s gotta be good. Support this.

7 points

One thing AI is generally pretty good at is identifying what is in a video. So at the very least, you don’t have to waste money paying someone to watch hundreds of hours of video of donuts.

1 point

Is this the kind of thing anyone could be happy about?

Cops reviewing themselves, we know how that works out.

  1. Cops being reviewed by shitty AI.
  2. ???
  3. ???
  4. wtf
20 points

Then again, when the police union doesn’t like something, it makes me wonder what it’s exposing about them…

9 points

Absolutely, anything the police union is against is by default a good thing for actual humans.

