The White House wants to ‘cryptographically verify’ videos of Joe Biden so viewers don’t mistake them for AI deepfakes

Biden’s AI advisor Ben Buchanan said a method of clearly verifying White House releases is “in the works.”

177 points

Digital signatures as a means of non-repudiation are exactly how this should be done. Any official docs or releases should be signed and easily verifiable by anyone.

83 points

Maybe deepfakes are enough of a scare that this becomes standard practice, and protects encryption from getting government backdoors.

22 points

Hey, congresscritters didn’t give a shit about robocalls till they were the ones getting robocalled.

We had a do not call list within a year and a half.

That’s the secret, make it affect them personally.

17 points

Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?

Linking an article is also OK :)

24 points

Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):

  • Cryptographic hash: An arbitrary amount of data (like a video file) is used to create a “hash”—a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different and you’ll know the data was altered.

  • Public key cryptography: A pair of keys are created, one of which can only encrypt data (but can’t decrypt its own output), and the other, “public” key can only decrypt data that was encrypted by the first key. Users (like the White House) can post their public key on their website; then if a subsequent message purporting to come from that user can be decrypted using their public key, it proves it came from them.
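A minimal sketch of the hash half in Python, using only the standard library’s `hashlib`; the byte string here is a hypothetical stand-in for a real video file:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

video = b"official press briefing footage"   # stand-in for real video bytes
original_digest = file_hash(video)

# Flip a single bit: the digest changes completely (avalanche effect),
# so any alteration to the file is detectable.
tampered = bytes([video[0] ^ 0x01]) + video[1:]
assert file_hash(tampered) != original_digest
```

Anyone holding the published digest can recompute it from their downloaded copy and compare.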

9 points

a shorter, (effectively) unique text string

A note on this. There are other videos that will hash to the same value as a legitimate video. Finding one that is coherent is extraordinarily difficult. Maybe a state actor could do it?

But for practical purposes, it’ll do the job. Hell, if a doctored video with the same hash comes out, the White House could just say no, we published this one, and that alone would be remarkable.

2 points

Public key cryptography would involve signatures, not encryption, here.
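To make that concrete, here is a toy “sign the hash with the private key” sketch using textbook RSA with tiny primes. This is purely illustrative; real systems use vetted libraries (e.g. Ed25519), proper padding, and large keys:

```python
import hashlib

# Toy textbook-RSA keypair (tiny primes, illustration only).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(message: bytes) -> int:
    # Hash the message, then apply the private key to the digest.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Apply the public key to the signature and compare to the digest.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"official statement"
sig = sign(msg)
assert verify(msg, sig)                  # genuine signature checks out
assert not verify(msg, (sig + 1) % n)    # altered signature is rejected
```

The private key never leaves the signer; anyone with the public half `(n, e)` can check the signature.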

12 points

The best way this could be handled is a green check mark near the video. You could click on it and it would give you all the metadata of the video (location, time, source, etc.) along with a digital signature (which would look like a random string of text). Click that and your browser would show you the chain of trust: where the signature came from, that it’s valid, probably even the manufacturer of the equipment it was recorded on.

7 points

Just make sure the check mark is outside the video.

4 points

The issue is making that green check mark hard to fake for bad actors. HTTPS works because it is verified by the browser itself, outside the display area of the page. Unless all sites begin relying on a media player baked into the browser itself, if the verification even appears to be part of the webpage, it could be faked.

2 points

Do not show a checkmark by default! This is why cryptographers kept telling browsers to de-emphasize the lock icon on TLS (HTTPS) websites. You want to display the claimed author, and whether or not you were able to verify the keypair’s authenticity.

6 points

It would potentially be associated with a law stating that you must not misrepresent a “verified” UI element like a check mark. Whilst companies could technically add a verified mark wherever they like, the law would prevent that - at least for US companies.

It may work in the same way as hardware certifications: I believe HDMI has a certification standard whereby cables and devices must be manufactured to certain specifications to bear the HDMI logo, and the HDMI logo is trademarked, so using it without permission is illegal. It doesn’t stop cheap knockoffs, but it means that if you buy things bearing the HDMI mark in stores in most US-aligned countries, they’re going to work.

8 points

There’s already some kind of legal structure for what you’re talking about: trademark. It’s called “I’m Joe Biden and I approve this message.”

If you’re talking about HDCP you can break that with an HDMI splitter so IDK.

5 points

For the average end-user, it would look like “https”. You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution, and you could learn more with a click.

In practice, I see some challenges. You could already go to the source via HTTPS, e.g. whitehouse.gov, and verify it that way. An additional benefit exists only if you can verify media that have been re-uploaded elsewhere. Now the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but that it was really signed by the right institution.
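That “signed by the right institution” check boils down to pinning the identities you trust. A toy allowlist sketch in Python (the domain names are illustrative):

```python
# Exact-match allowlist of trusted signer identities. Lookalike domains
# such as "whitehouse.gov.ru" are simply different strings, so they fail.
TRUSTED_SIGNERS = {"whitehouse.gov"}

def is_trusted(signer: str) -> bool:
    return signer.strip().lower() in TRUSTED_SIGNERS

assert is_trusted("whitehouse.gov")
assert not is_trusted("whitehouse.gov.ru")
assert not is_trusted("whitehouse.gov.evil.example")
```

In practice this pinning is what a browser or player would do on the user’s behalf, so the user never compares strings themselves.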

3 points

As someone points out above, this just gives them the power to not authenticate real videos that make them look bad…

3 points

Adobe is actually one of the leading actors in this field, take a look at the Content Authenticity Initiative (https://contentauthenticity.org/)

Like the other person said, it’s based on cryptographic hashing and signing. Basically the standard would embed metadata into the image.

2 points

It needs some kind of handler, but we mostly have those in place. A web browser could be the handler, for instance. A web browser has the green dot in the upper left telling you a page is secure, that HTTPS is on and valid. This could work like that: the browser verifies the video and displays a green or red dot in the corner, and the user can mouse over it or tap on it to see who it’s verified to be from. But it’s up to the user to mouse over it and check whether it says whitehouse.gov or dr-evil-mwahahaha.biz.

1 point

Probably you’d notice a bit of extra time posting for the signature to be added, but that’s about it; the responsibility for verifying the signature would fall to the owners of the social media site. In the circumstances where someone asks for a verification, basically imagine it as a libel case on fast forward: you file a claim saying “I never said that,” they check signatures, they shrug and press the delete button, erasing the post, crossposts, and (if it’s really good) screencap posts and those crossposts of the thing you did not say but that is still being falsely attributed to your account or person.

It basically gives absolute control of a person’s own image and voice to themself. Unless a piece of media is proven to have been made with that person’s consent, or by that person themself, it can be wiped from the internet, no trouble.

Where it comes to second party posters, news agencies and such, it’d be more complicated but more or less the same, with the added step that a news agency may be required to provide some supporting evidence that what they said is not some kind of misrepresentation or such as the offended party filing the takedown might be trying to insist for the sake of their public image.

Of course there could still be a YouTube “Stats for Nerds”-esque addin to the options tab on a given post that allows you to sign-check it against the account it’s attributing something to, and a verified account system could be developed that adds a layer of signing that specifically identifies a published account, like say for prominent news reporters/politicians/cultural leaders/celebrities, that get into their own feed so you can look at them or not depending on how ya be feelin’ that particular scroll session.

1 point

TL;DR: one day the user will see an overlay or notification that shows an image/movie is verified as from a known source. No extra software required.

Honestly, I can see this working great in future web browsers. Much like the padlock in the URL bar, we could see something on images that are verified. The image could display a padlock in the lower-left corner or something, along with the name of the source, demonstrating that it’s a securely verified asset. “Normal” images would be unaffected. The big problem is how to put something on the page that cannot be faked by other means.

It’s a little more complicated for software like phone apps for X or Facebook, but doable. The problem is that those products must choose to add this feature. Hopefully, losing reputation to being swamped with unverifiable media will be motivation enough to do so.

The underlying verification process is complex, but should be similar to existing technology (e.g. GPG). The key is that images and movies typically contain a “scratch pad” area in the file for miscellaneous stuff (metadata). This is where the image’s author can add a cryptographic signature for the file itself. The user would never even know it’s there.
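As a sketch of that metadata idea: real formats like MP4 or PNG have their own metadata containers, so a JSON record stands in for them here, and the bare digest stands in for a true private-key signature (a real scheme would sign the digest, not just store it):

```python
import hashlib
import json

def make_metadata(media: bytes, author: str) -> str:
    """Build a metadata record for the file. A real scheme would sign the
    digest with the author's private key; the bare SHA-256 digest here is
    only a stand-in."""
    return json.dumps({
        "author": author,
        "sha256": hashlib.sha256(media).hexdigest(),
    })

def check_metadata(media: bytes, metadata: str) -> bool:
    """Recompute the digest and compare it to the embedded one."""
    record = json.loads(metadata)
    return record["sha256"] == hashlib.sha256(media).hexdigest()

clip = b"\x00\x00\x00\x18ftypmp42..."        # pretend MP4 bytes
meta = make_metadata(clip, "whitehouse.gov")
assert check_metadata(clip, meta)             # untouched file verifies
assert not check_metadata(clip + b"!", meta)  # edited file does not
```

The player would run `check_metadata` silently and only surface the result in the UI.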

7 points

I wouldn’t say a signature exactly, because that ensures a video hasn’t been altered in any way: not re-encoded, resized, cropped, trimmed, etc. Platforms almost always do some of these things to videos, even if it’s not noticeable to the end user.

There are perceptual hashes, but I’m not sure if they work in a way that covers all those things, or whether they’re secure hashes. I would assume not.

Perhaps platforms would read the metadata in a video for a signature and have to serve the video entirely unaltered if it’s there?
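For intuition, here is a toy “average hash” (the simplest kind of perceptual hash) over a flat list of grayscale values. Real perceptual hashes (aHash/pHash/dHash) work on resized real images, and none of them are cryptographically secure:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# A tiny 4x4 "image", flattened row by row.
img = [200, 200, 50, 50,
       200, 200, 50, 50,
       200, 200, 50, 50,
       200, 200, 50, 50]

# Uniformly brightening every pixel (as a lossy re-encode might) leaves
# the hash unchanged -- unlike a cryptographic hash, where any change
# to the bytes scrambles the digest.
brighter = [p + 5 for p in img]
assert average_hash(img) == average_hash(brighter)
```

That robustness to re-encoding is exactly why they survive platform processing, and also why they are easy to collide deliberately.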

11 points

You don’t need to bother with cryptographically verifying downstream videos, only the source video needs to be able to be cryptographically verified. That way you have an unedited, untampered cut that can be verified to be factually accurate to the broadcast.

The White House could serve the video themselves if they so wanted to. Just use something similar to PGP for signature validation and voila. Studios can still do all the editing, cutting, etc - it shouldn’t be up to the end user to do the footwork on this, just for the studios to provide a kind of ‘chain of custody’ - they can point to the original verification video for anyone to compare to; in order to make sure alterations are things such as simple cuts, and not anything more than that.

4 points

You don’t even need to cryptographically verify in that case, because you already have a trusted authority: the White House. If the video is on the White House website, it’s trusted with no cryptography needed.

The technical solutions only come into play when you’re trying to modify the video and still accurately show that it’s sourced from something verifiable.

Heck, you could even have a standard where, if a video carries a signature, editing software adds the signature of the original, a canonical immutable link to the file, and timestamps for any cuts to the video. That way you (and by “you” I mean anyone; likely hidden from the user) can load up a video and be able to link to the canonical version to verify.

In this case, verification using ML would actually be much easier, because you (servers) just download the canonical video, cut it as per the metadata, and compare what’s there to what’s in the current video.
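That comparison step can be sketched directly. Byte ranges stand in for timestamped cuts, and the canonical bytes stand in for the downloaded original:

```python
import hashlib

def apply_cuts(original: bytes, cuts):
    """Rebuild the derived clip from the canonical file plus a cut list
    (kept byte ranges, standing in for kept time ranges)."""
    return b"".join(original[start:end] for start, end in cuts)

def matches_canonical(derived: bytes, original: bytes, cuts) -> bool:
    """Does the derived clip hash to the same value as the canonical
    original with the declared cuts applied?"""
    expected = hashlib.sha256(apply_cuts(original, cuts)).digest()
    return hashlib.sha256(derived).digest() == expected

canonical = b"frame0 frame1 frame2 frame3 frame4 "
cuts = [(0, 7), (14, 28)]                  # keep frame0 and frames 2-3
clip = canonical[0:7] + canonical[14:28]   # what the editor published
assert matches_canonical(clip, canonical, cuts)
assert not matches_canonical(clip + b"x", canonical, cuts)
```

A real implementation would cut on decoded frames rather than raw bytes, but the shape of the check is the same.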

3 points

Rather than using a hash of the video data, you could just include within the video the timestamp of when it was originally posted, encrypted with the White House’s private key.

1 point

That doesn’t prove that the data outside the timestamp is unmodified

1 point

Apple’s scrapped on-device CSAM scanning was based on perceptual hashes.

The first collision demo breaking them showed up in hours with images that looked glitched. After just a week the newest demos produced flawless images with collisions against known perceptual hash values.

In theory you could create some ML-ish compact learning algorithm and use the compressed model as a perceptual hash, but I’m not convinced this can be secure enough unless it’s allowed to be large enough, as in some % of the original’s file size.

1 point

You can definitely produce perceptual hashes that collide, but really you’re not just talking about a collision; you’re talking about a collision that’s also useful in subverting an election, AND that’s been generated using ML, which is something that’s still kinda shaky to start with.

79 points

I have said for years that all media that needs to be verifiable needs to be signed. GPG signing, let’s gooo

36 points

Very few people understand why a GPG signature is reliable or how to check it. Malicious actors will add a “GPG Signed” watermark to their fake videos and call it a day, and 90% of victims will believe it.

7 points

As soon as VLC adds the gpg sig feature, it’s over.

11 points

No, it’s not. People don’t use VLC to watch misinformation videos. They see it on Reddit, Facebook, YouTube, or TikTok.

4 points

…how popular do you think VLC is among those who don’t understand cryptographic signatures?

1 point

And that will in no way be the first step on the road to VLC deciding which videos it allows you to play…

4 points

Yeah, but all it takes is proving it doesn’t have the right signature, and you can make the social media corpo take down every piece of media with that signature for that alone.

What’s even better is that you can attack entities that try to maliciously let people get away with misusing their look and faking being signed, for failing to defend their IP - basically declaring you intend to take them to court to public-domainify literally everything that makes them any money at all.

If billionaires were willing to allow disinformation as a service, then they wouldn’t have gone to war against news as a service to make it profitable to begin with.

22 points

I just mentioned this in another comment tonight; cryptographic verification has existed for years but basically no one has adopted it for anything. Some people still seem to think pasting an image of your handwriting on a document is “signing” a document somehow.

4 points

It doesn’t help that in a lot of cases, this is actually accepted by a shit ton of important institutions that should be better, but aren’t.

2 points

Still trying to get people to sign their emails lol

1 point

I mean, part of it is PGP is the exact opposite of streamlined and you’ve got to be NSA levels of paranoid to bother with it.

2 points

The average Joe won’t know what any of what you just said means. Hell, the Joe in the OP doesn’t know what any of you just said means. There’s no way (IMO) of simultaneously creating a cryptographic assurance and having it be accessible to the layman.

1 point

There is, but only if you can implement a layer of abstraction and get them to trust that layer of abstraction.

Few laymen understand why Bitcoin is secure. They just trust that their wallet software works and because they were told by smarter people that it is secure.

Few laymen understand why TLS is secure. They just trust that their browser tells them it is secure.

Few laymen understand why biometric authentication on their phone apps is secure. They just trust that their device tells them it is secure.

3 points

Each of those perfectly illustrates the problem with adding in a layer of abstraction though:

Bitcoin is a perfect example of the problem. Since almost nobody understands how it works, they keep their coins in an exchange instead of a wallet and have completely defeated the point of cryptocurrency in the first place by reintroducing blind trust into the system.

Similarly, the TLS ecosystem is problematic. Even though it is theoretically supposed to verify the identity of the other party, most people aren’t savvy enough to check the name on the cert and instead just trust that if their browser doesn’t warn them, they must be okay. Blind trust once again is introduced alongside the abstraction layers needed to make cryptography palatable to the masses.

Lastly, people have put so much trust in the face scanning biometrics to wake their phone that they don’t realize they may have given their face to a facial recognition company who will use it to help bring about the cyberpunk dystopia that we are all moving toward.

56 points

Huh. They actually do something right for once instead of spending years trying to ban AI tools. I’m pleasantly surprised.

9 points

Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.

The ONLY means to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively trying to determine whether content was generated by a human (proving a negative).

-5 points

I mean banning use cases is deffo fair game, generating kiddy porn should be treated as just as heinous as making it the “traditional” way IMO

5 points

Yikes! The implication is that it does not matter if a child was victimized. It’s “heinous”, not because of a child’s suffering, but because… ?

-5 points

Man imagine trying to make “ethical child rape content” a thing. What were the lolicons not doing it for ya anymore?

As for how it’s exactly as heinous, it’s the sexual objectification of a child, it doesn’t matter if it’s a real child or not, the mere existence of the material itself is an act of normalization and validation of wanting to rape children.

Being around at all contributes to the harm of every child victimised by a viewer of that material.

3 points

Idk, making CP where a child is raped vs making CP where no children are involved seem on very different levels of bad to me.

Both utterly repulsive, but certainly not exactly the same.

One has a non-consenting child being abused, a child that will likely carry the scars of that for a long time, the other doesn’t. One is worse than the other.

E: do the downvoters like… not care about child sexual assault/rape or something? Raping a child and taking pictures of it is very obviously worse than putting parameters into an AI image generator. Both are vile. One is worse. Saying they’re equally bad is attributing zero harm to the actual assaulting children part.

-8 points

Man imagine trying to make the case for Ethical Child Rape Material.

You are not going to get anywhere with this line of discussion, stop now before you say something that deservedly puts you on a watchlist.

47 points

Yeah, good luck getting the general public to understand what “cryptographically verified” videos mean.

21 points

The general public doesn’t have to understand anything about how it works as long as they get a clear “verified by …” statement in the UI.

4 points

The problem is that even if you reveal the video as fake, the feeling it reinforces in the viewer stays with them.

“Sure, that was fake, but the fact that it seems believable tells you everything you need to know.”

3 points

“Herd immunity” comes into play here. If those people keep getting dismissed by most other people because the video isn’t signed they’ll give up and follow the crowd. Culture is incredibly powerful.

17 points

It could work the same way the padlock icon worked for SSL sites in browsers back in the day. The video player checks the signature and displays the trusted icon.

3 points

It needs to focus on showing who published it, not the icon

14 points

Democrats will want cryptographically verified videos; Republicans will be happy with a stamp that has Trump’s face on it.

2 points

I mean, how is anyone going to cryptographically verify a video? You either have an icon in the video itself or displayed near it by the site, meaning nothing; fakers just copy that in theirs. Alternatively you have to sign or make file hashes for each permutation of the video file sent out. At that point, how are normal people actually going to verify? At best they’re trusting the video player of whatever site they’re on to be truthful when it says that it’s verified.

Saying they want to do this is one thing, but as far as I’m aware, we don’t have a solution that accounts for the rampant re-use of presidential videos in news and secondary reporting either.

I have a terrible feeling that this would just be wasted effort beyond basic signing of the video file uploaded on the official government website, which really doesn’t solve the problem for anyone who can’t or won’t verify the hash on their end.

Maybe some sort of visual- and audio-based hash, like MusicBrainz IDs for songs, that depends not on the file itself but on the sound of it. Then the government runs a server kind of like a PGP key server, and websites could integrate functionality to verify against it. But at the end of the day it still works out to an “I swear we’re legit, guys” stamp for anyone not technical enough to verify independently themselves.

I guess your post just seemed silly when the end result of this for anyone is effectively the equivalent of your “signed by Trump” image, unless the public magically gets serious about downloading and verifying everything themselves independently.

Fuck Trump, but there are much better ways to shit on king cheeto than pretending the average populace is anything but average based purely on political alignment.

You have to realize that to the average user, any site serving videos seems as trustworthy as YouTube. Average internet literacy is absolutely fucking abysmal.

4 points

People aren’t going to do it, the platforms that 95% of people use (Facebook, Tik Tok, YouTube, Instagram) will have to add the functionality to their video players/posts. That’s the only way anything like this could be implemented by the 2024 US election.

2 points

In the end, people will realise they cannot trust any media served to them. But it’s just going to take time for people to realise... and while they are still blindly consuming it, they will be taken advantage of.

If it goes down this road, social media could be completely undermined. It could become the downfall of these platforms and do everyone a favour by giving them their lives back after endless doomscrolling for years.

1 point

Do it basically the same way TLS verification works. Sure, browsers would have to add something to the UI to support it, but claiming you can’t trust that is dumb, because we already use it to trust that the site you’re on is your bank and not some scammer.

Sure, not everyone is going to care to check, but the check being there allows people who care to reply back saying the video is faked due to X.

5 points

“Not everybody will use it and it’s not 100% perfect so let’s not try”

1 point

That’s not the point. It’s that malicious actors could easily exploit that lack of knowledge to trick users into giving fake videos more credibility.

If I were a malicious actor, I’d put the words “✅ Verified cryptographically by the White House” at the bottom of my posts and you can probably understand that the people most vulnerable to misinformation would probably believe it.

0 points

Just make it a law that if as a social media company you allow unverified videos to be posted, you don’t get safe harbour protections from libel suits for that. It would clear right up. As long as the source of trust is independent of the government or even big business, it would work and be trustworthy.

15 points

Back in the day, many rulers allowed only licensed individuals to operate printing presses. It was sometimes even required that an official should read and sign off on any text before it was allowed to be printed.

Freedom of the press originally meant precisely that this is not done.

6 points

Jesus, how did I get so old only to just now understand that press is not journalism, but literally the printing press in ‘Freedom of the press’.

1 point

You understand that there is a difference between being not permitted to produce/distribute material and being accountable for libel, yes?

“Freedom of the press” doesn’t mean they should be able to print damaging falsehood without repercussion.

3 points

As long as the source of trust is independent of the government or even big business, it would work and be trustworthy

That sounds like wishful thinking

45 points

I don’t blame them for wanting to, but this won’t work. Anyone who would be swayed by such a deepfake won’t believe the verification if it is offered.

33 points

Agreed and I still think there is value in doing it.

-9 points

I honestly do not see the value here. Barring maybe a small minority, anyone who would believe a deepfake about Biden would probably also not believe the verification and anyone who wouldn’t would probably believe the administration when they said it was fake.

The value of the technology in general? Sure. I can see it having practical applications. Just not in this case.

24 points

It helps journalists, etc., when files carry digital signatures verifying who is attesting to them. If the WH publishes its own public key for signing released media, it’s easy to verify whether you have originals or not.

4 points

Sure, the grandparents that get all their news via Facebook might see a fake Biden video and eat it up like all the other hearsay they internalize.

But, if they’re like my parents and have the local network news on half the damn time, at least the typical mainstream network news won’t be showing the forged videos. Maybe they’ll even report a fact check on it?!?

And yeah, many of them will just take it as evidence that the mainstream media is part of the conspiracy. That’s a given.

2 points

If a cryptographic claim/validation is provided, then anyone refuting the claims can be seen to be a bad-faith actor. Voters are one dimension of that problem, but mainstream media being able to validate election videos is super important both domestically and internationally, as the global community needs to see efforts being undertaken to preserve free and fair elections. This is especially true given the consequences if America’s enemies are seen to have been able to steer the election.

11 points

I don’t think that’s what this is for. I think this is for reasonable people, as well as for other governments.

Besides, passwords can be phished or socially engineered, and some people use “abc123.” Does that mean we should get rid of password auth?

4 points

Deepfakes could get better. And if they do, a lot more people will start to get fooled


Technology

!technology@lemmy.world
