This is tilting at windmills. If someone has physical possession of a piece of hardware, you should assume that it’s been compromised down to the silicon, no matter what clever tricks they’ve tried to stymie hackers with. Also, the analog hole will always exist. Just generate a deepfake and then take a picture of it.
You have it backwards. This is not to stop fake photos, despite the awful headline. It’s an attempt to provide a chain of custody and attestation: “I trust Tom only takes real photos, and I can see this thing came from Tom.”
And if the credentials get published to a suitable public timestamped database, you can also say “we know this photo existed in this form at this specific time.” One of the examples mentioned in the article is the hospital explosion in Gaza: Israel posted video of Hamas launching rockets to try to prove that Hamas did it, and the lack of a reliable timestamp on the video made it somewhat useless. If the video had been taken with something that published certificates within minutes of capture, that would have settled the question.
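The mechanism here is simpler than it sounds. A minimal sketch of the idea (my own illustration, not the actual protocol from the article): publish a hash of the image to a public append-only log at capture time, and anyone can later check that the file existed before some cutoff.

```python
# Toy "publish a certificate at capture time" sketch. The public_log
# list is a stand-in for a real public timestamped database.
import hashlib
import time

public_log = []  # stand-in for a public, append-only database

def publish(image_bytes: bytes) -> dict:
    """Record the image's hash and a timestamp in the public log."""
    entry = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "published_at": time.time(),
    }
    public_log.append(entry)
    return entry

def existed_by(image_bytes: bytes, deadline: float) -> bool:
    """Could this exact file have been produced after the deadline? No,
    if a matching hash was already in the log before it."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(e["sha256"] == digest and e["published_at"] <= deadline
               for e in public_log)

photo = b"raw sensor data from the disputed footage"
publish(photo)

assert existed_by(photo, time.time() + 1)
assert not existed_by(b"a fake produced afterwards", time.time() + 1)
```

The log never has to store the image itself, only the hash, so publishing “within minutes” is cheap.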
That doesn’t really work. If the private key is leaked, you’re left in a quandary of “Well who knew the private key at this timestamp?” and it becomes a guessing game.
Especially in the scenario you posit. Nation-state actors with deep pockets in the middle of a war will find ways to bend hardware to their will. Blindly trusting a record just because it’s timestamped is foolish.
And Tom’s camera gets hacked by an evil maid and then where are you? Exactly. This is snake oil.
Unless the evil maid is also capable of time travel there’s no way for them to mess with the timestamps of things once they’ve been published. She could take some pictures with the camera but not tamper with ones that have already been taken.
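To make the “no time travel” point concrete, here’s a toy hash chain (my illustration, assuming the published records chain together the way append-only logs usually do): each entry commits to the one before it, so swapping out an old picture changes every later link and is immediately detectable.

```python
# Toy hash chain: altering any earlier payload changes every
# subsequent hash, so already-published history can't be rewritten
# without the mismatch being visible.
import hashlib

def link(prev_hash: str, payload: bytes) -> str:
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(payloads):
    chain, h = [], "genesis"
    for p in payloads:
        h = link(h, p)
        chain.append(h)
    return chain

photos = [b"monday.jpg", b"tuesday.jpg", b"wednesday.jpg"]
original = build_chain(photos)

# The evil maid swaps Monday's photo after the fact:
tampered = build_chain([b"forged.jpg", b"tuesday.jpg", b"wednesday.jpg"])

assert tampered[0] != original[0]    # mismatch starts at the swap...
assert tampered[-1] != original[-1]  # ...and propagates to the tip
```

She can still add *new* entries going forward, which is exactly the distinction being made here: future photos, yes; past ones, no.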
If only I knew how to create my own firmware for Leica… then I could call the same crypto chip and sign any picture I’d like. (Oh wait! There’s a GitHub repo for hacking Leica M8 firmware!)
Ah, DRM for your photos.
Great.
Not at all. From what I understand of this article, it wouldn’t stop you from doing anything you wanted with the image. It just generates a signed certificate at the moment the picture is taken that authenticates that that particular image existed at that particular time. You can copy the image if you like.
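The capture-time step can be sketched in a few lines. Assumptions up front: the real camera uses an asymmetric key in a secure chip; I’m using HMAC with a hypothetical device secret as a stand-in just to show the shape of sign-then-verify, and the certificate fields are my own invention.

```python
# Capture-time attestation sketch. DEVICE_KEY stands in for the
# camera's real (asymmetric) signing key -- this is an illustration
# of the flow, not the actual scheme.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"secret baked into the camera's crypto chip"  # hypothetical

def sign_at_capture(image_bytes: bytes) -> dict:
    cert = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": time.time(),
    }
    payload = json.dumps(cert, sort_keys=True).encode()
    cert["signature"] = hmac.new(DEVICE_KEY, payload,
                                 hashlib.sha256).hexdigest()
    return cert

def verify(image_bytes: bytes, cert: dict) -> bool:
    body = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cert["signature"])
            and body["sha256"] == hashlib.sha256(image_bytes).hexdigest())

photo = b"pixels straight off the sensor"
cert = sign_at_capture(photo)

assert verify(photo, cert)                  # copies still verify
assert not verify(photo + b"edited", cert)  # edits no longer match
```

Note that nothing here restricts what you do with the file, which is why this isn’t DRM: verification is something a *viewer* can opt into, not a lock on the image.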
Forgive the cynicism, but: free, for now.
What happens when the company suddenly decides to lock the service behind a subscription paywall?
Do you still maintain rights to your photos when you use this service?
I have no idea what you’re proposing be “locked behind a subscription paywall.” The certificate exists and is public from the moment the picture is taken. It can be validated by anyone from that point forward; otherwise it would be pointless. Post the timestamp and the public key on a public blockchain and there’s nothing that can be “taken away” after that.
Your rights to your photos come from your copyright on them. This service shouldn’t affect that. Read the EULA, don’t sign your rights away, and there’s no way they can be taken.
I suppose if they are running some kind of identity-verification service they could cut you off from that and prevent future photos you take from being signed after that, but that doesn’t change the past.
This isn’t DRM. I can’t believe you have so many upvotes for such blatant FUD.
I think this is probably great for specific forensic work and similar but the problem with deepfakes isn’t that people can’t determine their veracity. The problem is that people see a picture online and don’t bother to even check. We have news sources that care about being accurate and trustworthy yet people just choose to ignore them and believe what they want.
“that it’s a true representation of what someone saw.”
Someone please correct me if I’m wrong but photography has never ever ever been a “true” representation of what you took a picture of.
Photography is right up there with statistics in its potential for “true” information to be used to draw misleading or false conclusions. I predict that a picture with this technology may carry an unearned authority: people will point to the built-in cryptographic signature to say “see? the picture is real” when the deception was actually carried out by the framing or timing of the picture, as has been done often throughout history.
You’re talking about “the whole truth”. If the whole is true, then all of the parts are true, so photographing only a subset of the truth (framing) is still true. If a series of events are true, then each event is true, so taking a picture at a certain time (timing) is also true.
Photos capture real photons that were present at real scenes and turn them into grids of pixels. Real photographs are all “true”. Photoshop and AI don’t need photons and can generate pixels from nothing.
That’s what is being said.
Nah, lying by omission can still tell a totally wrong narrative. Sometimes it has to be the whole truth to be the truth.
As I understand it, it’s a digital signature scheme where the raw image is signed at the camera, and modifications in compliant software are signed as well. So it’s not so much “this picture is 100% real, no backsies”. Nor is it “We know all the things done to this picture”, as I doubt people who modify these photos want us to know what they are modifying.
So it’s more like “This picture has been modified, like all pictures are, but we can prove how many times it was touched, and who touched it”. They might even be able to prove when all that stuff happened.
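That “how many times it was touched, and who touched it” idea can be sketched as an edit history where each compliant tool appends a record committing to the previous one. This is my reading of the scheme, with invented field names, not the actual wire format:

```python
# Toy edit-history chain: each compliant editor appends a record that
# names itself, hashes the resulting image, and commits to the
# previous record. Field names are invented for illustration.
import hashlib
import json

def record(history, editor, image_bytes):
    prev = history[-1]["entry_hash"] if history else "capture"
    entry = {
        "editor": editor,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    history.append(entry)
    return history

h = []
record(h, "camera", b"raw capture")
record(h, "editor A (crop)", b"cropped pixels")
record(h, "editor B (tone curve)", b"toned pixels")

assert len(h) == 3                         # we can count the touches
assert h[1]["prev"] == h[0]["entry_hash"]  # and each step chains back
```

The chain proves *that* and *by whom* the image was touched without revealing *what* the edit was, which matches the point above: editors probably don’t want the diffs themselves published.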