We’ll admit we’ve kicked around this idea for years: a camera that digitally signs a picture so you could prove it hasn’t been altered, along with things like the time and place the photo was taken. Apparently, products are starting to hit the market, and Spectrum reports on a Leica that, though it will set you back nearly $10,000, can produce pictures with cryptographic signatures.
This isn’t something Leica made up. In 2019, a consortium known as the Content Authenticity Initiative set out to establish a standard for this sort of thing. The founders are no surprise: The New York Times, Adobe, and Twitter. There are 200 companies involved now, although Twitter — now X — has left.
The problem, the post notes, is that software support is limited: only a few programs recognize and process the signatures. That’ll change, of course, and we imagine that if you needed to prove the provenance of a photo in court, you’d just buy the software you needed.
We haven’t dug into the technology, but presumably keeping the private key secure will be very important. The consortium is clear that the technology is not about managing rights, and it is possible to label a picture anonymously. The signature can identify whether an image was taken with a camera or generated by AI, along with details about how it was taken. It can also detect any attempt to tamper with the image. Compliant programs can make modifications, but they will be traceable through the cryptographic record.
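As a rough sketch of the hash-then-sign flow behind this sort of scheme (not Leica’s actual implementation — all names here are made up, and a stdlib HMAC with a shared secret stands in for the asymmetric signature a real secure element would produce):

```python
import hashlib
import hmac
import json

# Stand-in for a signing key that, in a real camera, would be locked
# inside a secure element and never readable by software.
DEVICE_KEY = b"secret-held-in-camera-hardware"

def sign_record(record: dict) -> str:
    """Sign a provenance record (image hash plus capture metadata)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def capture(image: bytes, metadata: dict) -> dict:
    """Camera side: emit the image plus a signed provenance manifest."""
    record = {"image_sha256": hashlib.sha256(image).hexdigest(), **metadata}
    return {"record": record, "signature": sign_record(record)}

def verify(image: bytes, manifest: dict) -> bool:
    """Fail if either the pixels or the signed metadata were altered."""
    record = manifest["record"]
    if record["image_sha256"] != hashlib.sha256(image).hexdigest():
        return False  # pixels changed since signing
    return hmac.compare_digest(manifest["signature"], sign_record(record))

photo = b"\x89raw-sensor-bytes..."
manifest = capture(photo, {"time": "2023-11-01T12:00:00Z", "source": "camera"})
assert verify(photo, manifest)                 # untouched image checks out
assert not verify(photo + b"\xff", manifest)   # any tampering is caught
```

Changing a single byte of the image, or a single metadata field, invalidates the manifest; a compliant editor would instead append a new signed record referencing the old one, which is what makes edits traceable rather than impossible.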
Will it work? Probably. Can it be broken? We don’t know, but we wouldn’t bet that it couldn’t without a lot more reading. PDF signatures can be hacked. Our experience is that not much is truly unhackable.
All cameras should do something like this.
Gimp and Photoshop et al. should be able to add to the signature without being allowed to remove it.
It will minimize “deep fakes” in important places.
Over the next year we will see some really awful fakes showing up in news stories.
It’s not possible to do the “without being allowed to remove it” part.
You could convert the file format outside of Gimp, for instance. Depending on how the signature is embedded, it might not survive the conversion.
Gimp is open source, so a programmer could inspect the source code and compile a custom Gimp that either removes the signature or corrupts it. And since Gimp can add a signature, it can add fake signatures too, and you’ll end up with a picture with ten thousand authors.
Deepfakes will still get made whether you have a signature or not. Since there are open source AI engines for manipulating images, it’s a matter of a code edit and a recompile to create a version that doesn’t care.
I do agree that all cameras should be able to sign images, but removal prevention doesn’t make sense. The signature is there to verify that the original source image is what it says it is. If you strip it, then you can no longer verify that image, which, to me, seems to be working as intended.
yep, this ^
The signature isn’t there to prohibit image modification or deepfakes, just to guarantee that a given image wasn’t modified and is the original.
Canon did this years ago, and it was hacked. https://www.elcomsoft.co.uk/news/428.html
That doesn’t mean it couldn’t be done better of course, devices are more secure and more capable these days.
It seems like the main reason to do this is to thwart fake/AI/photoshopped images, but, if you can extract the private key (or generate a valid new one), couldn’t you just re-sign the altered image as if it was the original?
Because if you do that then the fake image won’t be signed. It doesn’t prevent people from creating or manipulating images. But it does say that this image was actually taken by this camera at this time and location (assuming it has GPS which most do) and hasn’t been altered since.
One presumably could hack around this to some extent by removing the camera sensor and injecting fake data into that and the GPS, but that’s a great deal more work than using an AI to strap someone’s face onto another picture.
If you can get the camera’s secret key and the signing algorithm, then you could sign any image, whether it was taken by the camera or not. Signing is only safe if the secret key (which is stored on the camera) is safe.
Definitely. I would expect that this is stored inside a secure element… good luck extracting anything from that. The best you could hope to do would be to make it sign other data…
That won’t stop anyone. Assuming you can figure out how everything else is programmed, and even if decapping and the like didn’t exist, you could figure out the protocol and make it sign anything.
Why not just GPG sign it?
GPS might not be available in the location the image was taken. Such as a cave, tunnel, mine, basement, battlefield or other similar locations.
They said gpg.
Even if the hardware is perfectly secure, you can just take a photo of a screen with fake image on it. Just be careful to choose plausible exposure and focus settings.
Ten thousand bucks?? Gotta just be some kind of PR hype move
Sure, it might be signed, authentic, etc., but what’s preventing the photo from being spoofed? I could set the camera time to ten years ago, take a picture of something current, and claim time travel, or forgery, or IP fraud, or any number of things.
Even if you say the time & location stamp comes from a GPS system, that’s trivially faked now.
I think we missed a golden opportunity in the most recent round of GPS satellites. They could be providing crypto-signed time & location validity stamps for exactly this purpose.
It’s not impossible to generate a cryptographically secure (as secure as you want) timestamp on an equally secure hash of the image and add it to the image metadata. The camera wouldn’t have the timeserver’s private key, just its address.
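A minimal sketch of that hash-then-timestamp idea (conceptually like RFC 3161 trusted timestamping, but heavily simplified; the server’s HMAC key here is a made-up stand-in for a real signing key, and the camera only ever handles the digest and the returned token):

```python
import hashlib
import hmac
import json
import time

# Assumed server-side secret; in a real deployment this never leaves
# the timestamping service.
SERVER_KEY = b"timeserver-private-key"

def timestamp_request(image: bytes) -> str:
    """Camera side: only the digest is sent, never the image itself."""
    return hashlib.sha256(image).hexdigest()

def issue_token(digest: str, now: float) -> dict:
    """Server side: bind the digest to a time and sign the pair."""
    payload = json.dumps({"digest": digest, "time": now}, sort_keys=True)
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "time": now, "sig": sig}

def check_token(image: bytes, token: dict) -> bool:
    """Verifier side: token must match both the image and the server key."""
    payload = json.dumps({"digest": token["digest"], "time": token["time"]},
                         sort_keys=True)
    good_sig = hmac.new(SERVER_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    return (token["digest"] == hashlib.sha256(image).hexdigest()
            and hmac.compare_digest(token["sig"], good_sig))

photo = b"raw image bytes"
token = issue_token(timestamp_request(photo), time.time())
assert check_token(photo, token)
assert not check_token(b"different image", token)
```

The point is that backdating requires forging the server’s signature, not just fiddling with the camera’s clock, which is why the camera needs only the server’s address.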
I doubt that’s what this does.
Even that depends on a trusted time server being somewhere in the picture, remaining secure and available.
There are already APIs for secure, signed time on the network. Because otherwise, I’d be backdating trades.
Not me, my evil twin.
GPS data? How could you trust that?
Given the broadcast nature of GPS there is no possible way you could prevent a replay attack.
*Faking* GPS signals is, at present, still quite a long way from trivial. I’m not talking about rooting your phone and using a location-faking tool; I’m talking about creating fake signals. Last I saw, that was commercially about $20K in gear, plus a bunch of licences and permits, if you want to buy it new.
Deepfakes are what’s trivial.
This would make creating a fake image quite a bit harder. Also, presuming there is a “chain of trust” type arrangement from the camera vendor to the camera and thence to the photographer, you could in court ask to see, or have assessed, a sampling of the photographer’s other works. If there is just the one image, that may be a little suss.
Nothing is perfect. Even if your crypto time worked, what’s to stop me building a satellite, flying up to the GPS satellites and extracting the private keys from them and then faking whatever time I wanted? Or just kidnapping the family of the dude that holds the root certs for your GPS time system?
The point of something like this is to make it more difficult than your average edgelord 12-year-old can accomplish.
You either need a “locked down tight” camera with security similar to Microsoft’s XBOX One or a way to immutably publish a digest of your data shortly after taking the photograph.
Any network-enabled camera can do the latter. Even non-networked ones can do so if they can generate a hash as a QR-code or human-readable string then show the hash on the camera’s display. From there, you can use Ye Olde Fashioned Smarte Fone(TM) to take a photo of the hash and upload it to someplace that is immutable, like a blockchain. Heck, you can probably even pay a newspaper to print the hashes as a classified ad, or even print a hash of the hashes. Printing hashes isn’t a new idea, as The New York Times did in 1995.
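The display-a-hash approach above needs the digest in a form a person (or a phone camera) can reliably read back. A small sketch of one way to do that, using only the standard library (the grouping scheme is made up for readability, not any standard):

```python
import base64
import hashlib

def readable_digest(image: bytes) -> str:
    """SHA-256 of the image, rendered as dash-grouped base32 so it can
    be printed, photographed, or read out loud without ambiguity."""
    raw = hashlib.sha256(image).digest()
    b32 = base64.b32encode(raw).decode().rstrip("=")  # drop padding
    return "-".join(b32[i:i + 4] for i in range(0, len(b32), 4))

digest = readable_digest(b"raw image bytes")
# 52 base32 characters in 13 groups of 4, e.g. "ABCD-EFGH-..."
print(digest)
```

The same string could just as easily be fed to a QR-code generator, published in a classified ad, or anchored on a blockchain; anyone who later hashes the image can check it against the published digest.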
You almost never see an unmodified photo from a professional. And that fact goes all the way back to the first images taken.
‘Professional’, same as a doctorate.
That means you’re qualified to doctor the data/image.
The usual way to show you took a picture is that you take the original raw files, or historically the film negatives, and exercise trust. They see that you have the original capture which produced the released image, and trust that you must be the original photographer, while you trust they won’t keep and use your originals without permission. Removing that need using cryptography might be interesting, but won’t always prove the things you need proven as far as fake images.
I wonder if AI could theoretically extrapolate well enough that, if the provenance of a valuable image were in doubt, even forensic inspection couldn’t tell which of two possible originals was generated and which was legitimate. While the information to actually duplicate the original isn’t present in the released image, it might be *technically possible* to produce something that is made up but looks like it could have been the original. It’d be awfully hard once people started making tools to counter this and detect false raws, but let’s ignore that. Beyond authenticity, it’d almost be pointless to even take pictures properly, because the hypothetical AI could just use some random snapshot as a base and hallucinate what it would look like if you had used a different camera or stood in a different place. So if that ever happens, maybe signing will be nice, or maybe you’ll just have to rely on trust.
Seems like anti-AI paranoia being used to sell a useless gimmick.
pgp already exists?
Everyone seems to be forgetting the photographer. One way we verify the provenance of a photograph is a person makes a claim to taking the photo. If they also have a signed photo it adds to the provenance.
They’d have a harder time spoofing deep fakes consistently.
So such a specification doesn’t end fake and deceptive imagery, it just makes it harder. Crucially, it gives legitimate photographers another tool to prove provenance.
Don’t let perfect be the enemy of the good.
How come no one has made a password-protected digital camera, like smartphones?
Smartphones hold sensitive data that is hard to extract (socket clips are super annoying). Cameras don’t hold sensitive data (unless you’re an idiot), and the data can be “extracted” just by removing the SD card.
For those interested in the spec itself (which discusses a lot of the questions above) it can be found here: https://c2pa.org/specifications/specifications/1.3/index.html
The Hacker Factor Blog recently published an article on some of the problems with this specification: https://www.hackerfactor.com/blog/index.php?/archives/1010-C2PAs-Butterfly-Effect.html
Tinfoil hat time,
It’s plausible that all imagery will be marked with some kind of identifier going forward, be it from the device that took the photo or the software that edited it, as a sort of paper trail to see where it came from, like they do with many printers to combat “counterfeiting.”
FWIW, it’s already broken: https://www.hackerfactor.com/blog/index.php?/archives/1010-C2PAs-Butterfly-Effect.html
It’s pretty spectacularly bad, actually.
Is this really so hard? Fit a TPM chip, generate a private/public key pair unique to the device, sign a SHA-256 hash of the raw image data, and add the signature to the EXIF data together with the public key. OK, there will be a performance hit, but any change to the original can be detected.
The main reason this has not been done, I think, is that the use case is a very small one, so the extra cost is not considered worth it.
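A toy version of that pipeline (per-device key, tag over the raw sensor data, result stored in an EXIF-style field). Caveats: the key here is a plain variable only so the sketch runs; in the TPM scheme it would be sealed in hardware, and since a MAC requires the verifier to share the secret, a real design would use the asymmetric signature described above instead. Field names are invented:

```python
import hashlib
import hmac

# Assumed per-device key; with a TPM this would be derived and held
# inside the chip, never readable by the host.
device_key = hashlib.sha256(b"serial:CAM-0001" + b"factory-seed").digest()

def tag_image(raw: bytes, exif: dict) -> dict:
    """Compute the tag over the raw sensor data and store it in EXIF."""
    exif = dict(exif)  # don't mutate the caller's metadata
    exif["AuthTag"] = hmac.new(device_key, raw, hashlib.sha256).hexdigest()
    return exif

def is_untouched(raw: bytes, exif: dict) -> bool:
    """True only if the pixels still match the embedded tag."""
    expected = hmac.new(device_key, raw, hashlib.sha256).hexdigest()
    return hmac.compare_digest(exif.get("AuthTag", ""), expected)

raw = bytes(range(256))                  # stand-in for raw sensor data
exif = tag_image(raw, {"Make": "Example"})
assert is_untouched(raw, exif)
edited = bytes([raw[0] ^ 1]) + raw[1:]   # flip a single bit
assert not is_untouched(edited, exif)
```

Even a single-bit edit to the raw data invalidates the tag, which is all the detection the comment asks for; the cost is one hash pass over the sensor data per shot.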
My Masters thesis was on wavelet-based image watermarking for embedded processors (e.g., cameras) for this exact purpose… to mark the date/time an image was taken, who took it (i.e., which physical camera), and allowed for determination of a pristine (non-edited) image (as well as edit locations within the image if it was edited). That was over 20 years ago… good times.
Can you not just use the camera to take a picture of a DeepFake image and then it becomes real and certified?
IIRC there were also digital media that inherently only let you add pictures, not delete or change them.
Plain old CD-Rs would do that, even natively in Windows. You could append new content until you closed (finalized) the disc. “Deleted” files were just marked as such in the directory, but the files themselves were not (and could not be) erased from the disc.
But they can be overwritten.
Why state “Twitter — now X”? Anyone who doesn’t know this already is hardly going to be the type reading HaD :rolleyes: