Researchers at ETH Zurich have developed a new sensor technology designed to combat the rise of AI-generated deepfakes.
As manipulated photos and videos become increasingly sophisticated, it is becoming harder to distinguish between authentic content and digital forgeries — such as the famous AI-generated image of the Pope in a designer down jacket. This technology aims to restore trust by verifying the origin of digital media at the source – on the actual camera sensor.
Created by Pablo Xavier using Midjourney. (Image: Wikimedia Commons)
The problem with current metadata
Currently, camera metadata acts like an automatic digital note card attached to every photo you take. The moment you press the shutter, the camera software creates hidden text information, usually called EXIF data, and embeds it directly into the image file itself.
This data automatically records fundamental technical details, such as the date and time, the specific camera model used, and key settings like the aperture and ISO.
While invisible when simply viewing the photo, this built-in information is easily read by software and devices, allowing both consumer phone apps and professional cataloging programs to sort, organize, and understand exactly how and when a specific image was captured.
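To illustrate how easily software can read this embedded information, here is a minimal sketch using the Python Pillow library; the file name photo.jpg is just a placeholder.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Open the image and read its embedded EXIF block (empty if none is present).
image = Image.open("photo.jpg")  # placeholder file name
exif = image.getexif()

# EXIF stores numeric tag IDs; map them to readable names and print each entry.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```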
Unfortunately, with the right know-how, this metadata can easily be altered or forged. That is exactly why new technologies, such as cryptographic signing, are being developed.
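As a quick illustration of how little effort such tampering takes, the following sketch rewrites two EXIF fields with Pillow. The tag IDs are standard EXIF (0x0132 is DateTime, 0x010F is Make), but the file names and values are purely illustrative.

```python
from PIL import Image

image = Image.open("photo.jpg")          # placeholder file name
exif = image.getexif()

# 0x0132 is the standard EXIF "DateTime" tag; 0x010F is "Make" (the manufacturer).
exif[0x0132] = "1999:01:01 00:00:00"     # forged capture time
exif[0x010F] = "NotARealCamera"          # forged camera maker

# Save a copy that now carries the rewritten metadata.
image.save("photo_forged.jpg", exif=exif)
```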
The new approach shifts from relying on unsecured text to a secure, hardware-locked digital seal, making it virtually impossible to tamper with the content or its history without being detected.
The new technology by ETH Zurich
The system works by integrating cryptographic signing directly into a camera’s sensor chip. Instead of adding a digital watermark after a photo is taken, the chip creates a unique digital signature at the exact moment the light hits the sensor.
This signature acts as a 'digital seal' (see the conceptual sketch after this list) that proves:
• The data originated from a specific device.
• The exact time and date of capture.
• That the content has not been altered since it was recorded.
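ETH Zurich has not published the chip's internal design, so the following is only a conceptual hash-and-sign sketch in Python using the cryptography package. It assumes the sensor holds a device-private ECDSA key and signs the raw readout together with a capture timestamp; the key, data layout, and function names are illustrative, not the researchers' actual design.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# In the real system the private key would be fixed inside the sensor chip;
# here we simply generate one so the sketch is runnable.
device_private_key = ec.generate_private_key(ec.SECP256R1())

def sign_capture(raw_pixel_data: bytes) -> dict:
    """Sign the raw sensor readout together with the capture timestamp."""
    timestamp = str(int(time.time())).encode()
    message = raw_pixel_data + timestamp
    signature = device_private_key.sign(message, ec.ECDSA(hashes.SHA256()))
    return {"timestamp": timestamp, "signature": signature}

# "sensor data" stands in for the raw readout of a single frame.
record = sign_capture(b"sensor data")
```

Because the private key never leaves the chip, a forger cannot produce a valid signature for manipulated pixel data.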
This hardware-based approach makes mass-producing deepfakes significantly harder, as a bad actor would need to physically hack the chip itself rather than simply using software to create a forgery.
(Graphic created using AI: Felix Franke / ETH Zurich)
How Verification Works
Because the signature is created inside the hardware, any subsequent attempt to edit the image — such as changing a face or altering a background — will leave detectable traces. These signatures can be stored on a public, immutable ledger, such as a blockchain.
This allows anyone, from social media platforms to individual journalists, to verify a file’s authenticity. By comparing the file’s data against the signature stored in the public registry, a system can instantly confirm if the image is an original recording or a tampered version.
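As a companion to the signing sketch above, this is roughly what the verification step could look like: given the device's public key (for example, looked up from the public registry) and the stored signature record, any mismatch between the file and the registered signature is detected. Again, this is an illustrative sketch, not ETH Zurich's actual protocol.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_capture(device_public_key, raw_pixel_data: bytes, record: dict) -> bool:
    """Check a file against the signature registered at capture time."""
    message = raw_pixel_data + record["timestamp"]
    try:
        device_public_key.verify(record["signature"], message,
                                 ec.ECDSA(hashes.SHA256()))
        return True    # original, untampered recording
    except InvalidSignature:
        return False   # the pixels or the timestamp were changed after capture

# Reusing the key and record from the signing sketch above:
# verify_capture(device_private_key.public_key(), b"sensor data", record)
```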
It will be interesting to see how this is implemented in practice. Will photographers trust a public blockchain with their images' provenance, and be confident that it does not let other people download or access the images themselves?
Future Implementation
While the technology is currently in the prototype stage, the researchers have filed a patent and are working on making it cost-effective for mass production. In the future, this technology could be integrated into everything from professional mirrorless and video cameras to smartphones.
Social media platforms could use these signatures to automatically flag unverified or manipulated content, helping to protect public discourse from misinformation.
You can find out more on the ETH Zurich website.
