Scientists Hide Light Codes To Expose Fake AI Videos

Image by Justin Lane, from Unsplash


Cornell researchers have developed a new technique to help fact-checkers detect fake or manipulated videos by embedding secret watermarks in light.

In a rush? Here are the quick facts:

  • Cornell researchers developed light-based watermarks to detect fake or altered videos.
  • The method hides secret codes in nearly invisible lighting fluctuations.
  • Watermarks work regardless of the camera used to record footage.

The researchers explain that the method embeds secret codes as nearly invisible fluctuations in the lighting at important events or key locations, such as press conferences, or even throughout entire buildings.

These fluctuations, unnoticed by the human eye, are captured in any video filmed under the special lighting, which can be programmed into computer screens, photography lamps, or existing built-in fixtures.

“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science at Cornell, who conceived the idea.

“Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real,” Davis added.

Traditional watermarking techniques modify video files directly, requiring cooperation from the camera or AI model used to create them. Davis and his team bypassed this limitation by embedding the code in the lighting itself, ensuring any real video of the subject contains the hidden watermark, no matter who records it.

Each coded light produces a low-fidelity, time-stamped “code video” of the scene under slightly different lighting. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos,” Davis explained.

“And if someone tries to generate fake video with AI, the resulting code videos just look like random variations,” Davis added.
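The idea Davis describes can be pictured with a toy sketch. The code below is an illustrative assumption, not the Cornell implementation: a secret key seeds a pseudorandom code that modulates scene brightness by a tiny amount, so genuine footage correlates strongly with the code, while footage generated without the coded light does not.

```python
import numpy as np

rng = np.random.default_rng(42)            # the secret key seeds the code

T, H, W = 240, 8, 8                        # frames and a tiny toy resolution
code = rng.choice([-1.0, 1.0], size=T)     # one code symbol per frame
scene = rng.uniform(0.3, 0.7, size=(H, W)) # static toy scene

# "Coded light": brightness modulated by a near-invisible 1% fluctuation.
amplitude = 0.01
video = scene[None, :, :] * (1.0 + amplitude * code[:, None, None])

def verify(clip, code):
    """Correlate per-frame mean brightness with the secret code."""
    signal = clip.mean(axis=(1, 2))
    signal = signal - signal.mean()
    return float(np.corrcoef(signal, code)[0, 1])

genuine_score = verify(video, code)        # close to 1.0

# A "forged" clip, e.g. AI-generated, lacks the coded fluctuations,
# so its correlation with the secret code is near zero.
fake = rng.uniform(0.3, 0.7, size=(T, H, W))
fake_score = verify(fake, code)
```

In this toy, a verifier holding the key checks whether the recorded brightness track matches the broadcast code; a clip produced without the coded lighting decorrelates, which is the behavior Davis describes for AI-generated video.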

The project leader Peter Michael explained that the team created imperceptible light codes by drawing on human perception research. The system uses normal lighting “noise” patterns to make detection challenging without the secret key. Programmable lights can be coded with software, while older lamps can use a small chip the size of a postage stamp.

The team successfully implemented up to three separate codes for different lights within the same scene, which makes the watermarks significantly harder to forge. The system also proved effective outdoors and across a range of skin tones.
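The multi-light setup can be sketched in the same toy style (again a hypothetical illustration, not the team's code): each light carries its own key-seeded code, the scene's brightness sums their tiny fluctuations, and a verifier checks the recording against each key independently.

```python
import numpy as np

T = 300
keys = [1, 2, 3]  # one secret key per light (hypothetical values)
codes = [np.random.default_rng(k).choice([-1.0, 1.0], size=T) for k in keys]

base, amp = 0.5, 0.01
# Frame brightness: each light adds its own near-invisible coded fluctuation.
brightness = base + amp * sum(codes)

def check(signal, code):
    """Correlation between a brightness track and one light's code."""
    s = signal - signal.mean()
    return float(np.corrcoef(s, code)[0, 1])

# Each legitimate key recovers its own code from the shared recording.
scores = [check(brightness, c) for c in codes]

# A code from a key that was never used shows no correlation.
wrong_code = np.random.default_rng(99).choice([-1.0, 1.0], size=T)
wrong_score = check(brightness, wrong_code)
```

Because a forger would have to reproduce every code simultaneously without knowing any of the keys, each additional light multiplies the difficulty, which is the intuition behind running several coded lights in one scene.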

Still, Davis warns the battle against misinformation is far from over. “This is an important ongoing problem,” he said. “It’s not going to go away, and in fact, it’s only going to get harder.”
