Google DeepMind Introduces 'SynthID' To Create Watermarks On AI-Generated Content

AI is becoming so advanced that the line between the synthetic and the real is blurring.

For this reason, tech companies are racing to find ways to combat misinformation. Google, which has long been at the forefront of tech advancement and AI development, has become the first tech giant to test automated watermarks for labeling AI-generated images.

In a blog post, Google describes the technology, called 'SynthID', as a digital watermark invisible to the human eye, meaning that people shouldn't be able to edit it out once it's there.

With it, Google hopes to revolutionize how people deal with AI-generated content on the web and beyond.

Created by Google DeepMind, the tool is being introduced in beta and released to only a limited number of Vertex AI customers using Imagen, Google's text-to-image diffusion model comparable to Midjourney and OpenAI's DALL·E and DALL·E 2.

The technology works by embedding a digital watermark directly into an image's pixels.

This makes it practically invisible, meaning the watermark will not compromise the aesthetics of the image. Still, the tiny change it introduces is readily detectable by automated identification tools.

"This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification," the release explains.

And because no human eye can see the watermark, it is virtually impossible to edit out, preserving the integrity of the image.
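Google hasn't disclosed exactly how SynthID encodes its signal, beyond saying it uses two deep learning models, trained together, for embedding and detection. For intuition only, the sketch below shows a classic least-significant-bit (LSB) watermark, the simplest form of invisible, pixel-level marking. This is not SynthID's method: an LSB mark is wiped out by filters and recompression, while SynthID is designed to survive such edits.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel.

    Changing only the lowest bit shifts each 0-255 value by at most 1,
    a difference the human eye cannot perceive.
    """
    flat = pixels.flatten()  # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bit pattern back out of the pixel data."""
    return pixels.flatten()[:n_bits] & 1

# Demo on a tiny 4x4 grayscale "image" with a 16-bit mark.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

stamped = embed_lsb(image, mark)
assert np.array_equal(extract_lsb(stamped, 16), mark)
# Every pixel differs from the original by at most 1 out of 255.
assert np.abs(stamped.astype(int) - image.astype(int)).max() <= 1
```

The contrast highlights SynthID's harder problem: its watermark has to survive exactly the kinds of edits that would destroy a naive scheme like this one.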

Through SynthID, users should be able to communicate that their images are AI-generated, even if they're later edited or shared by others, and misinformation can be curbed at least to some degree.

SynthID provides three levels of confidence: "Digital watermark detected," "Digital watermark not detected," and "Digital watermark possibly detected."

Whereas the first two levels simply mean the image is likely or unlikely to be AI-generated, the last one can be described as "Could be generated. Treat with caution."
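Google hasn't published how the detector arrives at these levels, but conceptually they behave like thresholds on a confidence score. A minimal sketch, assuming a hypothetical detector score between 0 and 1 with made-up cutoffs:

```python
def synthid_style_verdict(score: float) -> str:
    """Map a hypothetical detector confidence score (0.0-1.0) to
    SynthID's three reported levels. The 0.8/0.4 thresholds are
    illustrative assumptions, not Google's published values."""
    if score >= 0.8:
        return "Digital watermark detected"
    if score <= 0.4:
        return "Digital watermark not detected"
    return "Digital watermark possibly detected"  # treat with caution

print(synthid_style_verdict(0.95))  # Digital watermark detected
print(synthid_style_verdict(0.55))  # Digital watermark possibly detected
```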

DeepMind emphasizes that SynthID is not perfect, especially regarding "extreme image manipulations." Among other reasons, this is why none of the three levels purports complete certainty.

"While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally," DeepMind’s release reads.

"Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation."

Misinformation driven by AI-generated images has become fairly prolific, and at times it's easy to mistake AI-generated images spreading on the web and social media for real photos.

With SynthID, Google DeepMind wants to help differentiate the two, and make sure people are not tricked into believing what they shouldn't.

The watermark is invisible to the human eye, but remains detectable even after modifications like adding filters or changing colors and brightness. (Credit: Google DeepMind)

DeepMind emphasizes that being able to identify AI-generated content is crucial to empowering individuals with the knowledge that they're interacting with generated media.

As such, this technology should be considered a significant step in the ongoing battle against the spread of misinformation.

While the technology does represent an innovation, SynthID also comes with a notable limitation.

Google explicitly says the technology has been used and tested only on images created by Imagen.

Further reading: This 'PhotoGuard' Can Protect Images From Generative AI Manipulations, Researcher Said

Published: 29/08/2023