This is an archive article published on August 30, 2023

Google’s DeepMind has a smart new method to identify AI-generated images

DeepMind has created a watermarking method that embeds changes in individual pixels of AI-generated images.

Synthetic images are becoming harder to distinguish from reality. (Image: DeepMind)

With AI-generated images reaching a point where they’re hard to distinguish from reality, methods for quickly identifying them are more important than ever. Google’s solution is to add a watermark to images so subtle that it’s invisible to the human eye.

Developed by DeepMind, Google’s AI arm, SynthID is a watermarking method that embeds changes in individual pixels in such a way that only computers can pick them up.
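DeepMind has not published the details of how SynthID’s embedding works. As a rough illustration of the general idea of a pixel-level, machine-readable watermark, the sketch below (plain Python with numpy, not DeepMind’s method) hides a few bits in the least significant bits of pixel values; unlike SynthID, such a naive mark would not survive compression or heavy editing.

```python
import numpy as np

# Illustrative only: SynthID's actual embedding scheme is not public.
# This sketch hides a short bit pattern in the least significant bits of
# pixel values -- an "invisible" pixel-level watermark that a program can
# read back but the eye cannot see.

def embed_bits(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the least significant bit of the first len(bits) pixels."""
    flat = pixels.astype(np.uint8).flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b        # clear the LSB, then set it to the bit
    return flat.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n: int) -> list[int]:
    """Read the first `n` embedded bits back out."""
    flat = pixels.astype(np.uint8).flatten()
    return [int(p & 1) for p in flat[:n]]

# Demo on a random 8-bit grayscale "image"
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_bits(image, mark)

assert extract_bits(stamped, len(mark)) == mark
# The per-pixel change is at most 1/255 -- far below what the eye can notice.
print(np.abs(stamped.astype(int) - image.astype(int)).max())  # -> 0 or 1
```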

Traditional watermarks are typically added to the corners of images or overlaid on top to show ownership or to make them trickier to use without permission. However, these can be cropped out or erased, making them unsuitable for identifying AI-generated images. Even hashing – the technique that assigns digital fingerprints to known abusive videos so they can be quickly identified and removed – breaks down easily when the clip is manipulated.
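For context, the “digital fingerprint” here is typically a perceptual hash. Below is a minimal sketch, assuming numpy and not modelled on any particular platform’s system, of an average hash: shrink the image to an 8x8 grid, threshold it against its mean, and compare bit patterns. Even a modest, uneven edit flips bits in the fingerprint, which is why manipulated copies can slip past hash matching.

```python
import numpy as np

# A minimal "average hash" fingerprint, assuming an 8-bit grayscale image
# already downscaled to 8x8. Real hash-matching systems are far more
# sophisticated; this only illustrates the principle and its fragility.

def average_hash(small: np.ndarray) -> int:
    """64-bit fingerprint: 1 where a pixel is above the image mean."""
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(8, 8)).astype(float)

# A mild, non-uniform edit (darken one half) is enough to flip hash bits,
# so the manipulated copy no longer matches the stored fingerprint exactly.
edited = original.copy()
edited[:, :4] *= 0.6

print(hamming(average_hash(original), average_hash(edited)))  # usually > 0
```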


SynthID aims to solve this problem by attaching an invisible watermark to images that stays put even when properties like size, contrast, and colour are altered. These images can then be run through Google’s detection software, which flags them as AI-generated even after editing.

“We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colours, and saving with various lossy compression schemes — most commonly used for JPEGs,” reads DeepMind’s press release.
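To make the robustness claim concrete, here is a hypothetical test harness in the spirit of what the release describes: apply common edits (resizing, contrast changes, blurring, JPEG compression) with Pillow and re-run a detector on each variant. The `detect_watermark` function is a placeholder; Google has not published SynthID’s detector.

```python
import io
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

# Hypothetical harness for the kind of robustness testing the article
# describes: re-run a detector after common edits and check whether the
# watermark still reads. `detect_watermark` is a stand-in, not SynthID.

def detect_watermark(img: Image.Image) -> bool:
    """Placeholder detector; a real one would recover the embedded signal."""
    return True  # assume detection, for illustration only

def edited_variants(img: Image.Image):
    """Yield (name, image) pairs for edits a robust watermark should survive."""
    yield "resized", img.resize((img.width // 2, img.height // 2))
    yield "contrast", ImageEnhance.Contrast(img).enhance(1.5)
    yield "blurred", img.filter(ImageFilter.GaussianBlur(radius=1))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=60)   # lossy compression
    yield "jpeg_q60", Image.open(io.BytesIO(buf.getvalue()))

original = Image.fromarray(
    np.random.default_rng(2).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
)
for name, variant in edited_variants(original):
    print(name, "watermark detected:", detect_watermark(variant))
```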

However, DeepMind warns that SynthID is not foolproof against extreme image manipulation. The tool is currently in its experimental launch stage and the company is still testing its robustness.

AI image generators have hit the mainstream, with tools like Midjourney already clocking tens of millions of users. Google has its own model, called Imagen, though it has not yet been made available to the public. These models are trained on massive collections of images gathered from across the internet, raising ethical and legal questions about the originality and ownership of generated images.


Google is part of a voluntary agreement with other AI companies, such as Microsoft and Amazon, to watermark some of their AI-generated content.
