Google has announced it's testing a digital watermarking system, developed by its AI outfit DeepMind, that aims to identify AI-generated images by embedding changes in individual pixels: changes invisible to the human eye, but ones computers can detect and flag.
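DeepMind hasn't published SynthID's actual algorithm, so the following is only an illustration of the general principle of imperceptible pixel-level marking, using the classic least-significant-bit (LSB) technique. Note that LSB embedding is famously fragile to edits like recolouring or resizing; surviving those is precisely what SynthID claims to add.

```python
# Illustrative sketch only: NOT SynthID's method, which is proprietary.
# LSB embedding flips a pixel's lowest bit, shifting its intensity by at
# most 1 step out of 255 (invisible to a viewer, trivial for software to read).

def embed_lsb(pixels, bits):
    """Embed a bit sequence into the LSBs of a grayscale pixel list."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

# Mark a tiny 8-pixel "image" with the hypothetical tag 1,0,1,1.
image = [200, 201, 203, 198, 120, 119, 64, 65]
tag = [1, 0, 1, 1]
marked = embed_lsb(image, tag)
print(extract_lsb(marked, 4))  # the tag survives a round trip
print(max(abs(a - b) for a, b in zip(image, marked)))  # max pixel change: 1
```

Any change to those low bits (compression, contrast shifts) destroys an LSB mark, which is why production watermarks like SynthID spread the signal across the image in a learned, redundant way instead.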
It's called SynthID, which does sound rather Blade Runner-like, and it emerges at a time when the ethical questions around image manipulation are coming to the fore. It's one thing when we're talking about art and photography competitions, but AI-generated imagery's capacity for political and social disinformation is enormous, emergent, and feels barely understood. The Pope in a puffer jacket is our canary in the coal mine.
DeepMind warns the technology is not currently "foolproof against extreme image manipulation, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly." More pointedly, the tool is for now only being used on images generated by Google's own image generation software, Imagen.
"These approaches [to identifying AI-generated material] need to be robust and adaptable as generative models advance and expand to other mediums," says DeepMind's announcement. "SynthID could be expanded for use across other AI models and we're excited about the potential of integrating it into more Google products and making it available to third parties in the near future, empowering people and organisations to responsibly work with AI-generated content."
"To you and me, to a human, [the image] does not change," DeepMind's Pushmeet Kohli told the BBC, explaining that subsequent manipulation will not affect its identification. "You can change the colour, you can change the contrast [...]"
Read more on pcgamer.com