Watermarking AI-generated content: that is the answer companies in the sector are putting forward to limit disinformation and deepfakes. Google has just presented its own attempt, which remains incomplete.
Content-generating artificial intelligences are becoming so capable that it is often impossible to tell real from fake. And this can have real-world consequences, as we saw again last spring with the fake image of an explosion at the Pentagon that briefly sent Wall Street tumbling.
A watermark invisible to the eye
The solution put forward by the tech giants is a watermark on AI-generated content, an approach the industry advocated during a meeting on the topic at the White House. Implementation, however, promises to be very complex, as Google’s latest initiative shows.
DeepMind, Google’s specialized AI division, has unveiled a new tool called SynthID that watermarks AI images. The watermark is invisible to the eye but embedded directly into the pixels of the image, so it remains detectable by systems that know where to look.
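DeepMind has not published how SynthID actually alters pixel values. As a rough illustration of the general idea of a mark that is invisible to the eye yet machine-detectable, here is a toy sketch using the classic least-significant-bit technique; this is emphatically not SynthID’s method, and all names and parameters below are hypothetical.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Toy LSB scheme (not SynthID): hide a bit pattern in the least
    significant bit of the first pixels. Each value shifts by at most
    1/255, which is imperceptible to a human viewer."""
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def detect_watermark(image: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the expected bit pattern is present in the LSBs."""
    flat = image.flatten()
    return np.array_equal(flat[: bits.size] & 1, bits)

# Toy usage: a random 8-bit "image" and a 64-bit signature.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
signature = rng.integers(0, 2, size=64, dtype=np.uint8)

marked = embed_watermark(img, signature)
print(detect_watermark(marked, signature))  # True
print(detect_watermark(img, signature))     # almost certainly False
```

A naive LSB mark like this is destroyed by any re-compression or resizing, which is precisely why a robust scheme of the kind DeepMind describes, one meant to survive crops, filters and edits, is much harder to build.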
SynthID is currently available only to a handful of DeepMind customers, and for now the watermarking is limited to images generated with Imagen, Google’s photorealistic image-creation model. That greatly narrows the scope of the technology, which stays confined to Google’s ecosystem. Given the proliferation of AI image-generation tools, SynthID could well end up dead in the water.
DeepMind is not disclosing the technical details of its watermark, to prevent clever people from finding a way around it. It is only a matter of time, though: SynthID’s secrets will eventually come out, especially since the company intends to share them with partners if the system proves effective. Watch out for leaks!
This type of watermark will only be truly relevant if the whole industry comes together to design a common standard that is as tamper-resistant as possible. Google does indicate that SynthID can be adapted to other AI models. The other pitfall is that all the watermarks in the world will not stop a spectacular AI-generated image from going viral.
DeepMind has no illusions: the proposed method is not a bulwark against extreme image manipulation. Rather, it is a tool that helps organizations work responsibly with AI-generated content.
Source: DeepMind