Developer Claims to Reverse-Engineer Google's AI Watermarking System
A software developer says they've cracked Google DeepMind's SynthID watermarking technology, though Google disputes the claim.

A software developer using the username Aloshdenny claims to have reverse-engineered Google DeepMind's SynthID watermarking system, which is designed to identify AI-generated images. The developer has published their work on GitHub along with documentation of their process.
According to the developer, the reverse-engineered system can both strip AI watermarks from generated images and insert watermarks into images that were not AI-generated. If true, this would undermine the effectiveness of Google's technology, which was created to help distinguish human-created content from AI-generated content.
Google DeepMind's SynthID system embeds invisible watermarks into AI-generated images to help identify their artificial origin. The watermarking technology has been positioned as a tool to combat misinformation and help users identify AI-created content across digital platforms.
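To give a sense of what "invisible watermarking" means in principle, here is a minimal toy sketch using least-significant-bit (LSB) embedding. This is purely illustrative: SynthID's actual algorithm is proprietary and unpublished, and is far more robust than this scheme, which a simple re-encode or crop would destroy. All function names below are invented for the example.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: list) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels.

    NOTE: a toy LSB scheme for illustration only. SynthID's real embedding
    is not public and does not work this way.
    """
    out = image.copy()
    flat = out.reshape(-1)  # view into out: writes modify the copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract_watermark(image: np.ndarray, n_bits: int) -> list:
    """Read back the LSB watermark from the first n_bits pixel values."""
    flat = image.reshape(-1)
    return [int(v & 1) for v in flat[:n_bits]]

# Embedding changes each touched pixel by at most 1 intensity level,
# which is why such marks are invisible to the eye.
img = np.full((4, 4), 128, dtype=np.uint8)
marked = embed_watermark(img, [1, 0, 1, 1])
print(extract_watermark(marked, 4))  # → [1, 0, 1, 1]
```

The key property the toy shares with real systems is asymmetry of intent: the mark is imperceptible to viewers but recoverable by a detector that knows where to look. The dispute described in this article is over whether that detector logic can be inferred and defeated by an outsider.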
Google has disputed the developer's claim to have successfully reverse-engineered SynthID, though the company has not explained specifically why it believes the claim is inaccurate.
The controversy highlights ongoing challenges in developing robust systems to identify AI-generated content. As artificial intelligence tools become more sophisticated and widely available, the ability to reliably detect AI-created materials remains a significant technical challenge for both companies and regulators.