Canadian researchers create tool to remove anti-deepfake watermarks from AI content

OTTAWA — University of Waterloo researchers have built a tool that can quickly remove watermarks identifying content as artificially generated — and they say it proves that global efforts to combat deepfakes are most likely on the wrong track.

Financial Post

Academia and industry have focused on watermarking as the best way to fight deepfakes and “basically abandoned all other approaches,” said Andre Kassis, a PhD candidate in computer science who led the research.

At a White House event in 2023, the leading AI companies, including OpenAI, Meta, Google and Amazon, pledged to implement mechanisms such as watermarking to clearly identify AI-generated content.

AI companies’ systems embed a watermark, which is a hidden signature or pattern that isn’t visible to a person but can be identified by another system, Kassis explained.
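
The companies' actual watermarking schemes are proprietary, but the underlying idea of a hidden, machine-readable signature can be illustrated with a deliberately simplified sketch. The example below (hypothetical, not any vendor's real method) hides a bit pattern in the least-significant bits of pseudo-randomly chosen pixel values: each pixel changes by at most 1, so a person sees nothing, while a detector holding the secret key can re-derive the positions and find the pattern.

```python
import random

def embed_watermark(pixels, key, payload_bits):
    """Hide payload bits in the least-significant bits of pixels at
    pseudo-random positions derived from a secret key. Each pixel value
    changes by at most 1, so the mark is invisible to the eye."""
    rng = random.Random(key)                         # key-derived position sequence
    positions = rng.sample(range(len(pixels)), len(payload_bits))
    out = list(pixels)
    for pos, bit in zip(positions, payload_bits):
        out[pos] = (out[pos] & ~1) | bit             # overwrite the lowest bit
    return out

def detect_watermark(pixels, key, payload_bits):
    """Re-derive the same positions from the key and count how many
    payload bits survive; declare a match if nearly all of them do."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), len(payload_bits))
    matches = sum((pixels[pos] & 1) == bit
                  for pos, bit in zip(positions, payload_bits))
    return matches / len(payload_bits) >= 0.9

random.seed(0)                                       # deterministic demo data
pixels = [random.randrange(256) for _ in range(1000)]
payload = [1, 0, 1, 1, 0, 0, 1, 0] * 4               # 32-bit hidden signature
marked = embed_watermark(pixels, key=42, payload_bits=payload)

print(detect_watermark(marked, 42, payload))         # → True
```

Real schemes are far more robust, spreading the signal across frequency components so it survives cropping and compression; a toy LSB mark like this one is trivially destroyed by any re-encoding, which hints at why removal attacks such as UnMarker are plausible.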

He said the research shows the use of watermarks is most likely not a viable shield against the hazards posed by AI content.

“It tells us that the danger of deepfakes is something that we don’t even have the tools to start tackling at this point,” he said.

The tool developed at the University of Waterloo, called UnMarker, follows other academic research on removing watermarks. That includes work at the University of Maryland, a collaboration between researchers at the University of California and Carnegie Mellon, and work at ETH Zurich.

Kassis said his research goes further than earlier efforts and is the “first to expose a systemic vulnerability that undermines the very premise of watermarking as a defence against deepfakes.”

In a follow-up email statement, he said that “what sets UnMarker apart is that it requires no knowledge of the watermarking algorithm, no access to internal parameters, and no interaction with the detector at all.”

When tested, the tool worked more than 50 per cent of the time on different AI models, a university press release said.

AI systems can be misused to create deepfakes, spread misinformation and perpetrate scams — creating a need for a reliable way to identify content as AI-generated, Kassis said.

After AI tools became too advanced for post-hoc detection software to keep up, attention turned to watermarking.

The idea is that if we cannot “post facto understand or detect what’s real and what’s not,” it’s possible to inject “some kind of hidden signature or some kind of hidden pattern” earlier on, when the content is created, Kassis said.

The European Union’s AI Act requires providers of systems that put out large quantities of synthetic content to implement techniques and methods to make AI-generated or manipulated content identifiable, such as watermarks.

In Canada, a voluntary code of conduct launched by the federal government in 2023 requires those behind AI systems to develop and implement “a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking).”
