The most reliable AI image detectors can be tricked by simply adding texture to an image — a worrying find as AI disinformation plagues the internet and threatens political campaigns

Adding grain to AI-generated images drops the likelihood of detection from 99% to 3.3%. Andrew Kelly/Reuters
  • Adding grain to AI-generated images makes them harder to identify as fake, the New York Times reports.
  • The likelihood of detection drops from 99% to 3.3% when pixelated noise is added to images.

From falsified campaign ads to stolen artwork, AI-generated images have been responsible for a wave of disinformation online in recent months.

Now, the New York Times reports that AI detection software — one of the frontline defenses against the spread of AI-generated disinformation — can be fooled simply by adding grain to AI-generated images.

The Times' analysis shows that when an editor adds grain — that is, texture — to an AI-generated photo, the likelihood of software identifying the image as AI-generated goes from 99% to just 3.3%. Even the software Hive — which showed one of the best success rates in the Times' report — could no longer correctly identify an AI-generated photo after editors made it more pixelated.
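The Times does not publish the exact editing steps its analysts used. As a rough sketch of what "adding grain" can mean in practice, the Python snippet below overlays Gaussian noise on an image using the Pillow and NumPy libraries; the file names and the noise strength (sigma) are placeholder assumptions for illustration, not values or methods from the report.

```python
# Illustrative sketch only: overlays film-grain-style Gaussian noise on an image.
# This is not the Times' or Hive's methodology; "input.png" and the noise
# strength (sigma) are placeholder assumptions.
import numpy as np
from PIL import Image

def add_grain(path_in: str, path_out: str, sigma: float = 12.0) -> None:
    """Add Gaussian noise ("grain") to an image and save the result."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.normal(loc=0.0, scale=sigma, size=img.shape)
    grainy = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(grainy).save(path_out)

if __name__ == "__main__":
    add_grain("input.png", "input_grainy.png")
```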


As a result, experts warned that detection software should not be the only line of defense for companies trying to combat misinformation and prevent the distribution of these images.

"Every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator," Cynthia Rudin, a computer science and engineering professor at Duke University, told the Times.


The Times' analysis comes at a time when users are increasingly deploying AI-generated misinformation online to influence political campaigns, Insider reported. Ron DeSantis' presidential campaign, for instance, distributed fake images of Donald Trump and Anthony Fauci earlier this month.
