
Spot AI-generated images with these simple steps
  • AI-generated images are becoming increasingly realistic, making them difficult to distinguish from real photos.
  • There are a number of techniques that can be used to identify AI-generated images, such as looking for unnatural textures or backgrounds.
  • We give you a checklist to help you detect AI-generated images.
For some time now, the general public has been cautioned about the dangers of images generated by artificial intelligence (AI), commonly referred to as deepfakes. Until recently, however, distinguishing an AI-generated image from a photograph was relatively straightforward. That is no longer the case.

Perhaps you've come across pictures depicting former president Donald Trump's arrest or the Pope sporting a fashionable, pristine white puffer coat. These viral sensations were the outcome of artificial intelligence systems that utilise textual prompts to fabricate images. They serve as evidence of how rapidly these programmes have improved, reaching a point where they can convincingly deceive an unsuspecting observer.

In just a few months, publicly accessible AI image-generation tools have gained remarkable capabilities, enabling the creation of photorealistic imagery. Although the image of the Pope exhibited some evident signs of manipulation, it was still persuasive enough to deceive numerous internet users, including celebrity Chrissy Teigen. "I genuinely believed the Pope's puffer jacket was genuine and never questioned it. I can't imagine what lies ahead in terms of technological advancements," she said.

While victims of deepfakes – particularly women subjected to nonconsensual deepfake pornography – have long emphasised the risks associated with this technology, the accessibility and power of image-generation tools have significantly increased in recent months. These tools now produce fabricated images of superior quality across various categories. As AI continues to progress rapidly, determining the authenticity of an image or video will become increasingly challenging. This could have profound implications, such as increased public susceptibility to foreign influence operations, targeted harassment of individuals, and reduced trust in the news.

To assist in identifying AI-generated images in the present, and to avoid falling prey to even more convincing iterations of this technology in the future, here are some essential tips.
Here’s how you can spot AI-generated images
Assess title, description and tags
While opinions differ on the necessity of disclosing the use of AI in image posts, those who choose to disclose usually include the information in the title or description. The comments section may also provide clues, as authors might mention the AI involvement.

In addition to the title, description, and comments section, examining the profile page can offer further hints. Keywords like Midjourney or DALL-E, the names of popular AI art generators, can indicate that the images may be AI-generated.
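
As a purely illustrative Python sketch, a simple keyword scan across a post's title, description and comments can surface mentions of popular generators. The keyword list and sample texts below are assumptions for demonstration; a match is only a hint, and the absence of one proves nothing.

    # Illustrative keyword scan for mentions of popular AI art generators.
    AI_KEYWORDS = ("midjourney", "dall-e", "dalle", "stable diffusion", "ai-generated")

    def mentions_ai_generator(*texts: str) -> list[str]:
        """Return any AI-generator keywords found across the given text fields."""
        combined = " ".join(texts).lower()
        return [keyword for keyword in AI_KEYWORDS if keyword in combined]

    hits = mentions_ai_generator(
        "Pope Francis in a white puffer jacket",   # hypothetical post title
        "Made with Midjourney v5",                 # hypothetical description
    )
    print(hits)  # ['midjourney']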
Find the image source
To verify the authenticity of an image, seek out its source. Check the comments below the picture for information about where it was originally posted. Alternatively, conduct a reverse image search using tools like Google's reverse image search, TinEye or Yandex to locate the image's original source. The search results may also surface fact checks by reputable media outlets, offering additional context. If you prefer to script this step, see the sketch below.
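
The minimal Python sketch below opens reverse image searches for an image that is already hosted at a public URL. The query URL formats for Google Lens, TinEye and Yandex are assumptions about how these services accept an image URL; check each site's interface before relying on them.

    # Open reverse image search pages for a publicly hosted image URL.
    import webbrowser
    from urllib.parse import quote

    def reverse_image_search(image_url: str) -> None:
        """Open reverse image searches for the given image URL in the default browser."""
        encoded = quote(image_url, safe="")
        searches = {
            "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
            "TinEye": f"https://tineye.com/search?url={encoded}",
            "Yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
        }
        for name, url in searches.items():
            print(f"Opening {name}...")
            webbrowser.open(url)

    # reverse_image_search("https://example.com/suspicious-photo.jpg")  # hypothetical URL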
Beware of ultra-smooth textures
Watch out for AI-generated textures that exhibit an unnaturally smooth appearance, resembling glossy plastic-like skin. Pay close attention to these details to detect potential manipulation.
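
One rough, hedged way to quantify this is to measure texture with OpenCV: the variance of the Laplacian is a common sharpness proxy, and very low values on skin or surfaces can hint at the over-smooth look described above. The threshold and file name below are arbitrary assumptions for illustration, not a calibrated detector.

    # Rough smoothness heuristic: lower Laplacian variance means smoother texture.
    import cv2

    def texture_variance(image_path: str) -> float:
        """Return the variance of the Laplacian of a greyscale image."""
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            raise FileNotFoundError(image_path)
        return cv2.Laplacian(image, cv2.CV_64F).var()

    score = texture_variance("portrait.jpg")  # hypothetical file name
    print(f"Texture variance: {score:.1f}")
    if score < 50:  # arbitrary threshold, for illustration only
        print("Unusually smooth; inspect skin and surfaces more closely.")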
Look for a watermark
An additional valuable indicator for identifying an AI-generated image is the presence of a watermark. DALL-E 2, for example, applies a watermark to every photo downloaded from its website, although it may not be immediately noticeable. The watermark sits in the bottom right-hand corner and consists of five squares coloured yellow, turquoise, green, red and blue. If you spot this watermark on an image, you can confidently conclude that it was generated using DALL-E 2.
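
A minimal Pillow sketch along these lines simply crops and enlarges the bottom right-hand corner of a downloaded image so you can look for the five-square signature yourself. The crop size and file name are assumptions for illustration.

    # Crop and enlarge the bottom right-hand corner for manual watermark inspection.
    from PIL import Image

    def show_bottom_right_corner(image_path: str, size: int = 120) -> None:
        """Display an enlarged crop of the bottom right-hand corner of an image."""
        image = Image.open(image_path)
        width, height = image.size
        corner = image.crop((width - size, height - size, width, height))
        corner.resize((size * 4, size * 4), Image.NEAREST).show()  # enlarge for easier viewing

    show_bottom_right_corner("downloaded_image.png")  # hypothetical file name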
Examine the background
The background of an image can serve as a revealing factor in determining whether it has been manipulated. In manipulated images, objects in the background may exhibit deformations, such as distorted street lamps.
Zoom in and look carefully
AI-generated images often possess a convincing appearance upon initial observation. Hence, our primary recommendation is to scrutinise the image carefully. To achieve this, search for the highest available resolution of the picture and proceed to zoom in on the finer details. Enlarging the image will unveil any inconsistencies or errors that may have been overlooked at first glance.
Do technological solutions exist for detecting AI-generated images?
Numerous commercially available software products claim to be able to detect deepfakes, including an offering from technology giant Intel that claims 96% accuracy in identifying deepfake videos. However, there are few, if any, free online tools that can reliably determine whether an image has been generated by AI. A free AI image detector hosted on the Hugging Face AI platform managed to identify the AI-generated image of the Balenciaga Pope with a certainty of 69%.
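
For readers comfortable with Python, a hedged sketch of that kind of check uses the Hugging Face transformers image-classification pipeline. The article does not name the specific detector, so the model name below is an assumption (one of several community AI-image detectors on the Hub); treat any score as a hint rather than proof.

    # Run a community AI-image detector from the Hugging Face Hub on a local file.
    from transformers import pipeline

    # Model name is an assumption: swap in whichever detector you trust.
    detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")
    results = detector("suspect_image.jpg")  # hypothetical local file (a URL also works)
    for result in results:
        print(f"{result['label']}: {result['score']:.0%}")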

During Google I/O 2023, Google unveiled upcoming features designed to aid users in detecting AI-generated fake images within search results. These features aim to identify the image source and label other AI-generated images, thus combating the proliferation of disinformation.

One of the tools introduced is called 'About this image,' which allows internet users to swiftly assess the credibility of images. By clicking on three dots displayed alongside an image in Google Images results, by using a picture or screenshot in Google Lens, or by swiping up in the Google app, users can access information about an image's history, indexing and initial appearance.

The 'About this image' feature will be gradually rolled out over the next few months – it will initially be available to users in the United States and will support only English. Later in the year, Google plans to make the tool accessible by right-clicking or long-pressing an image in Chrome on both desktop and mobile devices. This contextual information will help users make informed judgments about an image's reliability.
Critically analyse what you see
Currently, relying on media literacy techniques is likely the most effective strategy for keeping up with AI-generated images. No checklist will catch 100% of fake images, but asking a few critical questions can help you identify many more of them and protect yourself from various forms of misinformation. Consistently ask yourself: What is the source of this image? Who is sharing it, and for what purpose? Does it align with other reliable information available to you? By applying these critical thinking skills, you can enhance your ability to navigate the realm of AI-generated images and combat misinformation.
