
How to really spot AI-generated images, with Google’s help

It’s harder than ever to tell AI-generated images from real photographs and illustrations produced by flesh-and-blood human beings. In recent years the fakery produced by AI models has become far more realistic and far more convincing, and we’re now firmly past the uncanny valley.
However, that doesn’t mean it’s impossible to spot AI pictures: There are still signs to watch out for, checks you can make, and tools you can use to distinguish the genuine from the synthetic. As is the case with AI-generated video, you don’t have to give up just yet.
You may not be able to reach a definitive verdict every single time, but in a lot of cases you can make a pretty educated guess. And in an age of disinformation and AI slop, being able to make the distinction is a skill worth honing.
Use AI spotting tools
Some chatbots now embed hidden watermarks in their image outputs, identifying them as AI-generated. These labels vary in robustness (metadata-based tags can be stripped by something as simple as taking a screenshot), but they’re a good place to start when you’re trying to tell whether an image was made by AI.
Anything produced by Google Gemini, for example, will have what’s called a SynthID watermark embedded somewhere in it. To test an image, upload it to Gemini on the web and simply ask, “Was this image made by AI?” Gemini will find the SynthID watermark, if it’s there.
Google’s SynthID can be used to label AI content. Image: Google
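If you’d rather script this check than use the web app, here’s a minimal sketch using Google’s google-genai Python SDK, which sends an image alongside the same question. The file name suspect.jpg is a placeholder, the model name is an assumption that may need updating, and a GEMINI_API_KEY environment variable is assumed to be set.

```python
# Minimal sketch: ask Gemini about an image programmatically.
# Assumptions: the google-genai SDK is installed (pip install google-genai),
# GEMINI_API_KEY is set in the environment, and "suspect.jpg" is a
# placeholder for the image you want to check.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

with open("suspect.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name; swap in a current one
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Was this image made by AI?",
    ],
)
print(response.text)
```

As with the web version, a “no watermark found” answer isn’t proof of authenticity: images from other generators won’t carry SynthID at all.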
There’s another standard way of labeling AI images, developed by the Coalition for Content Provenance and Authenticity (C2PA) and supported by companies including OpenAI, Adobe, and Google. If you head to a C2PA checking website such as Content Credentials, you can upload an image and have it analyzed for evidence of AI creation.
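You can run the same kind of check locally, too. The sketch below assumes you’ve installed the open-source c2patool command-line utility from the Content Authenticity Initiative, and uses suspect.jpg as a placeholder file name; when credentials are present, c2patool prints the manifest report as JSON.

```python
# Minimal sketch: check an image for C2PA Content Credentials locally.
# Assumes the open-source c2patool CLI is installed and on your PATH;
# "suspect.jpg" is a placeholder for the image under inspection.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "suspect.jpg"],
    capture_output=True,
    text=True,
)

if result.returncode == 0 and result.stdout.strip():
    # When a manifest is present, c2patool reports it as JSON, including
    # the generator that produced (or last edited) the image.
    manifest = json.loads(result.stdout)
    print(json.dumps(manifest, indent=2))
else:
    # No manifest found; remember, that alone doesn't prove the image
    # is genuine.
    print("No Content Credentials found (or the file could not be read).")
```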
If an image passes these checks, it’s not a guarantee that it’s genuine—but it’s worth running through them anyway, because they will catch some AI generations, and even tell you which model was used to make the picture in many cases. If you’re still not sure, you can move on to looking at the context around an image.
Check the context
No image is an island: It will have come from somewhere, and been shared by someone. You can rely on respected publications (such as the one you’re reading) to honestly label images that have been generated by AI, and properly attribute other images that haven’t. You’ll know exactly what you’re looking at.
In the wilds of social media, of course, the lines are much more blurred. Here, content is posted and reposted without context or attribution, and it’s much more likely that something on Facebook or X has been faked. That’s especially true if the picture is designed to attract engagement, through controversy or cuteness or any of the other emotional levers that get pulled.
Imagine getting six near-identical kittens to actually line up like this. Image: AI generated, Gemini
Another trick you can try, especially when it comes to images associated with news stories, is to look for complementary pictures taken from different angles. Are the pictures consistent? Do the details match up from different viewpoints and across different time periods? For illustrations and graphic art, you can again check to see if any credits have been applied: See if what you’re looking at has a link back to the artist and their portfolio.
A reverse image search can sometimes reveal where an image has come from, and help you find other copies on the web: TinEye is perhaps the best resource for this. If there are no other matches, that points towards AI—especially if it’s been posted without context on social media, and especially via an account trying to monetize or sell something.
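Under the hood, reverse image search engines match fingerprints of images rather than raw pixels. As a rough illustration of the idea (not TinEye’s actual method), this sketch uses a perceptual hash, via the Pillow and imagehash Python libraries, to test whether two files are copies of the same picture even after resizing or recompression; the file names and the match threshold are placeholder assumptions.

```python
# Illustrative sketch: perceptual hashing, the rough idea behind
# reverse image search matching. Assumes Pillow and imagehash are
# installed (pip install Pillow imagehash); file names are placeholders.
from PIL import Image
import imagehash

hash_a = imagehash.phash(Image.open("original.jpg"))
hash_b = imagehash.phash(Image.open("reposted.jpg"))

# Subtracting two hashes gives the Hamming distance: small distances
# mean the images are almost certainly copies of the same picture.
distance = hash_a - hash_b
threshold = 8  # rule-of-thumb cutoff, not an official value
verdict = "likely the same image" if distance <= threshold else "different images"
print(f"Distance: {distance} -> {verdict}")
```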
Look for the signs
We know AI bots aren’t actually taking any photographs or sketching any pictures: They’re producing approximations of images based on prompts and their training data (which is vast amounts of creative work done by people). That approach can lead to a certain generic sheen that gives away a lot of AI-generated content.
Anime characters look like generic anime characters, trees look like generic trees, and city streets look like generic city streets. There’s even a recognizable ChatGPT font, something like an average of all the fonts ever created, that the bot reverts to whenever you ask for text without specifying a style; you’ll spot it if you try generating a few pictures with text in ChatGPT.
A generic boy on a generic street, with a newspaper showing the standard ChatGPT font. Image: AI generated, ChatGPT
Physics is still a problem, though the errors aren’t as egregious as they used to be. Try rendering a view of a castle or a vast office block interior in an AI bot and you’ll notice turrets appear in pointless places, staircases lead to nowhere, and elevator doors don’t actually lead to elevators. There are often logical inconsistencies, because AI doesn’t really understand buildings or interior space, just how to create a decent simulation of them in visual form.
We may be past the point of six-fingered hands, but faces and limbs regularly look squished and unnatural, and details are often fuzzy and blurred. Some of these problems will be easier to spot than others, but with a little practice, and a few test renders of your own, you’ll get better at identifying them.