A two-month-old baby lies amid the rubble of a bombed-out building in the Gaza Strip, with no adult around to help her. It's no wonder that pro-Palestinian users have shared this image across social media platforms in recent weeks, attempting to amplify the Palestinian tragedy worldwide. The image itself, however, is fake.
A check conducted by Israeli startup Tasq.ai determined unequivocally that the photo was generated by an artificial intelligence-based image generator. A brief internet search showed the image was posted on the social network LinkedIn nine months ago – long before Israel’s war against Hamas began.
For now, this phenomenon remains on the sidelines: most disseminators of fake content on social media prefer to share images from previous conflicts, such as those in Ukraine or Syria, or even from video games, presenting them as authentic documentation from Israel or Gaza.
However, as the war continues, more and more artificially generated images depicting events that never took place are surfacing on social media. This trend adds to the chaos and confusion surrounding the war, making it harder for users in Israel and around the world to form a clear picture of what is actually happening.
The following photo appears to be a moving document of a family fleeing Israeli bombings in the Gaza Strip: the father, eyes closed and face covered in dust, carries four children on his body while holding the hand of a five-year-old. It's a painful testament to Palestinian suffering in the war.
Now, focus on the left side of the image: could one of the children's legs really be peeking out of a third sleeve protruding from the father's shirt? And why does the baby have only three toes on her left foot? On close inspection, it’s clear this image is fake as well.
This practice isn’t exclusive to the Palestinian side: images generated by artificial intelligence have been circulated in recent weeks by Israeli users as well. While on the Palestinian side the technology is used to fabricate horrifying images, on the Israeli side the goal seems to be boosting morale with images of soldiers in Gaza bravely facing the enemy and rescuing captives. The problem is that these images keep spreading on social networks without any indication of how they were created, and they may mislead some users, especially those who don’t live in Israel.
In one of these images, IDF soldiers are seen walking among ruins in the Gaza Strip with Israeli flags in their hands. At first glance, it's a fairly convincing image, but upon closer inspection, it can be seen that some of the flags don’t feature the Star of David but another symbol – an error typical of AI image generators.
In another image, IDF soldiers appear to be returning from an operation in which they seemingly freed two young women and three toddlers from Hamas captivity. However, some details in the image, such as the distorted face of one of the soldiers and the striking resemblance between some of the figures, indicate this image was generated by AI.
This phenomenon would not have been possible if artificial intelligence tools such as DALL-E, Midjourney, and Stable Diffusion hadn’t been developed in the last two years. These tools are capable of creating entirely realistic images based on textual descriptions.
Anyone can type a line such as "Two Palestinian children covered in blood standing amid the rubble of a building in the Gaza Strip next to their mother's body," and the generator will produce exactly that.
These image generators are accessible for free or for a nominal fee of $10 per month, and over time they continue improving, producing increasingly convincing results. One of their most prominent issues – very distorted human fingers – has been almost completely resolved in the tools’ latest versions.
Such image generators help businesses, organizations and designers create material quickly and at low cost, but they also enable malicious groups to create fake images of politicians, public figures and celebrities.
In such a world, it's challenging for users to distinguish between what is real and what is fake, leading them to suspect everything they see. “The specter of deepfakes is much, much more significant now — it doesn’t take tens of thousands, it just takes a few, and then you poison the well and everything becomes suspect,” Hany Farid, a computer science professor at the University of California, Berkeley, told The New York Times.
AI images in foreign media
This problem isn’t just found on social media platforms. Fake images of the war have also reached foreign news websites after being purchased from Adobe Stock, which sells professional images to organizations and media outlets.
In a search on Adobe's platform this week, Ynet found fairly realistic images depicting, among other things, IDF attacks in Gaza, military tanks in the Strip, and even children dug out from the rubble or watching it from a distance.
These images were uploaded by users, not by Adobe itself, and their descriptions state they were created using generative AI. The problem is that the websites buying these images don't always make clear to their readers that these aren't real photographs, adding to the confusion that already surrounds the war and disorienting their readers.
All images in this article bearing the "Fake" watermark were identified as AI-generated by Tasq.ai, which has developed advanced algorithms that, combined with a large pool of human testers, expedite the training of artificial intelligence models and verify their quality after training.
In an inspection conducted by the startup, the generated images, alongside genuine ones, were distributed to 37 random users worldwide. According to Tasq.ai, when users are presented with a genuine image, at least 90% correctly identify it as such.
When shown a generated image, at least 20% identify it as fake. To obtain a clear answer on images for which there is no clear agreement (those that 80% to 90% of users believe to be real), it is necessary to assess them with a specific user audience that has the relevant context for the image.
Only one image managed to confuse the human checkers: 89% of users thought an image depicting tents in the colors of the Israeli flag on the beach was genuine. However, when the file was examined by Israeli users, none of them believed it to be a real image. This underscores the importance of context and relevant prior knowledge when trying to identify AI-generated images.
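Tasq.ai hasn't published its exact decision rule, but the thresholds quoted above suggest a simple majority-vote scheme. The sketch below is a hypothetical Python illustration built only from the figures in this article; the function name and structure are invented for the example.

```python
# A minimal sketch of the agreement-threshold rule described above.
# The 90% and 80% cut-offs come from Tasq.ai's figures in this article;
# the function name and structure are hypothetical.

def classify_by_votes(real_votes: int, total_votes: int) -> str:
    """Classify an image from the share of reviewers who called it real."""
    share_real = real_votes / total_votes
    if share_real >= 0.9:
        return "likely genuine"       # at least 90% agreement -> genuine
    if share_real < 0.8:
        return "likely AI-generated"  # a clear minority thinks it's real
    # 80%-90% band: no clear agreement; route the image to reviewers
    # with the relevant context (e.g., local users) for a second pass.
    return "needs context-specific review"

# Example: 33 of 37 reviewers (~89%) judged the tent image genuine,
# placing it in the ambiguous band that triggers the second pass.
print(classify_by_votes(33, 37))  # -> needs context-specific review
```

Under this reading, the tent image from the Tasq.ai test falls squarely in the ambiguous band, which is exactly why it was re-checked with Israeli users who had the relevant local context.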
How to distinguish AI-generated images
So, how can you identify images created using artificial intelligence? There are automated tools on the web like AI or Not, but bear in mind that they aren’t always accurate. The good news is that, in many cases, you can rely on the naked eye.
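For readers who want to script such checks, the snippet below shows what querying a web-based detector might look like. This is a hypothetical sketch: the endpoint, parameters and response field are invented for the example and do not describe AI or Not's actual API.

```python
# Hypothetical example of querying an AI-image detection service.
# The URL and response field below are placeholders, not a real API.
import requests

def check_image(path: str) -> float:
    """Upload an image and return the service's estimated probability
    (0.0-1.0) that it is AI-generated."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://detector.example.com/v1/classify",  # placeholder URL
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # hypothetical response field

score = check_image("suspect_photo.jpg")
print(f"Estimated probability the image is AI-generated: {score:.0%}")
```

As noted above, any such score should be treated as a hint rather than a verdict – these tools aren't always accurate.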
Tasq.ai CEO Erez Moscovich recommends focusing on the smaller details. Fingers, for example, are elements that some image generators struggle to recreate convincingly. Carefully check whether there is anything unusual about the faces, teeth and ears of the people in the image.
Note that older versions of image generators sometimes create faces with distorted appearances, while the images generated by newer versions are characterized by smooth and almost perfect skin. Unusual physical resemblance among individuals in the same image should also raise a red flag.
If the image includes text or symbols, check their accuracy carefully – this is another significant weakness of artificial intelligence models, as seen in some of the images in this article. Unfortunately, image generators are improving at breakneck speed, and it's unclear whether these tips will still be helpful in a few months.