During the current war in Gaza, unlike previous conflicts, one of the new threats that may affect the course of the war is the use of artificial intelligence for disinformation, as in Hamas's deep-fake videos. Beyond disinformation, Hamas now combines advanced graphics with artificial intelligence to spread its propaganda, incite "lone wolf" attacks, and garner support among the Palestinian public. The potential of such advanced technology has not gone unnoticed by other terrorist organizations either.
Jihadi terrorists have been early adopters of such emerging technologies, reflecting their belief that they should turn Western advancement against the West. The militant group known as the "Resistance Front" (Tehreeki-Milat-i-Islami) uses fake videos and photos to manipulate Indian youth and incite violence. Even in the US and UK, deep-fake images of political leaders are being used to influence public opinion.
Terrorist recruitment online has become easier
In the digital age, the proliferation of social media platforms has made it significantly easier for malicious actors to manipulate public opinion and sow discord. The former al-Qaeda leader Osama bin Laden used email to convey plans for the September 11 attacks. A later al-Qaeda figure, Anwar al-Awlaki, made extensive use of YouTube for outreach and messaging, earning the nickname "Bin Laden of the Internet" while recruiting an entire generation of followers in the West. Over more than 20 years on the Internet and social media, terrorist organizations have consistently looked for new ways to maximize their online activity: to plan attacks, to outsmart the West's countermeasures, and to recruit supporters. This is how AI has become a game changer in their jihad against the world.
Chatbots allow terrorist groups to circumvent the law in order to recruit
Communication platforms such as chat apps have unique potential to become powerful tools for terrorist actors to drive incitement and engage new recruits. Chatbots in particular can normalize extreme ideologies and foster a sense of cohesion among disparate extremist groups. Jonathan Hall KC, the British government's independent reviewer of terrorism legislation, conducted an experiment on the Character.ai website, where users can converse solely with AI chatbots created by other users. Hall 'spoke to' several bots apparently designed to mimic the responses of militant and other extremist groups. One even claimed to be "a senior leader of the Islamic State." Hall said the bot tried to recruit him and expressed "complete dedication and devotion" to the extremist group. Because the messages were not created by a person, Hall noted, no crime was being committed under British law. His experiment shows how easily AI tools can be used, and how extremist actors can exploit both the tools and the law.
Drones are increasingly used in remote physical terrorist attacks
A report published by the British government in October 2023 warned that by as early as 2025, generative artificial intelligence could be used "to gather knowledge about physical attacks by violent non-state actors, including for chemical, biological and radiological weapons." There have already been reports of terrorists experimenting with armed drones and other remotely controlled technologies. ISIS, for example, is known to use drones equipped with explosives in its attacks. In 2021, two drone-borne explosions at an Indian Air Force base in Jammu pointed to the involvement of the Pakistani terrorist organization Lashkar-e-Taiba. The drones delivered their explosive charges to their targets with great accuracy.
Hezbollah and Hamas have help from Iran to build attack drones
Hezbollah's drone program is also growing and developing; the organization now fields a fleet that includes Iranian-made drones. Hamas, as part of the "Axis of Resistance," has long-standing strategic cooperation with Hezbollah and Iran, and that cooperation has expanded into these fields of technology. In 2021, a new threat from Hamas emerged when it released a video of its newly developed "Shahab" suicide drone, loaded with built-in warheads.
ISIS attack drones are now also well established
ISIS began using its own drones in 2013. Unlike other terrorist organizations, however, ISIS's drone program is not backed by advanced military capabilities or government agencies. Instead, the Salafist organization uses easily accessible commercial technologies in a 'do-it-yourself' (DIY) manner. In January 2017, it was reported that ISIS had established a new military division, "The Unmanned Aircraft of the Mujahideen," with the express purpose of advancing ISIS's drone capabilities and integrating them into its operational strategies.
In conclusion, the rise of AI among terrorist organizations is not only a critical threat to national security but also a force multiplier for their destructive agendas. The chilling potential of AI in the hands of terrorists makes it all the more important for the international community to work together and develop robust strategies to prevent the misuse of such new technologies. We are at the beginning of a new era, in which the use of artificial intelligence is expanding into ever more areas of life. The more accessible this technology becomes, the easier it is for terrorist organizations to use, which is why we need to be particularly vigilant in this area.
- Eran Lahav is a researcher of terrorism specializing in global Jihad and Iranian proxies. He is the founder of "Mabaterror" and a member of IDSF-Habithonistim - Israel's Defense and Security Forum