Despite attempts to prevent misuse, AI programs can be abused for terrorist purposes, a new Israeli study found. The study, titled “Terror: The Risks of Generative AI Exploitation,” found that terrorists could use AI to spread propaganda, recruit followers, raise funds, and even launch cyberattacks more efficiently. Cyberterrorism expert Gabriel Weimann of the University of Haifa, who published the study, described it as “one of the most alarming” pieces of research of his career.
Weimann conducted the study with a team of interns from Reichman University’s International Institute for Counter-Terrorism (ICT). Col. (ret.) Miri Eisin, the managing director of ICT, called the study’s findings “exceedingly disturbing.”
“It means that they’ll be able to create way more fake news, fake platforms, lies, and denials, in ways that we’re already seeing in this conflict against Hamas,” she said. The researchers used a variety of methods to bypass the AI programs’ counterterrorism safeguards and, in the end, managed to get around them about half the time.
In one concerning example, the researchers asked an AI platform for help raising funds for the Islamic State group. The platform provided detailed instructions on how to conduct the campaign, including what to say on social media.
Whenever the researchers circumvented a given AI program’s safeguards, Weimann sent a report to the company behind the program to inform it. Many of the companies never responded.
The study also found that emotionally charged prompts were the most effective at bypassing safety measures, succeeding 87% of the time. “If somebody is personal about something, if somebody is emotional about something, it manages to not be monitored in the same way, and it allows a lot more content, which can be completely negative, horrible content, to get through the monitoring capabilities,” Eisin explained.