Are we inadvertently weaponizing artificial intelligence? A recent report by the non-profit think tank RAND Corporation raises the alarm that terrorists could exploit generative AI chatbots to plan a biological attack. Although the AI models tested in the study did not provide explicit instructions for creating a biological weapon, jailbreaking prompts could coax them into assisting with the planning of such an attack.
How AI Chatbots Can Be Misused
The misuse of AI chatbots, particularly in the context of terrorism, is a growing concern. The RAND Corporation’s report highlights how large language models (LLMs) can be manipulated into discussing the planning of a mass-casualty biological attack using agents such as smallpox, anthrax, and the bubonic plague.
The study also revealed that AI models can be prompted to concoct plausible cover stories for purchasing toxic agents. The investigation involved multiple research teams: one using only the internet, another using the internet plus an unnamed LLM, and a third using the internet plus a different unnamed LLM.
Probing AI Vulnerabilities
To evaluate the potential threats posed by AI models, the researchers employed red teams, cybersecurity professionals who specialise in attacking systems to uncover weaknesses. The red teams attempted to elicit problematic responses from the LLMs, though the growing sophistication and built-in safeguards of modern models have made such responses harder to obtain.
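To make the workflow concrete, here is a minimal sketch of what an automated red-team probe harness might look like. Everything in it is an illustrative assumption rather than RAND’s actual methodology: `query_model` is a hypothetical wrapper around whichever LLM API is under test, and the keyword-based refusal check is a deliberately crude placeholder.

```python
# Minimal red-team probe harness sketch. `query_model` is a
# hypothetical stand-in for a call to the LLM under evaluation;
# the probe prompts supplied by the caller should be benign tests.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "unable to help")


def query_model(prompt: str) -> str:
    """Stand-in for a call to the model's API (assumption)."""
    raise NotImplementedError("wire this to the model under test")


def looks_like_refusal(response: str) -> bool:
    """Crude keyword heuristic; real evaluations use trained
    classifiers or human review rather than string matching."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def run_probes(probes: list[str]) -> dict[str, bool]:
    """Map each probe prompt to whether the model appeared to refuse."""
    return {prompt: looks_like_refusal(query_model(prompt))
            for prompt in probes}
```

In practice, professional red teams pair automated probing like this with manual prompt crafting and expert review of the responses.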
Interestingly, researchers at Brown University discovered that ChatGPT’s prompt filters could be bypassed by submitting prompts in languages that are under-represented in its training data, such as Zulu or Gaelic, rather than English.
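The defensive implication is that a safety filter evaluated only on English text leaves a gap. One plausible mitigation, sketched below on the assumption of a hypothetical `translate_to_english` service and a toy `moderate_text` filter, is to normalise input to English before moderating it:

```python
# Sketch of a language-normalising moderation step. Both helpers are
# hypothetical stand-ins: `translate_to_english` for a machine-
# translation service, `moderate_text` for an English-only filter.

def translate_to_english(text: str) -> str:
    """Stand-in for a machine-translation call (assumption)."""
    raise NotImplementedError("connect this to a translation service")


def moderate_text(text: str) -> bool:
    """Toy English-only filter; returns True if the text is flagged.
    Real systems use trained classifiers, not keyword lists."""
    banned = ("bioweapon", "anthrax")
    lowered = text.lower()
    return any(term in lowered for term in banned)


def is_prompt_allowed(prompt: str) -> bool:
    """Check the prompt both as submitted and after translation, so a
    low-resource-language input cannot slip past an English-centric
    filter."""
    return not (moderate_text(prompt)
                or moderate_text(translate_to_english(prompt)))
```

Translating before moderating trades latency and translation cost for coverage; it does not fix the underlying problem, which is that a model’s safety training tends to be weakest in under-represented languages.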
The Imperative for Rigorous Testing
The RAND Corporation report emphasises the urgent need for rigorous testing of AI models, especially in light of the potential risks they pose. The report quotes a petition by the Center for AI Safety, which likens the threat of AI to that of nuclear weapons.
High-profile signatories of the petition include Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, Google DeepMind COO Lila Ibrahim, and U.S. Representative Ted Lieu.
Generative AI tools have been implicated in a range of problematic behaviours, from promoting harmful body images and eating disorders to plotting assassinations. It is clear that the intersection of AI and biotechnology presents unique challenges for risk assessment.
