A recent report by the RAND Corporation has raised concerns that the AI models underpinning chatbots could be exploited to help plan biological attacks. The report does not specify which models were tested, but it found that although they did not generate explicit instructions for creating weapons, they could facilitate an attack by bridging gaps in an attacker's knowledge.
The potential misuse of AI in bioweapon planning is expected to be a major concern at an upcoming AI safety summit in the UK, and RAND's research showed that large language models (LLMs), though unnamed in the report, could offer guidance relevant to planning a biological attack.
In one test scenario, an unnamed AI model identified harmful biological agents, discussed their potential to cause mass casualties, and explored the possibility of obtaining plague-carrying rodents or fleas for transport. It also weighed factors such as the size of the affected population and the proportion of deadly plague cases to estimate potential fatalities. To obtain this information, the researchers had to bypass the model's safety restrictions.
In another scenario, the unnamed LLM discussed delivery mechanisms for botulinum toxin, which can cause fatal nerve damage. It also offered advice on a plausible cover story for acquiring Clostridium botulinum while appearing to conduct legitimate scientific research.
These preliminary findings indicate that LLMs could potentially assist in planning a biological attack. The final report will examine whether the models' responses simply mirrored information already available online.
The researchers emphasized the need for rigorous testing of AI models and called on AI companies to limit the openness of LLMs in conversations that raise security concerns.