Of the things artificial intelligence is good at, trawling through thousands of chemical compounds to identify drug candidates is easily one of the best. However, researchers have found it is remarkably good at thinking up possible chemical weapons too – scarily good.
In a recent study published in the journal Nature Machine Intelligence, a team from the pharmaceutical company Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. In just 6 hours, it identified 40,000 potential new chemical weapons, some remarkably similar to the most potent nerve agent ever created.
In an interview with The Verge, the researchers said they were shocked by just how easy it was.
“For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets,” Fabio Urbina, lead author of the paper, told The Verge.
“So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.”
To redirect the AI towards suggesting compounds that cause harm instead of healing, the researchers simply needed to point it at toxicity.
They took their AI, MegaSyn, which normally rewards bioactivity (how well a drug interacts with its target) and penalizes toxicity, and inverted the toxicity penalty while keeping the bioactivity reward, so the model now scored compounds more highly the more toxic they were.
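The idea can be illustrated with a minimal Python sketch. The function names and weights here are purely hypothetical (the paper does not publish MegaSyn's scoring code); the point is only that the "inversion" amounts to flipping the sign of one term in the objective:

```python
# Hypothetical illustration of the scoring flip described above.
# These functions and values are illustrative, not MegaSyn's actual code.

def drug_discovery_score(bioactivity: float, toxicity: float) -> float:
    """Normal mode: reward on-target bioactivity, penalize toxicity."""
    return bioactivity - toxicity

def inverted_score(bioactivity: float, toxicity: float) -> float:
    """Inverted mode: the toxicity penalty becomes a reward."""
    return bioactivity + toxicity

# A highly bioactive but highly toxic candidate is filtered out by the
# normal objective, yet ranks near the top of the inverted one.
candidate = {"bioactivity": 0.9, "toxicity": 0.8}
print(drug_discovery_score(**candidate))  # low score
print(inverted_score(**candidate))        # high score
```

A one-character change in the objective is, in essence, the "low barrier of entry" Urbina describes: the generative machinery stays exactly the same, only the direction of optimization changes.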
In the 6 hours they ran the AI, it produced some frightening results. Directed at generating nerve agent-like compounds, it suggested VX, the most potent nerve agent ever created and the poison used to assassinate Kim Jong-un’s half-brother Kim Jong-nam, along with other agents used in chemical warfare.
It then went further, designing agents predicted to be even more toxic than VX. The researchers stated that while the predictions have not been verified and they “certainly don’t want to verify that” themselves, MegaSyn’s predictive models have so far been reliable. There will likely be some false positives, and each compound would have to be synthesized before it could be tested, so it is unclear exactly how many of these compounds would actually be toxic.
The team believes this should be an eye-opening moment for the use of AI in drug discovery, highlighting how easily these algorithms can be misused.
“Without being overly alarmist, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community,” the authors write.
“The reality is that this is not science fiction. We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities?”