Digital ‘Twins’ of Human Patients Can Be Developed Using AI to Speed Up Drug Development: FDA

"Sophia" an artificially intelligent human-like robot in Geneva, on June 7, 2017. (Fabrice Coffrini/AFP/Getty Images)
Naveen Athrappully
5/12/2023 | Updated: 5/12/2023

The U.S. Food and Drug Administration (FDA) has shed light on the possibility of using artificial intelligence (AI) and machine learning (ML) in the drug development process, pointing to benefits such as the creation of digital versions of human patients.

“AI/ML’s growth in data volume and complexity, combined with cutting-edge computing power and methodological advancements, have the potential to transform how stakeholders develop, manufacture, use, and evaluate therapies. Ultimately, AI/ML can help bring safe, effective, and high-quality treatments to patients faster,” the FDA said in a May 10 post. Machine learning, a subset of AI, uses algorithms and models that allow computers to learn and adapt without explicit instructions, imitating the way humans learn.

The agency pointed out that AI/ML could be used to predict which individuals would respond better to treatments and who would be at greater risk of side effects. Conversational AI chatbots could be used to answer people’s questions about taking part in clinical trials.

Digital or computerized “twins” of a patient can be developed to model a medical intervention, providing feedback on the treatment before the patient even receives it, the FDA noted.

In 2021, more than 100 drug and biological applications submitted to the agency had AI/ML components.

While highlighting the benefits of these technologies, the FDA also acknowledged that there are “challenges” involved in using AI/ML in drug development. These include ethical concerns, cybersecurity risks, and improper data sharing.

“There are also concerns with using algorithms that have a degree of opacity, or algorithms that may have internal operations that are not visible to users or other interested parties. This can lead to amplification of errors or preexisting biases in the data,” said the FDA.

Chemical Killers

Just as AI offers hope for drug development, it also opens up the possibility of creating toxins. This issue was brought to light by a team of scientists at North Carolina-based Collaborations Pharmaceuticals.

The team trained an AI on a set of molecules that included environmental toxins and pesticides. The AI was tasked with calculating how to modify the molecules to make them more deadly.

Within just six hours, the AI generated 40,000 potential killer molecules. These included VX, a banned nerve agent that was used to assassinate North Korean leader Kim Jong Un’s half-brother.

In an interview with the Financial Times, Filippa Lentzos, co-director of the Centre for Science and Security Studies at King’s College London, said that when the company’s founder presented the findings at a conference, the audience was shocked.
“It was a jaw-drop moment … Everyone was thinking, ‘This is awful. What do we do now?’ The potential for misuse has always been a concern in the life sciences. But with AI, that potential is on steroids,” she said.

Algorithmic Discrimination, AI Drug Firms

In the May 10 post, the FDA also expressed worries about “algorithmic discrimination,” which occurs when automated systems “favor one category of people over other(s).” The agency said it intends to “advance equity” when using AI/ML techniques.
In a paper (pdf) titled “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products,” the FDA states that AI/ML could be used to improve “health equity.”

AI/ML can be used to mine vast amounts of data from clinical trials and other sources to match individuals with trials, the paper notes. However, this also raises the issue of representation, it said.

“While these algorithms are trained on high volumes of patient data and enrollment criteria from past trials, it is important to ensure adequate representation of populations that are likely to use the drug (e.g., gender, race, and ethnicity) as matching algorithms are created and, when used, to confirm that equitable inclusion was achieved during the recruitment process.”

A couple hundred companies are currently involved in the sector. An October report by McKinsey identified almost 270 firms working in the AI-driven drug discovery industry, more than 50 percent of them located in the United States. McKinsey identified Southeast Asia and Western Europe as emerging “key hubs.”

“The number of AI-driven companies with their own pipeline is still relatively small today (approximately 15 percent have an asset in preclinical development),” it said.

“Those with new molecular entities (NMEs) in clinical development (Phase I and II) have predominantly in-licensed assets or have developed assets using traditional techniques.”

Conflicting Stances

Unlike other fields of science, the development of artificial intelligence has drawn widely conflicting takes, with some experts advocating for the technology and others warning against its use and progress.
According to the UK’s health secretary, Steve Barclay, AI-based tech can bring about better and faster care for the elderly while also reducing the workload on health care professionals.

“I think there’s a space to look at what is working in other health care systems. Do we have the humility to learn from that and see what we can adopt? There may be space to do that within robots, but it may also be particularly around AI,” he said in an interview with The Telegraph.

“Looking at the examples in Japan where it may be on robotics, it may be on artificial intelligence, it may be other areas where technology is helping both to support patients in the care home, in their own home, and also avoid some of those visits to emergency departments, because once frail, elderly people are admitted to hospital, often they end up staying for a significant length of time.”

On the opposite end of the AI spectrum, Eliezer Yudkowsky, a decision theorist and leading AI researcher, predicts that, in the absence of meticulous preparation, AI will have vastly different demands from humans and, once self-aware, will “not care for us” or any other sentient life. “That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how,” he said. This is why he is calling for an absolute shutdown.

Without a human approach to life, the AI will simply consider all sentient beings to be “made of atoms it can use for something else,” and there is little humanity could do to stop it. Yudkowsky compared the scenario to “a 10-year-old trying to play chess against Stockfish 15.” No human chess player has been able to beat Stockfish; doing so is widely considered impossible.

The industry veteran asked readers to imagine AI as not being contained within the confines of the internet.

“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.”

The AI could expand its influence beyond physical networks and “build artificial life forms” using laboratories where proteins are produced from DNA strings.

The end result of building an all-powerful AI, under present conditions, would be the death of “every single member of the human species and all biological life on Earth,” he warned.