It’s time to get real about artificial intelligence applied to our drug development and manufacturing outsourcing milieu – practical supply-chain enhancements from AI-generated insights that can be implemented right now.
A helpful source in this undertaking is Professor Tinglong Dai of Johns Hopkins Carey Business School. Dai is also affiliated with the Johns Hopkins Workgroup on AI and Healthcare (part of the Hopkins Business of Health Initiative); Johns Hopkins Center for Digital Health and Artificial Intelligence; and Johns Hopkins Institute for Data-Intensive Engineering and Science.
As a leading expert in AI and global supply chains, he has been interviewed by and written articles for outlets such as Bloomberg, CNBC, Financial Times, New York Times, and the Wall Street Journal.
And now Outsourced Pharma.
There’s a lot to cover from my in-depth conversation with Dai, but let’s keep five of his key points in mind as we move forward.
The technology is already among us and can be adapted to improve current supply-chain monitoring and analysis.
AI technology will be relatively easy to implement.
Development/implementation will not be expensive – in fact, AI can save untold dollars by mitigating various supply-chain challenges and increasing productivity.
AI can help build trust between drug sponsors and CDMOs, and regulators and manufacturers.
Most importantly, applications of AI will increase worker, product, and ultimately patient safety.
We’ll cover these points over the course of three editorials.
Professor Dai speaks from the perspective of business realities – he’s concerned with P&Ls, and the unique circumstances of drug developers and CDMOs. He extols eminently implementable solutions. He's an academic capitalist at his best.
But first, he’s our ghostbuster.
Three Ghosts Of AI
Dai says the biggest myth – or darkest shadow over the technology – is the lingering belief that practical AI solutions or enhancements for manufacturing are still “futuristic.”
I’ll define practical AI here as the use of the analytical and predictive power of software programs fed with unstructured data – gleaned from employee activities, processing machines, and the materials themselves – to improve outcomes.
Unstructured data includes audio files (picked up by microphones), photos (from cameras), and other information (detected by sensors) that lack a “pre-defined model” for organizing it into insights or analysis ... until an AI application is built to do so.
“Around 2011, a deep-learning revolution started a trend for building advanced software solutions on networks that provided learning opportunities that are immediately practical and useful,” says Dai. The revolution was driven by widespread use of faster CPUs and GPUs and rapid developments in neural networks.
A second specter over AI, which bolsters the first, is that customized business-problem solutions are difficult to develop, and require hiring a team of knowledgeable specialists.
“I wouldn't frame it as anyone can develop AI today,” says Dai with a laugh, “but my daughter in fourth grade developed an ‘AI tool’ for the game rock-paper-scissors.”
But seriously, he says, all the resources – hardware, software – already exist on the internet for developing programs, “and many, if not most of them are free.”
Dai asserts that adapting or generating AI applications does not require a lot of heavy coding. “There are abundant tools for this out there as well.”
In fact, Dai reasons, to improve manufacturing quality standards and catch operator errors, machine failures, or other negative anomalies, inventing your own AI applications “is the lowest-cost tool.”
What's essential, he says, "is you absolutely do need a well-defined problem.”
“And generally speaking,” he adds, “the pharmaceutical supply-chain space has quite a lot of problems.”
Finally, there is the myth of data itself.
You do need a real-life, reliable data set, says Dai, to get you started with a pilot study of some kind. However, he insists, “You don’t need to spend millions of dollars to develop a large database for AI,” as so many believe is necessary to effectively harness AI's analytical and predictive power – and its promise.
Interjecting here, I’ve heard those in our industry say artificial intelligence/machine learning may work for drug discovery, which generates mountains of data, but for development or manufacturing – particularly the short runs of reduced quantities needed by those chasing rare and orphan diseases – there simply is not enough data generated to gain elevated insights into processes and production.
“It’s simply not true,” avers Dai.
Modern AI tools, especially deep-learning technologies, “allow people to build in high accuracy, high reliability, even with, for example, a limited number – say hundreds – of images from a material under processing.”
“I would say that people can always start somewhere, even if they do not necessarily have large data sets.”
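Dai's small-data claim usually rests on a technique called transfer learning: when features are extracted by a network already pretrained on large image collections, even a few hundred labeled examples can be separated by a very simple classifier. A hedged sketch of that last step, in Python – the "feature vectors" below are simulated stand-ins, not real image data, and the nearest-centroid rule is just one simple classifier a pilot project might try:

```python
import random
import statistics

# Illustrative only: simulate 2-D "feature vectors" standing in for
# features a pretrained network would extract from process images.
# Real transfer-learning features would be higher-dimensional.
random.seed(42)

def make_samples(center, n, spread=0.5):
    """Simulate n feature vectors scattered around a class center."""
    return [(random.gauss(center[0], spread),
             random.gauss(center[1], spread)) for _ in range(n)]

normal = make_samples((0.0, 0.0), 200)       # ~200 "normal" images
defective = make_samples((2.0, 2.0), 200)    # ~200 "defective" images

def centroid(points):
    """Average position of a set of feature vectors."""
    xs, ys = zip(*points)
    return (statistics.fmean(xs), statistics.fmean(ys))

c_normal, c_defective = centroid(normal), centroid(defective)

def classify(point):
    """Nearest-centroid rule: assign the label of the closer center."""
    d_n = (point[0] - c_normal[0]) ** 2 + (point[1] - c_normal[1]) ** 2
    d_d = (point[0] - c_defective[0]) ** 2 + (point[1] - c_defective[1]) ** 2
    return "normal" if d_n < d_d else "defective"

# Held-out samples the classifier never saw during "training"
held_out = make_samples((0.0, 0.0), 50) + make_samples((2.0, 2.0), 50)
truth = ["normal"] * 50 + ["defective"] * 50
accuracy = sum(classify(p) == t for p, t in zip(held_out, truth)) / len(truth)
print(f"held-out accuracy: {accuracy:.0%}")
```

The point of the sketch is Dai's: once the hard representational work is borrowed from a pretrained model, a few hundred examples – not millions – can be enough to get a useful pilot off the ground.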
Ghosts In The Machine
A promise: We will get to specific AI use-cases for pharmaceutical development and manufacturing in part two. I’ll conclude today’s opening salvo on AI in outsourcing with this thought.
Listening to Dai, I began to think of AI as revealing “the ghost in the machine” – the images of malfunctioning components, the unusual sounds coming from the facility floor, and the revealed errant behavior of operators in labs and factories. To wit:
Why did the rubber gasket for the rotor blade break down and throw tiny fragments into the batch – well ahead of its approved replacement schedule?
Perhaps it is not until image data, maintenance-scheduling records, overall run times and blade speeds, and the effects of different kinds of materials – the unstructured data discussed above, and in more detail soon – are fed into an AI application that today’s unknowns become known, predictable, and events preventable.
“That compressor sounds strange this morning,” says an experienced plant engineer to his floor supervisor. “What do you mean?” replies the supervisor. “It seems to be running ok. Check it out next week after these batches are finished up.”
The engineer and supervisor are mystified. AI might not have been. Technology and software analyzing the different sounds that equipment makes could send alerts of anomalies – the compressor may not last through the rest of those batches.
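A minimal sketch of how such an alert might work – purely illustrative, with made-up "audio frames" standing in for real microphone data, and a simple loudness (RMS) statistic standing in for the richer spectral features a production system would use:

```python
import math

# Hypothetical sketch: flag an equipment-sound anomaly by comparing a
# new reading's loudness (RMS) against a baseline built from known-good
# runs. All numbers below are invented for illustration.

def rms(samples):
    """Root-mean-square amplitude of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def build_baseline(frames):
    """Mean and standard deviation of RMS over known-good frames."""
    values = [rms(f) for f in frames]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def is_anomalous(frame, baseline, threshold=3.0):
    """Alert when RMS drifts more than `threshold` std devs from baseline."""
    mean, std = baseline
    return abs(rms(frame) - mean) > threshold * std

# "Recordings" of the compressor running normally (simulated frames)
good = [[0.50, -0.52, 0.49, -0.51], [0.48, -0.50, 0.51, -0.49],
        [0.52, -0.49, 0.50, -0.50], [0.49, -0.51, 0.48, -0.52]]
baseline = build_baseline(good)

quiet_hum = [0.50, -0.50, 0.51, -0.49]   # sounds like the baseline
grinding = [1.40, -1.35, 1.45, -1.38]    # much louder: a failing part?

print(is_anomalous(quiet_hum, baseline))  # False – no alert
print(is_anomalous(grinding, baseline))   # True – send the warning
```

A real system would, as Dai describes, learn its baseline from unstructured audio collected on the floor rather than a hand-set statistic – but the shape of the logic is the same: model "normal," then flag departures from it before the batch is lost.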
How could we have detected that an operator working on two processing trains was actually cross-contaminating one line with the other?
The operator could have been continuously monitored, and AI could have generated a warning of suspected improper cleaning protocol.
How does all this get accomplished? It's no mystery. Part two with Professor Dai is on the way.