FDA's First cGMP Enforcement Action On AI Misuse In Drug Manufacturing
By Kishore Hotha, Ph.D., MBA, president, Dr. Hotha’s Life Sciences LLC

The quote that should keep you up at night: “The AI never told us it was required.”
That was a drug manufacturer’s response to FDA investigators when asked why process validation had not been conducted before product distribution. Not a documentation gap. Not a training deficiency. The firm told FDA it did not know that process validation was a legal requirement — because the artificial intelligence tool it relied on never flagged it.
If you run a CDMO, a contract testing laboratory, or any pharmaceutical manufacturing operation that has started integrating AI tools into your quality and compliance workflows, this should give you pause. On April 2, 2026, the FDA did something it has never done before: it issued a warning letter with a dedicated section titled “Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing,” treating AI misuse as its own stand-alone cGMP deficiency.
The warning letter, issued to Purolea Cosmetics Lab (FEI 3011669383),1 describes a manufacturer that used AI agents to generate drug product specifications, procedures, SOPs, and master production or control records. The firm’s stated goal was to use AI to help comply with FDA regulations. But the quality unit did not adequately review those AI-generated outputs for accuracy or cGMP compliance.
The FDA cited two distinct failures. The first was a violation of 21 CFR 211.22(c): the firm's quality unit failed to review the AI-generated documents to ensure they were accurate and actually compliant with cGMP. The quality unit's responsibility to review and approve quality-affecting documents doesn't go away because an algorithm wrote them instead of a person. The second was a violation of 21 CFR 211.100: process validation had not been conducted before distribution. And here is the remarkable part. When investigators informed the firm of this requirement, the firm responded that it was unaware of the legal requirement because the AI agent it used never informed them it was required.
Read that again. A manufacturer distributed drug products without process validation — one of the most foundational requirements in the cGMP framework — and explained that their AI tool had a knowledge gap. The firm had effectively delegated its regulatory awareness to a large language model. When the AI’s knowledge was incomplete, the firm inherited the gap wholesale. And the FDA made clear that this is not a defense. It is a deficiency.
The warning letter sets out FDA’s expectations in unusually plain language: if a firm uses AI to support cGMP activities — including developing procedures and specifications — any output or recommendation from an AI agent must be reviewed and cleared by an authorized human representative of the firm’s quality unit. This is not a new standard. It is an application of existing cGMP requirements to a new technology. What is new is FDA’s willingness to explicitly identify AI as a source of compliance failures and to carve out a dedicated enforcement heading for it. AI is a tool. The quality unit retains accountability. Every AI-generated output used in cGMP activities must undergo human review before it enters the quality system.
This enforcement action did not come out of nowhere. The FDA has been building toward it for years, through CDER’s March 2023 discussion paper on AI in drug manufacturing, the January 2025 draft guidance introducing a seven-step credibility assessment framework for AI used in regulatory decision-making, a February 2025 warning letter to Exer Labs for marketing an AI-based device without the proper regulatory pathway, and January 2026’s Guiding Principles of Good AI Practice in Drug Development. The trajectory follows FDA’s well-established playbook: educate, set expectations, then enforce. The education and expectation-setting phases are now complete. We are in the enforcement phase.
Why This Matters More For The Outsourced Pharma Ecosystem
CDMOs, contract testing laboratories, and contract packagers operate as extensions of the manufacturer under FDA’s regulatory framework. The FDA has been explicit about this: the owner is responsible for the quality of drugs regardless of agreements in place with a contract facility. The warning letter itself restates this principle directly. Now layer AI on top of that accountability structure.
When a contract testing lab uses an AI tool to draft analytical method specifications, the sponsor retains ultimate quality responsibility, but the contract lab bears the cGMP compliance obligation at the point of execution. When a CDMO uses AI to assist with batch record generation or deviation investigations, the same dual accountability applies. This creates questions that quality agreements have not historically addressed:
- Does your contract partner use AI in any cGMP-relevant activity, and do you know?
- What human review controls are in place for AI-generated outputs?
- Is AI use disclosed in the quality agreement, and is it auditable?
- If AI contributes to a specification or procedure that later proves noncompliant, where does accountability sit?
- How do you audit AI use during supplier qualification or routine audits?
If your quality agreements do not yet address AI, they need to. If your supplier audit checklists do not include AI use assessment, they are incomplete. And if you are a contract organization that uses AI tools without disclosing it to your clients, you are carrying risk that just became significantly more visible.
What to do now comes down to six practical steps:
- Inventory your AI touchpoints — every point where AI tools, including commercial LLMs, AI-assisted document generators, predictive analytics platforms, and automated decision-support systems, interact with cGMP-regulated activities such as specification development, SOP drafting, deviation investigation support, CAPA recommendations, batch record generation, stability trending, and OOS investigation assistance.
- Establish documented QU review procedures requiring human review and approval of all AI-generated or AI-assisted outputs before they are incorporated into cGMP records or used to inform quality decisions, with reviewers who have subject-matter expertise to evaluate accuracy and regulatory compliance.
- Train your people on AI limitations. The Purolea case shows that firms cannot assume AI tools possess comprehensive regulatory knowledge; training should explicitly address the risk of hallucinated, outdated, or incomplete regulatory guidance.
- Update your quality agreements to address AI use: disclosure requirements, permitted use cases, human oversight expectations, and audit rights specific to AI-assisted activities.
- Add AI to your audit programs. Internal audits, supplier audits, and management reviews should now include assessment of AI tool governance in cGMP operations.
- Build the audit trail. Document which AI tools are used, for what purpose, what outputs they generated, who reviewed them, and what modifications were made. This is the evidence of controlled use that the FDA will look for.
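To make the final step concrete, the elements of a controlled-use audit trail could be captured in a simple structured record. This is an illustrative sketch only — neither the warning letter nor FDA guidance prescribes a format, and every field and function name here is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record for step 6 ("build the audit trail").
# Field names are illustrative, not drawn from any FDA document.
@dataclass
class AIOutputReviewRecord:
    tool_name: str              # which AI tool produced the output
    cgmp_activity: str          # e.g., "SOP drafting", "stability trending"
    output_summary: str         # what the tool generated
    reviewer: str               # authorized quality unit representative
    review_date: date
    modifications: List[str] = field(default_factory=list)
    approved: bool = False      # QU sign-off before use in the quality system

    def is_release_ready(self) -> bool:
        """An AI-generated output enters the quality system only with a
        named reviewer and explicit approval on record."""
        return self.approved and bool(self.reviewer)

# Example: a reviewed and approved AI-assisted SOP draft.
record = AIOutputReviewRecord(
    tool_name="commercial LLM (example)",
    cgmp_activity="SOP drafting",
    output_summary="Draft cleaning validation SOP",
    reviewer="J. Doe, QA",
    review_date=date(2026, 4, 2),
    modifications=["Added process validation prerequisites per 21 CFR 211.100"],
    approved=True,
)
```

The point of the structure, whatever form it takes in practice, is that each of the evidentiary elements named above — tool, purpose, output, reviewer, and modifications — is recorded before the output is used.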
I want to be clear: this is not an argument against AI in pharmaceutical manufacturing. AI offers genuine value — efficiency in document drafting, pattern recognition in deviation analysis, acceleration of literature reviews, decision support in complex analytical assessments. I use AI tools in my own work. Most of us do, whether we acknowledge it or not. But value without oversight is risk. And in a cGMP environment, uncontrolled risk is a regulatory finding.
FDA has now shown us exactly what that finding looks like: a dedicated section, a named deficiency, and a clear expectation that the quality unit — not the algorithm — is accountable. Your clients will ask about this. Your auditors will ask. FDA already has.
Reference:
- FDA Warning Letter, Purolea Cosmetics Lab, 722591, April 2, 2026. Available at: https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/warning-letters/purolea-cosmetics-lab-722591-04022026
Disclaimer: This article is for informational and educational purposes only and does not constitute legal or regulatory advice. It discusses publicly available FDA enforcement documents and does not make any independent claims about the companies or products mentioned.
About The Author:
Kishore Hotha, Ph.D., is the president of Dr. Hotha’s Life Sciences LLC, a global consulting firm. He is an accomplished scientific and business leader in the pharmaceutical, biotech, and CDMO sectors, spanning drug development from early-stage research to commercialization. He has made significant contributions to numerous IND, NDA, and ANDA submissions for drug substances and products across small and large molecules, including ADCs, oligonucleotides, and peptides, through commercialization. Hotha holds a Ph.D. from JNT University and an MBA from SNHU. Prior to his current role, he served as the global VP at Veronova and global director at Johnson Matthey, with pivotal roles at Lupin and Dr. Reddy’s. He has contributed to over 80 publications and serves on various editorial boards.