In a significant step toward integrating artificial intelligence (AI) into healthcare regulation, the U.S. Food and Drug Administration (FDA) and AI research company OpenAI have been exploring the use of generative AI to streamline drug evaluations.
This initiative, informally labeled “cderGPT” after the FDA’s Center for Drug Evaluation and Research (CDER), aims to reduce inefficiencies and modernize the lengthy and costly drug approval process.
FDA Commissioner Marty Makary has emphasized the need to modernize regulatory procedures.
He pointedly questioned the traditional timeline, noting, “Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other technological advancements?”
In a press release issued on May 6, 2025, the FDA announced the completion of its first agency-wide AI-assisted scientific review pilot.
The project used a secure, non-internet-connected large language model (LLM) to summarize complex scientific data from new drug applications.
The review focused on areas such as clinical trial design, pharmacology, and manufacturing data.
While the LLM outputs required review and editing by FDA experts, the agency reported that the tool significantly reduced the time needed to compile summaries and review documents.
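The FDA has not published implementation details, but the described workflow, a sealed model drafting summaries that experts then verify, can be pictured as a simple call to a locally hosted model. Below is a minimal Python sketch, assuming a hypothetical internal server exposing an OpenAI-compatible chat API (common to local inference servers such as llama.cpp and vLLM); the URL, model name, and prompt are illustrative, not the agency's actual system:

```python
import requests

# Hypothetical air-gapped LLM server on an internal network (illustrative only).
LOCAL_LLM_URL = "http://10.0.0.5:8000/v1/chat/completions"

def summarize_section(section_title: str, section_text: str) -> str:
    """Ask the local model for a draft summary of one application section.

    The output is a draft only; a human reviewer must verify every claim
    against the source document before it is used.
    """
    prompt = (
        f"Summarize the following '{section_title}' section of a new drug "
        f"application. Flag any missing data or internal inconsistencies.\n\n"
        f"{section_text}"
    )
    response = requests.post(
        LOCAL_LLM_URL,
        json={
            "model": "local-review-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.0,  # deterministic output aids reproducibility
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```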
These developments are being led by Jeremy Walsh, the FDA’s newly appointed chief AI officer.
Walsh is overseeing discussions with AI developers like OpenAI and working across agency divisions to evaluate how LLMs might support internal regulatory work, particularly for routine tasks that do not compromise the FDA’s scientific integrity.
AI’s potential in drug evaluation is substantial, especially through automating data analysis and accelerating the assessment of clinical trials.
Experts highlight that routine tasks, such as scanning vast data sets for trends and inconsistencies, could greatly benefit from AI’s speed and analytical capacity.
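To make that kind of routine scan concrete, consider a toy example: flagging out-of-range lab values and off-schedule visits in a small trial dataset. The columns, thresholds, and data below are invented for illustration and do not represent any real review tool:

```python
import pandas as pd

# Toy clinical-trial dataset; column names and values are illustrative.
df = pd.DataFrame({
    "subject_id": [101, 102, 103, 104],
    "arm": ["drug", "placebo", "drug", "drug"],
    "alt_u_per_l": [31, 28, 412, 35],   # liver enzyme (ALT), U/L
    "visit_week": [4, 4, 4, 52],
})

# Flag ALT values more than 3x the upper reference limit (~56 U/L),
# the kind of outlier scan a reviewer might otherwise perform by hand.
outliers = df[df["alt_u_per_l"] > 3 * 56]

# Flag scheduling inconsistencies: subjects whose recorded visit does not
# match the protocol's planned week-4 safety visit.
off_schedule = df[df["visit_week"] != 4]

print(outliers)
print(off_schedule)
```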
Despite enthusiasm from proponents, significant concerns remain regarding the deployment of AI in sensitive regulatory spaces.
Former FDA Commissioner Robert Califf has previously cautioned about the ambiguity surrounding AI’s role in the review process, underscoring the importance of transparency and accountability.
Critics warn about the inherent risks of AI technologies, particularly their tendency to generate persuasive yet inaccurate or misleading information, a failure mode commonly known as hallucination.
This possibility poses considerable risk in pharmaceutical regulation, where accuracy and reliability are not just essential but potentially life-saving.
While the FDA has committed to a “human-in-the-loop” approach for reviewing AI-generated content, watchdogs and transparency advocates are calling for formal standards, robust data validation, and public engagement before AI tools play a larger role in regulatory decision-making.
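One way to picture a “human-in-the-loop” safeguard in software is a hard approval gate: AI output is stored as a draft and cannot be released until a named reviewer signs off. The following Python sketch is a hypothetical illustration of that pattern, not a description of any FDA system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftSummary:
    """An AI-generated draft plus its review state (illustrative schema)."""
    section: str
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def sign_off(draft: DraftSummary, reviewer: str, verified: bool) -> DraftSummary:
    """Record a human decision; only a named reviewer can approve a draft."""
    draft.approved = verified
    draft.reviewer = reviewer
    return draft

def release(draft: DraftSummary) -> str:
    # Hard gate: unreviewed or rejected AI output never leaves the system.
    if not draft.approved:
        raise PermissionError(f"Section '{draft.section}' lacks reviewer sign-off")
    return draft.text
```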
As the FDA continues to explore AI integration, balancing innovation with patient safety and scientific rigor remains paramount.
The agency’s pilot program represents a cautious but important first step, and its outcomes may shape the future of drug regulation not just in the U.S. but globally.