October 16, 2025

FDA’s AI Tool for Medical Devices Faces Functional Challenges and Ethical Scrutiny

The FDA’s AI tool underperforms on fundamental medical device evaluations.

The U.S. Food and Drug Administration (FDA) is testing a new artificial intelligence (AI) tool aimed at streamlining the review and approval process for medical devices such as pacemakers and insulin pumps. However, according to two individuals familiar with the matter, the AI system is currently grappling with several basic operational issues.

Still in its beta phase, the tool is reportedly plagued by bugs, struggles with document uploads, and lacks integration with the FDA’s internal systems. It also cannot access the internet, preventing it from retrieving the latest research or any content behind paywalls. Internally referred to as CDRH-GPT, the tool is designed to assist the Center for Devices and Radiological Health (CDRH), which is responsible for ensuring the safety of essential diagnostic and therapeutic devices like CT scanners and X-ray machines.

The FDA’s adoption of AI comes at a time of strained resources. The Department of Health and Human Services (HHS), which oversees the FDA, recently implemented wide-ranging layoffs. Although many device reviewers were retained, much of the backend support critical for timely device evaluations was eliminated.

Device reviewers typically analyze extensive datasets from animal studies and clinical trials—a process that can take months or even more than a year. In theory, an AI system could help shorten this timeline significantly.

Yet, some experts caution that the agency’s push for AI may be outpacing the technology’s current capabilities. Since assuming office on April 1, FDA Commissioner Dr. Marty Makary has advocated for increased AI integration across all divisions. He recently set a June 30 deadline for broader AI deployment and claimed the agency was ahead of schedule. However, the two sources suggest that CDRH-GPT is far from ready and may struggle to meet that deadline in its intended form.

Arthur Caplan, head of the Division of Medical Ethics at NYU Langone, expressed concerns over the FDA’s rapid adoption of AI. “I worry they may be pushing AI too fast, driven by necessity rather than readiness,” he said. “This technology still requires human oversight. It isn’t advanced enough to critically assess applicants or engage in meaningful interactions.”

When contacted, the FDA directed all media inquiries to the HHS, which has yet to respond to a request for comment.

Meanwhile, another AI tool named Elsa has been deployed agency-wide for basic administrative functions such as summarizing adverse event reports. According to Dr. Makary, the tool has significantly reduced task completion times. “One reviewer said Elsa did in six minutes what would typically take two to three days,” he noted.

Despite these claims, internal feedback paints a more cautious picture. Sources describe Elsa as a promising initiative that is nevertheless being rolled out prematurely. While the tool represents progress, it still lacks the robustness needed for complex regulatory tasks. Tests conducted on Monday revealed that Elsa’s responses to inquiries about FDA-approved products were often incomplete or incorrect.

Whether CDRH-GPT will eventually be merged with Elsa or remain a standalone system remains unclear.

Beyond functionality, ethical concerns are also surfacing. Richard Painter, a law professor and former government ethics lawyer, questioned whether safeguards exist to prevent FDA officials from having financial interests in AI companies receiving federal contracts. “Conflicts of interest can severely undermine public trust in regulatory institutions,” he warned.

Internally, some FDA employees view AI not as a tool for support, but as a potential threat to their roles. “The agency is already under strain from layoffs and hiring freezes,” one insider noted. “There’s growing unease about whether AI is being seen as a solution or as a signal of workforce reduction.”

While the vision for AI-enhanced medical device reviews holds promise, the execution—as of now—appears to require significant refinement, oversight, and consideration of both technological and ethical implications.
