U.S. FDA Launches Agency-Wide AI Tool “Elsa” to Optimize Performance

The FDA launched Elsa, an agency-wide generative AI tool developed by Deloitte and based on Anthropic’s Claude, ahead of schedule and under budget. While intended to enhance internal efficiency and streamline regulatory reviews, the system has been widely criticized by staff as buggy, unreliable, and unsuited for scientific work. Experts and insiders warn that the rollout lacks sufficient guardrails, security transparency, and ethical oversight—raising concerns over the FDA’s rapid adoption of AI in critical public health contexts.

Stressed regulatory worker (U.S. FDA Launches AI Tool Elsa) - Credit - ChatGPT, The AI Track

U.S. FDA Launches AI Tool “Elsa” – Key Points

Launch & Purpose

  • Elsa launched in June 2025 as a generative AI assistant intended to support FDA staff across divisions.
  • Developed by Deloitte using Anthropic’s Claude, Elsa is designed for:
    • Clinical protocol review support
    • Summarizing adverse event reports
    • Drug label comparisons
    • Coding assistance for nonclinical databases
    • Identifying inspection targets
  • Elsa is deployed agency-wide—from scientific reviewers to investigators—as part of the FDA’s larger AI modernization strategy.
  • The tool operates within Amazon Web Services’ GovCloud, a secure federal infrastructure. While the FDA states Elsa avoids training on proprietary industry data, it hasn’t disclosed the full scope of its training datasets.

Aggressive Timeline & Claims

  • FDA Commissioner Marty Makary launched Elsa ahead of his June 30 deadline, stating the agency is “ahead of schedule and under budget.”
  • In an interview, Makary cited internal reports that Elsa helped a reviewer complete a two-day task in six minutes—a claim disputed by some staff.
  • Chief AI Officer Jeremy Walsh declared Elsa represents “the dawn of the AI era at the FDA,” with future plans for expanded generative AI integration.
  • The FDA has framed Elsa as a foundational technology for faster drug, food, medical device, and diagnostics review workflows.

Employee Concerns

  • Internal feedback, reported by NBC, Axios, STAT, and TechTarget, reveals systemic issues:
    • Elsa provides incomplete or incorrect summaries, especially when asked about FDA-approved products or public-facing information.
    • It struggles with basic functionality like uploading documents and answering user-submitted queries.
    • It’s not integrated with internal FDA systems and cannot access real-time or paywalled content, limiting its utility.
  • Staff at the Center for Devices and Radiological Health (CDRH) say a parallel tool, CDRH-GPT, remains in beta and faces similar limitations, raising concerns about the maturity of the agency's AI systems.
  • Many employees feel Elsa is only suitable for administrative support and not ready for regulatory or scientific tasks.
  • There is no formal policy framework or usage oversight. Ethics experts, such as Richard Painter (University of Minnesota), have raised concerns about conflicts of interest, warning that AI-related contracts could compromise regulatory integrity.
  • Some reviewers fear Elsa could eventually replace their roles. Layoffs and hiring freezes, including the loss of 3,500 staff and a proposed 25% cut to the HHS budget, have deepened job-security anxiety.

Technical Background

  • Since 2020, Deloitte has developed infrastructure for FDA’s AI tools:
    • Received $13.8 million to build the initial database of FDA documents.
    • In April 2025, was awarded $14.7 million to scale Elsa.
  • Elsa evolved from CDER-GPT, a pilot originally built by the Center for Drug Evaluation and Research; after cost-cutting, it was selected over CDRH-GPT and rebranded Elsa for agency-wide rollout.

Security Context

  • According to the FDA, Elsa's GovCloud hosting provides:
    • No internet connectivity
    • Secure internal access for FDA employees
    • Exclusion of proprietary submission data from training
  • However, Axios and NBC report that the FDA has not disclosed training data details nor implemented robust AI-specific cybersecurity protocols.

Why This Matters

  • Public Safety Risk: Flawed summaries or decision support from Elsa in clinical or drug review tasks could endanger health outcomes.
  • Governance Deficit: Elsa launched without guardrails, oversight bodies, or ethical frameworks—concerning for a tool embedded in public health decision-making.
  • Transparency Gaps: The AI’s training base, evaluation methodology, and technical documentation remain undisclosed, limiting accountability.
  • Labor Displacement: AI integration amid layoffs and resource strain stokes fears of job replacement, eroding morale and institutional trust.
  • Ethics Red Flags: Experts stress the need for independent safeguards to prevent conflicts of interest in AI contracting or tool usage in regulatory settings.
  • Federal Trend Indicator: Elsa reflects a broader U.S. federal shift toward fast, top-down AI integration without adequate operational maturity or public safeguards.
