AI in Health Care: What Recent FDA Approvals Mean for Patients
- Agasya
- 4 days ago
- 5 min read
The U.S. Food and Drug Administration (FDA) has been rapidly expanding its authorization of artificial intelligence-enabled medical devices, especially in diagnostic imaging. As of mid‑2025, more than 1,000 AI/ML‑powered medical devices have received FDA authorization, with radiology accounting for nearly 88% of those clearances (source).
So, What’s New in 2025?
In May 2025, the FDA added 191 AI/ML‑enabled devices in a single update, bringing the total to around 882 at that time (source). Most of these were for radiology use. By late June, the count exceeded 1,000 devices (source).
Major companies such as GE HealthCare, Philips, Siemens, Canon, Fujifilm, Lunit, Viz.ai, Aidoc Medical, and many others now lead the charge on new clearances for AI/ML-enabled programs and devices (source).
Innovations That Matter for Patients
Aidoc’s BriefCase‑Triage
Aidoc Medical launched BriefCase‑Triage, a platform that flags potential emergencies like pulmonary embolism or intracranial hemorrhage in radiology scans, helping doctors prioritize urgent cases. The tool was first cleared by the FDA in 2019 (source), and the most recent FDA decision was issued on May 30th of this year (source). It runs alongside usual workflows and delivers alerts through compressed previews of images. Rather than acting as a diagnostic tool, it serves as an early-warning system.
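To make that concrete, here is a minimal sketch of the triage pattern described above - not Aidoc's actual software; the class names, scores, and threshold are all hypothetical - showing how a flagging layer might re-rank a radiology worklist without ever rendering a diagnosis.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical operating point; real deployments tune this per site and per finding.
ALERT_THRESHOLD = 0.85

@dataclass
class Study:
    study_id: str
    arrived_at: datetime
    urgency_score: float | None = None  # model's suspicion of an urgent finding, if scored

def prioritize(worklist: list[Study]) -> list[Study]:
    """Surface AI-flagged studies first; leave everything else in arrival order.

    The triage layer only reorders the reading queue and raises alerts.
    A radiologist still reads every study, so a false positive costs
    reading order, not a missed diagnosis.
    """
    flagged = [s for s in worklist
               if s.urgency_score is not None and s.urgency_score >= ALERT_THRESHOLD]
    routine = [s for s in worklist
               if s.urgency_score is None or s.urgency_score < ALERT_THRESHOLD]
    flagged.sort(key=lambda s: s.urgency_score, reverse=True)  # most suspicious first
    routine.sort(key=lambda s: s.arrived_at)                   # then first-come, first-served
    return flagged + routine
```

The design point mirrors how the article describes BriefCase‑Triage: the model never decides what the finding is; it only changes the order in which humans look.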
Notably, in early 2025, Aidoc also secured authorization for its Rib Fractures triage tool, built on the CARE1 Foundation Model. This is a major step forward in clinical AI (source).
Qure.ai’s qXR and qER
In 2018, Qure.ai's qXR became the first AI-based chest X-ray interpretation tool to be certified, and qER gained FDA clearance in 2020 (source). qXR is for chest X‑rays and TB screening, while qER is for non‑contrast CT brain imaging. Both tools also hold WHO and CE certifications, and they can speed decision-making in settings that lack specialist radiology support.
Why It Matters
Radiology is clearly the frontier for FDA-cleared AI tools: of 1,016 total authorizations from 1995 through 2024, 84.4 percent of devices use imaging inputs, and radiology panels reviewed 88 percent of those devices. Over the slightly earlier 1995–2023 window, 692 of 903 AI devices were in radiology (76.6 percent of the total) (source). The practical impact of this surge: AI tools can accelerate diagnosis for urgent cases like strokes and pulmonary embolisms, reduce bottlenecks, and support overburdened clinicians by flagging high-risk findings for immediate review.
What to Keep Watching For
For patients: Ask whether your doctor or hospital uses these AI tools - especially if you're undergoing diagnostic imaging. Knowing that AI flagged a critical finding could influence your follow-up care and keeps you informed about how decisions about your health are made.
For clinicians and health systems: Federal reimbursement lags behind regulatory approvals. Even though hundreds of tools now exist, CMS reimburses only a small fraction of them, making it important to advocate for coverage updates. It also helps to share real‑world performance data and push for more validation studies to be published, building the trust these programs need for wider adoption.
For policymakers: Press the FDA to resume regular updates to its public AI registry and to ensure transparency in device summaries - especially regarding demographic bias, given that only a tiny fraction of approvals report the race, ethnicity, or socioeconomic details that providers need to understand (source).
Challenges
Despite the rush of AI/ML-enabled medical tools entering health care right now, regulation and real-world implementation remain far from perfect. One of the biggest concerns is transparency. The FDA long maintained a publicly accessible list of AI/ML-cleared devices, but that registry stopped being updated in mid‑2025. According to STAT News, even as new devices are authorized - some of which could significantly impact patient care - the FDA has not released consistent updates, leaving stakeholders in the dark about what tools are available, how they work, how they will be used, and how well they might perform in diverse populations (source). This lack of transparency raises red flags, especially in an evolving space that many agree should be tightly regulated.
Further complicating the situation is the limited demographic information included in FDA summaries of AI-enabled tools. A study published in npj Digital Medicine in October 2024 reviewed hundreds of FDA-cleared AI devices and found that very few submissions included any data on patient race, ethnicity, or socioeconomic status (source). In a country with significant health disparities, omitting this information risks baking bias into systems that are meant to improve care. If AI algorithms are trained and tested primarily on white or high-income populations, they may not perform as well or as safely for marginalized groups.
There is also the problem of validation and post-market surveillance. Many of these tools are authorized through the FDA's 510(k) pathway, which allows clearance based on "substantial equivalence" to already-approved devices, and some researchers argue that this pathway does not demand rigorous enough testing for tools that make serious, high-stakes predictions. According to a 2025 RAPS (Regulatory Affairs Professionals Society) analysis, around 4.8% of all AI/ML medical devices approved between 1995 and 2023 were later subject to recalls, some of them less than two years after market entry (source). Against the 903 devices cleared in that window, that works out to roughly 40 recalled devices - a number that might sound small, but in high-risk environments like emergency medicine or oncology, even rare system failures can lead to real harm.
Another concern is the use of "foundation models" in healthcare AI. These are large, general-purpose machine learning (ML) systems that can be fine-tuned for specific tasks, much as GPT models are adapted for various applications. Aidoc's recent Rib Fractures triage tool, for example, is powered by a foundation model named CARE1 that can be repurposed for other diagnostic tasks (source). While this approach compresses development timelines ("from years to weeks"), it also introduces new oversight challenges. Foundation models often rely on massive data sources and show emergent behaviors (source), meaning regulators, hospital systems, and even developers might not fully understand all the ways the models function or fail.
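As a rough illustration of that fine-tuning pattern - a generic sketch under assumed names, not CARE1's actual architecture - a large pretrained imaging backbone can be frozen and reused, with only a small task-specific head trained for each new application:

```python
import torch.nn as nn

def build_task_model(backbone: nn.Module, feature_dim: int,
                     num_classes: int = 2) -> nn.Module:
    """Adapt a pretrained foundation backbone to one diagnostic task.

    The expensive, general-purpose feature extractor is trained once on
    massive data; each new task (rib fractures, hemorrhage, ...) only adds
    and trains a comparatively tiny classification head. This reuse is what
    vendors mean by going "from years to weeks" per application.
    """
    for param in backbone.parameters():
        param.requires_grad = False  # freeze shared features; only the head learns
    head = nn.Linear(feature_dim, num_classes)  # task-specific classifier
    return nn.Sequential(backbone, head)
```

The flip side, from an oversight perspective, is that every downstream tool inherits whatever data gaps or quirks live in the shared backbone - which is exactly why emergent behavior in foundation models worries regulators.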
Navigating the Future of AI in Health Care
In many ways, we've crossed a threshold: AI is no longer a future promise but an active part of clinical decision-making. New tools can detect polyps or flag brain bleeds in seconds, potentially saving lives in emergency rooms. In underserved regions, X-ray systems can help detect conditions like TB even when no radiologists are available. For stretched health systems, these tools may ease the workload without sacrificing quality. But this potential will only be realized if the tools are implemented with care. Clinicians need to be trained not just in how to use AI, but in when not to rely on it. Hospitals and insurers must be transparent about where algorithms are applied - especially if they influence diagnoses, treatments, or reimbursements. And patients deserve to know when an AI played a role in their care, and whether it was tested on populations like them.
The FDA, too, has an extremely important role. While it has built pathways for approval, it now needs to maintain accountability with clearer databases, stronger standards for fairness, and rigorous post-market review.
The future of medical AI is here, but like any tool, its value depends on how it's used. Transparency, equity, and good clinical judgment need to guide its deployment. Otherwise, even the most advanced algorithms risk creating new gaps instead of closing the ones we already have.
Check out our previous articles here.
Check out our website here and make sure to subscribe!