
Under the new leadership of Marty Makary, the FDA should focus more attention on the critical role of real-world clinical data — such as from health insurance databases — in the ongoing assessment of medication safety and efficacy, beyond the original clinical trials submitted for FDA approval.
Issues with drug safety and efficacy that were missed in the clinical trials phase can be reassessed after a drug goes to market. The FDA has already begun to recognize this, for example through its Sentinel Initiative, which has developed into a core component of the agency’s evolving safety surveillance system.
Clinical trials submitted to the FDA during the drug approval process, while methodologically rigorous, typically involve controlled settings with limited participant diversity, short follow-up periods, and small sample sizes. Real-world clinical data provide complementary insights by capturing data from broader, more diverse populations in routine clinical practice.
For example, pregnant women, patients with co-occurring medical conditions, underrepresented groups (e.g., the elderly and minorities), or those taking other medications are often excluded from clinical trials to isolate specific variables. But real-world clinical medicine affords no such simplification: physicians treat patients in all their complexity and messy circumstances.
What really matters is not how a new medication performs under tightly controlled experimental conditions, but how the drug is used and how it performs in the practice of everyday medicine. Real-world data can detect rare adverse events or long-term safety issues that were not evident in the original trials.
Health insurance databases, with large-scale longitudinal data, can be particularly valuable for identifying signals of drug-related risks. The FDA’s regulatory activity does not end with the drug approval process but includes ongoing vigilance to assess problems that were initially missed in the clinical trials phase.
Critics will argue that double-blind placebo-controlled trials provide more reliable information than real-world clinical data, especially on questions of causation, whether for efficacy or adverse effects. But the realities on the ground are more complicated, which is why we should use all available evidence in making assessments of drug safety and efficacy. Every study design has its own strengths and weaknesses, including randomized controlled trials.
Randomization is only one among many study-design methods for controlling potential confounding factors, and it works only when a trial is large enough to yield substantial numbers of subjects, and of events, in the relevant outcome arm.
The mere fact that randomized controlled trials submitted for FDA approval are conducted and funded by the pharmaceutical companies seeking approval for their product does not automatically invalidate their results, but it should certainly alert us to the possibility of methodological and reporting bias. While real-world clinical data also require scrutiny, and come with their own set of limitations, advanced statistical methods such as propensity score matching and instrumental variable analysis can mitigate confounding and bias and improve the ability to draw causal inferences from this kind of data.
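To make the idea concrete, the following is a minimal, hedged sketch of propensity score matching, one of the methods named above for reducing confounding in observational data. Everything here is simulated toy data; the patient numbers, the single confounder (age), and the effect sizes are illustrative assumptions, not figures from any actual study.

```python
import math
import random

random.seed(0)

# Simulate 2,000 patients with one confounder (age) that influences both
# the chance of receiving the drug and the underlying risk of an adverse event.
patients = []
for _ in range(2000):
    age = random.uniform(20, 80)
    p_treat = 1 / (1 + math.exp(-(age - 50) / 10))   # older patients treated more often
    is_treated = random.random() < p_treat
    p_event = 0.05 + 0.002 * (age - 20) + (0.03 if is_treated else 0.0)
    event = random.random() < p_event
    patients.append((age, is_treated, event))

# Step 1: estimate each patient's propensity score (probability of treatment
# given covariates). With one known confounder we reuse the true model here;
# real analyses would fit, e.g., a logistic regression on many covariates.
def propensity(age):
    return 1 / (1 + math.exp(-(age - 50) / 10))

treated = [(propensity(a), e) for a, t, e in patients if t]
control = [(propensity(a), e) for a, t, e in patients if not t]

# Step 2: greedy 1:1 nearest-neighbour matching on the propensity score,
# removing each matched control from the pool.
control_pool = sorted(control)
matched_pairs = []
for ps, ev in treated:
    if not control_pool:
        break
    nearest = min(control_pool, key=lambda c: abs(c[0] - ps))
    control_pool.remove(nearest)
    matched_pairs.append((ev, nearest[1]))

# Step 3: compare event rates within the matched sample, where treated and
# control patients now have similar propensity-score distributions.
rate_treated = sum(t for t, c in matched_pairs) / len(matched_pairs)
rate_control = sum(c for t, c in matched_pairs) / len(matched_pairs)
print(f"matched treated event rate: {rate_treated:.3f}")
print(f"matched control event rate: {rate_control:.3f}")
```

Because matching compares treated patients only with controls of similar treatment propensity, the age-driven confounding is largely removed from the comparison; a naive comparison of raw rates would not have that protection.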
To mention just one example of a real-world clinical data study that corrected the erroneous findings of clinical trial data submitted to the FDA, my colleagues Jamie Bryan-Hall and Ryan Anderson at the Ethics and Public Policy Center recently published a robust safety study of mifepristone, the pill used in chemical abortions. This represented the largest study of the abortion pill ever conducted, utilizing data from an all-payer insurance claims database that included 865,727 prescribed mifepristone abortions from 2017 to 2023.
The study found that nearly 11 percent of women experienced sepsis, infection, hemorrhaging, or another serious adverse event within 45 days of taking mifepristone. The real-world rate of serious adverse events following mifepristone abortions in this study was at least 22 times higher than the summary figure of “less than 0.5 percent” in clinical trials reported on the drug label and submitted to the FDA.
This is not the first time that serious adverse events were discovered only after a drug was approved for use. The discovery of the fatal adverse effects of the medication Vioxx (rofecoxib) — particularly its association with heart attacks and strokes — through real-world clinical data highlights the critical role of post-marketing surveillance in identifying safety signals missed in clinical trials. Vioxx had been approved by the FDA in 1999 for pain management and inflammation, but it was withdrawn from the market in 2004 due to serious side effects missed in the original clinical trials, which enrolled relatively small, controlled populations that were healthier, younger, and had fewer comorbidities than the broader population using Vioxx post-approval. Also, the short duration of the trials precluded the assessment of longer-term adverse effects such as cardiovascular events.
It was precisely studies that used electronic health records (for example, from Kaiser Permanente and Medicare) and pharmacovigilance systems like the FDA’s Adverse Event Reporting System that allowed researchers to analyze outcomes in millions of patients taking Vioxx. Without these studies, and the consequent removal of Vioxx from the market, many thousands more people would have died unnecessarily from heart attacks and strokes.
Similarly, real-world clinical data informed the FDA’s decision to add a black box warning on suicidality to SSRI antidepressants for adolescents and young adults.
Physicians practice medicine in the complex and messy real world, not under the artificially pure conditions of a controlled clinical trial. While the initial studies submitted for FDA approval have their merits, they also have built-in limitations. These should be complemented—and where necessary, corrected—by real-world clinical data. To fulfill its mandate, the FDA cannot treat drug approval as the end of its work: ongoing safety and efficacy assessment must continue once a drug goes to market. Lives depend on this work of ongoing vigilance.
Aaron Kheriaty, MD, is a fellow and director of the Bioethics and American Democracy Program at the Ethics and Public Policy Center.