Real-world data, real-world evidence, real-world problems.
Although randomized clinical trials are considered the gold standard for generating clinical evidence, there is growing interest in using real-world evidence to evaluate the efficacy and safety of medical interventions. It is one of the most seductive ideas in medicine, but does it work? In this latest article from JAMA, researchers question the feasibility of replicating clinical-trial evidence using real-world data.
This cross-sectional analysis used PubMed to identify all US-based clinical trials, regardless of randomization, published in the 7 highest-impact general medical journals in 2017. A total of 429 prospective clinical trials were published; of these, 220 met the study's inclusion criteria and were included in the analysis. Trials were excluded if they did not involve human participants, did not use end points representing clinical outcomes among patients, were not characterized as clinical trials, or had no recruitment sites in the US.
Of the 220 clinical trials, 204 (92.7%) were randomized and 16 (7.3%) were not. Among the 204 randomized clinical trials (RCTs), 113 (55.4%) were double-blind, 30 (14.7%) were single-blind, and 61 (29.9%) were open-label. Across the full sample, 147 trials (66.8%) tested pharmaceutical interventions.
The study concluded that only 15% of the published clinical evidence could feasibly be replicated using currently available real-world data sources.
Real-world evidence, then, cannot simply replace traditional clinical trials. It can only correct for biases that researchers already observe and understand: the a priori/a posteriori distinction.