Do 'High-impact' Medical Journals Report Accurate Outcomes?

April 10, 2014
Frank J. Domino, MD

Family Practice Recertification, April 2014, Volume 32, Issue 4

It is worrisome that "high-impact" data read by most healthcare providers and then further interpreted and distributed by lay news organizations may have funder bias.


Review

Becker JE, Krumholz HM, Ben-Josef G, Ross JS. Reporting of results in ClinicalTrials.gov and high-impact journals. JAMA. 2014; 311(10):1063-5. http://jama.jamanetwork.com/article.aspx?articleid=1840223.

Study Methods

This cross-sectional analysis compared clinical trial results published in major peer-reviewed medical journals with those filed with ClinicalTrials.gov, the trial registry maintained by the US National Library of Medicine. All of the journals referenced in this review, including the New England Journal of Medicine, Lancet, and the Journal of the American Medical Association, were deemed "high impact."

For each trial, results reported to ClinicalTrials.gov and the corresponding publication were compared on the following parameters: cohort characteristics, trial intervention, primary outcomes, secondary outcomes, and results. The trial results were examined for concordant reporting, and whenever reporting was discordant, the authors attempted to discern why.

Results and Outcomes

Among 96 clinical trials published in 19 high-impact journals, the most commonly studied conditions were cardiovascular disease (CVD), diabetes, hyperlipidemia, cancer, and infectious disease. For 73% of the trials, the lead funder was a party with a vested interest in the outcome.

Ninety-three of the trials (97%) had at least one discrepancy in either the trial information or the reported outcomes. Discordant items included completion rates, trial interventions, and descriptions of dosages, frequencies, or durations of interventions.

Of the primary efficacy endpoints defined in these trials, 85% were described in both sources, compared with 9% only on ClinicalTrials.gov and 6% only in the publications. Among endpoints described in both places, results for 23% could not be compared, while 16% were discordant. Reassuringly, most discordant results did not alter interpretation of the trial; for 6 of the studies, however, the discordance did alter the conclusion.

Fifty-two percent of the primary efficacy endpoints were described accurately in both sources and reported appropriately. Of the secondary endpoints, 30% were described in both sources, 20% only on ClinicalTrials.gov, and 50% only in the publications. When secondary endpoints present in both sources were further analyzed, results for 37% could not be compared, 9% were discordant, and only 16% were described accurately and reported appropriately.

Commentary

This analysis of some of the most prestigious, rigorously peer-reviewed medical journals attempted to determine whether trial results reported in the medical literature are also accurately reported to ClinicalTrials.gov, the registry with which these trials were required to file their results.

The first concerning finding is that the vast majority of the trials were funded by a commercial interest, which carries a high risk of industry bias. It is a credit to our healthcare system's transparency that we can ascertain this information so easily, but it remains worrisome that "high-impact" data read by most healthcare providers, and then further interpreted and distributed by lay news organizations, may carry funder bias. While private industry backing for research does not inherently imply partiality, it does raise the risk of both selection bias in who is included in a study and reporting bias in which data are reported.

In addition, nearly all of the papers included in this review revealed at least one discrepancy when their results were compared between the two sources. Although much of this discordance likely arose from non-malicious causes such as typographical and peer-review errors, it is still alarming that the most visible medical research may contain this degree of error.

The implications of this study raise a number of issues. First, all data submitted for publication should be compared with ClinicalTrials.gov for concordance. This would reduce the risk of error and allow authors and editors to resolve discrepancies before new findings are released to the media.

The second consideration involves funding for medical research. One potential solution is for private industry to provide funding to an unbiased body, which would distribute it as grants to independent researchers rather than to investigators selected by the industry to conduct and publish the work.

The implication here is not that researchers willingly publish inaccurate or even harmful information. Rather, with such safeguards in place, all parties, from patients to physicians, insurers, and governments, could rest assured that "high-impact" research is conducted through an ethical and unbiased process and that its findings are published in an accurate and useful manner.

About the Author

Frank J. Domino, MD, is Professor and Pre-Doctoral Education Director for the Department of Family Medicine and Community Health at the University of Massachusetts Medical School in Worcester, MA. Domino is Editor-in-Chief of the 5-Minute Clinical Consult series (Lippincott Williams & Wilkins). Additionally, he is Co-Author and Editor of the Epocrates LAB database, and author and editor to the MedPearls smartphone app. He presents nationally for the American Academy of Family Medicine and serves as the Family Physician Representative to the Harvard Medical School’s Continuing Education Committee.