When reading a clinical trial report, pediatricians should look beyond the bottom line, according to a Johns Hopkins review of nearly 150 randomized controlled trials in children, which shows that 40% to 60% of the studies failed either to take steps to minimize the risk of bias or to adequately describe those measures.
A report of the team’s findings in the August issue of Pediatrics indicates that trials sponsored by pharmaceutical or medical-device makers, as well as studies not registered in a public-access database, carried a higher risk of bias, as did trials evaluating behavioral therapies rather than medication.
“There are thousands of pediatric trials going on in the world right now and given the risk that comes from distorted findings, we must ensure vigilance in how these studies are designed, conducted and judged,” says lead investigator Michael Crocetti, M.D., M.P.H., a pediatrician at Johns Hopkins Children’s Center, in a press report.
Considered the gold standard of medical research, double-blind randomized controlled trials (RCTs) are designed to rule out or account for actual and potential sources of bias. Results of such studies, when peer-reviewed and published in reputable medical journals, can influence treatment; a poorly designed or executed trial can therefore lead to erroneous conclusions about the effectiveness of a drug or procedure.
Citing the degree of bias risk in the studies they reviewed, the researchers caution pediatricians to be critical readers of studies, even in highly respected journals.
The investigators advise that when reading a report on a trial, pediatricians should not merely look at the bottom line but ask two essential questions: How did the researchers reach their conclusion, and was their analysis unbiased?
Doctors should apply “smell tests” — common sense and skeptical judgment about whether the conclusions fit the data — especially when a study boasts dramatic effects or drastic improvement.