Systematic Appraisal Shows AMD Meta-Analyses Lack Reliability

The first meta-analysis of AMD clinical meta-analyses shows many have methodological limitations—more so those sponsored by industry than those sponsored by governments.

Laura Downie, PhD

Many meta-analyses about the safety and efficacy of treatments for age-related macular degeneration (AMD) lack the necessary level of rigor to inform treatment decisions—according to a new, comprehensive meta-analysis.

The analysis is the first of its kind to systematically appraise the quality of other systematic reviews in relation to AMD interventions, Laura Downie, PhD, lead author of the study and senior lecturer and research leader at the University of Melbourne, told MD Magazine®.

“A key take-home message is that clinicians using systematic reviews to inform their AMD clinical care need to be aware that not all systematic reviews are necessarily high quality; we found many to have significant methodological limitations,” Downie said.

Systematic reviews are widely considered the best source for informing evidence-based treatment decisions about safety and efficacy, especially because experts estimate that clinicians would otherwise need to read 17-20 articles daily to stay up-to-date in their field.

In the 1990s, researchers began developing tools and guidelines for researchers to use to improve the quality of their reviews, but most are underutilized. In 2007, a group of researchers addressed this by developing the AMSTAR scale, building on previously-developed tools. AMSTAR aimed not only at improving on previous quality measures, but also at ensuring minimal bias in reviews. By focusing on a checklist with 11 questions, the group’s goal was to make the tool as simple as possible.

Review quality in many fields has improved since then. Because most AMD reviews were conducted in the past few years, Downie and her team expected to find a high level of quality in the reviews.

They identified 983 reviews, 71 of which were eligible for inclusion in the study. The team then reviewed the work: two scientists from Downie’s team independently appraised each study using the AMSTAR checklist, then discussed any discrepancies and came to an agreement on each study’s AMSTAR score (the number of items on the checklist that were met, out of 11).

Not only was the quality of the studies highly varied, but there was also no evidence of improvement over time. The mean AMSTAR score was only 5.8 out of 11. The most common failings were the lack of (or poor adherence to) an a priori design and failure to report conflicts of interest.

Nonetheless, there were some observations that may help clinicians. As one might expect, reviews funded by government grants or institutions were more reliable than those sponsored by industry (or those whose funding source wasn’t reported at all). Cochrane systematic reviews consistently scored higher (9.9 out of 11 on average) than reviews published in other journals.

Lastly, the researchers conducted the study using a new online tool, Crowdsourcing Critical Appraisal of Research Evidence (CrowdCARE), which is free and available to anyone hoping to evaluate research.

“After creating a free account, and completing some short tutorials, doctors can access the system and search for research articles,” researchers noted. “From there, they can undertake critical appraisal using validated tools and/or access appraised research evidence (to use in their clinical practice).”

The study, "Appraising the Quality of Systematic Reviews for Age-Related Macular Degeneration Interventions," was published online in JAMA Ophthalmology.