A team of 6 hospital quality experts hopes a comprehensive review of the 4 major hospital rating systems will yield a more uniform and reliable review process.
While the public loves seeing how their local hospital stacks up against the rest of the country using various ratings systems, hospital officials believe these annual grades can be misleading for patients, clinicians, payers, and administrators.
A 6-person team of experts from Northwestern Medicine, Sound Physicians, the Council of Medical Specialty Societies, the University of Michigan, Washington University in St. Louis, and University Hospitals recently compiled a comprehensive review of the popular hospital rating services, finding that scores can vary drastically among the rating systems, ultimately misleading the public and stakeholders who rely on the systems to identify top-rated hospitals.
Because there is no gold standard for how a rating system should be constructed or perform, there is no objective way to compare the rating systems.
However, the investigators evaluated the strengths and weaknesses of the 4-major public hospital quality rating systems—U.S. News & World Report Best Hospitals, the Centers for Medicare and Medicaid Services’ (CMS) Overall Star Ratings, Leapfrog Safety Grade and Top Hospitals, and Healthgrades Top Hospitals—based on the team’s experience as physician scientists with methodological expertise in health care quality measurement.
Of the 4 systems, U.S. News & World Report fared the best with a B grade, while the CMS Star Ratings scored a C. The 2 lowest grades were a C- for Leapfrog and a D+ for Healthgrades. Evaluators were recused from grading a particular rating system if they had a direct current or recent relationship with that system.
The evaluators reviewed literature and met with leaders from all the major rating systems to establish their own rating system based on 6 major criteria—potential for misclassification of hospital performance, importance/impact, scientific acceptability, iterative improvement, transparency, and usability.
They also used this information to create standardized fact sheets for each rating system, which included objective, factual information such as the number of hospitals reviewed, the number of measures included, and the risk-adjustment methodology selected.
In an effort to develop a more equitable rating system, the evaluators created a strengths and weaknesses summary, which described and categorized what they found beneficial and what they did not.
The team found several issues with the rating systems, including limited data and measures, a lack of robust data audits, flawed composite measure development, the grouping of diverse hospital types into a single comparison, and a lack of formal peer review of their methods.
Patients use rating systems in a number of ways, including determining where to receive care. Clinicians use them to decide where to refer patients, while payers and purchasers use them to direct patients to certain hospitals, establish contracts with high-quality hospitals, and run pay-for-performance programs. Hospital leaders use them to identify opportunities for improvement and to market their own performance.
The 4 systems can also produce conflicting results based on their own metrics, with a single hospital scoring high on 1 system and poorly on another.
"It's been confusing for patients who are trying to make sense of these ratings," Karl Bilimoria, MD, director of the Northwestern Medicine Surgical Outcomes and Quality Improvement Center, said in a statement. “How are patients supposed to know which rating systems are good or bad? This study gives them information from a group of quality measurement experts to figure out which rating system is the best."
The study, “Rating the Raters: An Evaluation of Publicly Reported Hospital Quality Rating Systems," was published online in the New England Journal of Medicine Catalyst.