A review in NEJM Catalyst of four of the best-known US hospital quality rating systems, published by a group of experts in health care quality measurement, gives no system better than a B grade. This is disappointing, given how established these reports are and the attention they receive from the US media, providers, and public.
Nearly 15 years ago, South Africa experienced its own interesting but short-lived experiment with public reporting of the quality of hospital care in the form of the Discovery Hospital Rating Index. The story of its brief life and death holds lessons worth telling – another day.
Public reporting was reintroduced less contentiously by Discovery ten years later, limited this time to patient-reported experience measured using a US-created and validated survey (HCAHPS), and labelled PaSS (Patient Survey Score). Reports are available only to Discovery members, behind the member log-in, but “top 20” hospitals are publicly recognised each year.
In due course, private hospital groups such as MediClinic and the Life Healthcare group began to report their own patient-reported assessments online, generated by similar surveys, together with rates of healthcare-acquired infections and other safety measures. The usefulness and accuracy of these reports have not been independently assessed.
As South Africa heads towards a future in which a central fund (NHI) will contract with accredited providers only, the methodology and integrity of external evaluations of quality will increase in importance. What can we learn from the American experience?
The physician authors of the NEJM Catalyst study are most concerned about potential misclassification of hospital performance generated by the inclusion of flawed measures, the use of proprietary, unvalidated data, and specific methodological decisions. They conclude that all reviewed rating systems (US News & World Report, HealthGrades, CMS Hospital Compare, Leapfrog) are limited by problems with their data and measures, by a lack of robust data audits, by flawed composite measure development, by the inappropriate lumping together of diverse hospital types, and by a lack of formal peer review of their methods. Composite measures are criticised for arbitrary weighting, for example according mortality the same weight as readmission.
The authors feel that opportunities to advance the field of hospital quality measurement lie in better data that are subject to robust audits, in creating more meaningful measures, and in developing standards and peer review to evaluate rating system methodology. They offer a set of criteria (see below) with which to “evaluate the evaluators”.
Interestingly, US News & World Report is commended for its use of a fairly controversial reputation-based measure, created by asking professionals where they would refer their own patients for specialty care. Less surprisingly, the authors encourage the collection of patient-reported outcomes to supplement experience measures – though none of the rating systems does so currently.
From the SA standpoint, it is notable that several rating systems have dropped certain claims-based measures that have been core to these quality reporting activities, e.g. the so-called Patient Safety Indicators (PSIs), some of which are in use in our environment. Also notable is the omission of data derived from the mandated National Healthcare Safety Network (NHSN), though no system like NHSN yet exists here.
Accurate public comparative reporting of quality is a mountain to climb. How far should SA go to replicate these kinds of efforts at accountability? Are our own more limited resources better spent encouraging hospitals to measure carefully for their own improvement initiatives, or in standardizing measurement for comparative purposes, with the attendant requirement for risk adjustment of outcomes? Is it possible to solve for both: data for accountability and data for improvement?
The need for external evaluation seems unavoidable, not only in the private sector, which has initiated its own efforts, but more especially in the public sector, with its starkly apparent quality issues and failure of accountability to the citizens of this country. It makes sense for all our hospitals to work together on common ways to measure themselves to assure and improve quality and safety.