John Kane-Berman in Politicsweb (October 6, 2019) objects to the Health Market Inquiry (HMI) recommendation of mandatory outcomes reporting in the private health sector, quoting an unnamed general practitioner who says reporting outcomes is “spurious”. A second unnamed health professional, a “specialist”, puts forward a number of specific objections:
- Specialists (roughly 50% of doctors) would be prejudiced, having the worst outcomes only because of the need to “rectify” everyone else’s “failed care”.
- A “rich demanding patient might have much higher expectations from surgery than a poor unassuming patient and might give worse self-reported scores, even if the clinical outcome was better than that of the poor patient”.
- Doctors who normally have follow-ups with patients may stop doing so, because it might look as if their patients were not getting better, thereby “instantly improving their results if additional care is used as a measure”.
- If doctors are forced to report outcomes, many patients may be brought back for a follow-up consultation purely for that purpose.
- “Quality data in health care is very subjective”, which is why it is so difficult to measure and has not been implemented more widely.
- There are no peer-reviewed international outcome measures available.
Responding to these claims…
The last assertion is the easiest to disprove. In just about every developed country’s health system, outcomes are routinely measured and often published on a website or in a formal annual report. This includes countries with highly regarded health systems (e.g. Scandinavia, the Netherlands, Singapore).
The definitions of each of these outcome measures are peer-reviewed, as are results. The National Quality Forum (US) hosts a library of several thousand such peer-reviewed measures.
Private hospitals in the UK, which happen to include several facilities owned by SA-based companies, are required to publicly report certain outcomes via PHIN, the Private Healthcare Information Network.
Not all countries report outcomes at the level of the individual doctor. Individual caseloads are small, so outcome differences must be large to be statistically meaningful. The fears expressed by the professionals Kane-Berman quotes may be well founded if public measurement is done at the single-doctor level; reporting at the level of the hospital or region is more appropriate.
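The sampling-size point can be illustrated with a quick back-of-envelope calculation (a hypothetical sketch using invented numbers, not data from the article): around a single surgeon’s small annual caseload, a 95% confidence interval on an observed mortality rate is so wide that even large differences cannot be distinguished from chance, whereas a hospital’s pooled caseload narrows it considerably.

```python
# Illustrative only: why small per-doctor caseloads make outcome comparisons
# statistically fragile. Uses a normal-approximation 95% confidence interval
# for an observed mortality rate; the caseloads and death counts are invented.
import math

def mortality_ci(deaths, cases, z=1.96):
    """Approximate 95% confidence interval for an observed mortality rate."""
    p = deaths / cases
    half_width = z * math.sqrt(p * (1 - p) / cases)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# A single surgeon with 50 operations a year and 2 deaths (4% mortality):
lo, hi = mortality_ci(2, 50)
print(f"Surgeon  (n=50):   4.0% observed, 95% CI {lo:.1%} to {hi:.1%}")

# A hospital pooling 1,000 operations with 40 deaths (the same 4% rate):
lo, hi = mortality_ci(40, 1000)
print(f"Hospital (n=1000): 4.0% observed, 95% CI {lo:.1%} to {hi:.1%}")
```

At n=50 the interval runs from roughly 0% to over 9%, so a surgeon with twice the national mortality rate may be statistically indistinguishable from one with half of it; at n=1,000 the interval tightens to a few percentage points either side.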
Other objections have some basis and are worth reviewing.
Some practitioners will indeed have predictably worse measured outcomes because of disease severity, age and other underlying risk factors. As Kane-Berman’s specialist points out, we should “consider the complexity of the patient on an ongoing basis and whether the patient comes off a low health base to start with”. Measuring and comparing outcomes without considering differences in patients’ pre-existing health status unfairly discriminates against doctors who take care of the sickest patients. Risk adjustment methods exist but normally require additional data and expertise to implement.
The reference to patient-reported outcomes is relevant. We have only recently begun to systematically collect and appropriately value these kinds of outcomes, such as pain, function and quality of life. At the end of the day, what really does count is the patient’s outcome reported on the basis of their symptoms, their ability to function in daily life, their quality of life.
Expectations may vary considerably, but not only on the basis of wealth or socio-economic status. Personal preferences and values for different health outcomes are important, and these differences should ideally be taken into account when comparing across hospitals, doctors or other professionals. In some circumstances it is possible to base assessment on the improvement in health that results for an individual from the medical intervention being studied. If there is no improvement for that individual in terms of symptom relief, functional gain or duration of life, then we have not done well for the patient, no matter the variation in patient expectation. The same applies when considering measurement across a group of patients; trends over time can be very meaningful.
The notion that doctors would stop bringing their patients in for follow-up care because of outcomes measurement is upsetting and hopefully unfounded. It is, however, a realistic concern that if doctors are unfairly judged – and penalized – for poor outcomes, they will find ways of avoiding reputational damage, which could mean avoiding the care of patients with the highest likelihood of a bad outcome. This was demonstrated in the early days of public outcomes reporting in the US, where cardiac surgeons in New York state began avoiding the most complex cases in order to improve their reported mortality rates. But Kane-Berman’s sources play both sides when also claiming that patients will be brought in for otherwise unnecessary visits purely to satisfy a measurement requirement.
Measurement of the use of additional care, e.g. hospital readmission, is a common quality indicator. Readmissions are costly, and patients generally do not wish to return to the hospital. Unexpected returns to deal with complications originating in care provided during the original admission represent both cost and harm, but penalties applied for excessive hospital length of stay and for readmissions place doctors between a rock and a hard place. On the other hand, hospitals with unusually high readmission rates after elective surgery could well have serious quality problems that are exposed by comparative measurement.
It is a truism that any measurement to which consequences are attached will be compromised, even running the risk of becoming “spurious”. However, the South African health care system cannot escape the global trend toward greater accountability for outcomes – both cost and health related. The enormous costs of modern health care, and the importance attached to it, mean professionals may lose public trust without more transparency.
As we adopt measurement we have to avoid unintended consequences: for example, adding significant administrative burden that contributes to physician burn-out, and gaming that may pollute the data needed to understand how our system is working and how to improve it.
We need a public debate and education campaign around the need for, but also the limitations of, outcome measurement. We must risk-adjust comparative outcomes where possible and relevant, consider restricting the audience for measurement – at least initially – to peer groups, and avoid harsh penalties for poor performance unless the care was reckless or grossly negligent. Penalties drive reporting underground. We should focus on improvement, not only comparison; the requirement for outcome measurement should be paired with technical support for measurement, learning and improvement.