As I said last month, quality is difficult to define and is almost in the eye of the beholder. This is much like Humpty Dumpty’s assertion that, “When I use a word, it means just what I choose it to mean – neither more nor less.” Perhaps, then, quality is simply whatever a given group says it is measuring. Many groups report quality metrics to the public in the “lay press”; there are more than a dozen sets of publicly reported quality metrics. In no particular order, they include:
- US News and World Report annual surveys
- Consumer Reports
- Hospital Compare (CMS website’s public reporting)
- National Quality Forum
- The Leapfrog Group
- Truven (formerly Thomson Reuters, which itself was formerly Solucient)
- The Joint Commission (with its ORYX set)
- National Committee for Quality Assurance (with its HEDIS – originally the Health Plan Employer Data and Information Set (1998), renamed the Healthcare Effectiveness Data and Information Set (2012))
- Premier Healthcare Alliance (with its QUEST – QUality Efficiency Safety & Transparency) reports
- Several non-governmental insurers, including:
  - Blue Cross – Blue Shield (with its Blue Distinction)
  - United Health
The number of metrics going into a hospital ranking is not consistent, ranging from roughly 8 to more than 80, and a single ranking organization will often change its metrics from year to year. Many of these organizations draw on data from outside sources to build their quality rankings. One of the most frequently used is the Agency for Healthcare Research and Quality, which provides the Consumer Assessment of Healthcare Providers and Systems (CAHPS) scoring, first used in 1995. Many healthcare systems and providers also use the Press Ganey Company (founded in 1985) to score patient satisfaction.
It should not be surprising that no two of these organizations report on the same set of metrics, whether process or outcome measures. I compared the top several hospitals in the Chicago area as ranked by each of the rating groups; no two groups listed the same hospitals.
Are there potential unanticipated consequences of reporting quality? Some providers (practitioners and healthcare systems) may skew their behavior toward a quality metric that may or may not be genuinely associated with a desired outcome. One recently reported example is physicians providing unnecessary and potentially harmful services in order to boost Press Ganey satisfaction scores.
In the mid-1990s, one ranking agency listed a Chicago-area hospital among the top three in the region. The next year, the Health Care Financing Administration (the predecessor of the federal Centers for Medicare and Medicaid Services) determined that there were enough potential quality lapses to open an investigation of that same hospital. Thirteen years later, the hospital closed. This example suggests that, in some cases, rankings by outside agencies may not be a reliable indicator of a healthcare provider’s performance.
There are, however, data suggesting that healthcare systems that score well on any one rating system tend to have better “hard” outcomes – lower readmission rates, lower hospital-acquired complication rates, and even lower mortality rates – than those that rank well on none. These data are likely skewed by reporting biases, however, and are not universally accepted by those examining the research.
What is one to do with this plethora of data?
Hospitals and systems might charge their clinical leaders, such as the CMO and CNO, with picking one or two of the major reporting groups to report to. Institutions would do well to market or advertise favorable results from a reporting group; I would therefore posit that this initiative be supported from the marketing department’s budget.
Consumers – patients, families, and caregivers – should review their hospital’s website to see what it reports. They should also look at one or two sites that rank their local hospitals, in addition to Hospital Compare, to see how the hospitals perform on measures such as:
- Readmission rates (higher readmission rates mean more of patients’ time spent back in the hospital and an increased likelihood of hospital-acquired conditions such as infections and adverse reactions to medications, among others).
- Rates of hospital acquired conditions.
- 30- to 90-day post-hospital mortality for common conditions and procedures such as heart attack, pneumonia, and surgery, among others.
- Use of Electronic Medical Records (EMR) and connectivity to physician offices, as well as the ability to access “your” data.
- Patient satisfaction scores (the percentage of patients who would recommend the hospital/system to friends or relatives) – while potentially biased, these can, along with other considerations, help suggest whether you may want to use that system.
To date there is minimal transparency of reliable data for patients to use to determine what real quality exists in their health care delivery system. Some of these considerations may help in choosing wisely.
http://forbes.com/sites/kaifalkenberg/2013/01/02/why-rating-your-doctor-is-bad-for-your-health/ (accessed 2/25/13)