“Diagnosis is the mental act of selecting the one explanation most compatible with all the facts of clinical observation”. – Raymond Adams in Harrison’s Principles of Internal Medicine – 4th edition
In almost all instances, government and other third-party payer incentives for improving performance in medicine rely on a clinical diagnosis upon which to judge performance. The data reported in Hospital Compare rely on an accurate and inclusive diagnosis of myocardial infarction, congestive heart failure (CHF), and pneumonia for hospital ratings. For each of these clinical conditions there are specific criteria for making the diagnosis. However, in each case the diagnosis must be considered before the diagnostic criteria can be applied, and once the criteria are applied they must be evaluated. There is no single agreed set of criteria for the diagnosis of CHF, for example – there are several sets of diagnostic “criteria”, including proposals by the Framingham study group, a Harvard study group, and a group from the University of Virginia, with sensitivities ranging from 0.41 to 0.71 and specificities from 0.89 to 0.97. The addition of BNP values does not help much, especially when the diagnosis is not suspected. Estimates suggest that there is an error in diagnosis in somewhere between 10% and 15% of encounters. Many of these errors may never be detected (see our prior post on a diagnostic error, “Who worries about physician behavior …”).

If a patient has a clinical condition but it is not appropriately diagnosed, that patient never appears in any denominator of performance (in either process or outcome measures). Consider a hypothetical patient who is overweight (BMI 32), smokes, is mildly short of breath (SOB), coughs, and has mild ankle swelling. If this patient is diagnosed as being obese and having chronic pulmonary disease, he will be managed as if he has COPD. Suppose this patient then comes into the hospital with a mild fever and an increase in cough and SOB, is diagnosed as having an exacerbation of COPD, is treated with antibiotics, and recovers after 3 days. Again, the performance measures for COPD are met.
Three years later, the same patient comes in with orthopnea, paroxysmal nocturnal dyspnea (PND), moderate ankle edema, and cardiomegaly. Only now is the diagnosis of CHF entertained. For those intervening years, this patient has been counted, by those looking at quality metrics, as having a condition that, in retrospect, was probably not correct. If there was P4P, P4Q, or some other reward system in place during that time, the physician practice or health care system would have been the recipient of inappropriate incentive compensation.
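The denominator effect described above can be made concrete with a back-of-the-envelope calculation. The patient counts below are hypothetical; only the sensitivity range (0.41 to 0.71) comes from the published CHF criteria cited above.

```python
# Sketch (with made-up patient counts): how the sensitivity of diagnostic
# criteria shrinks the denominator of a performance measure. Patients whose
# condition is never diagnosed (false negatives) simply never appear.

def measured_denominator(true_cases: int, sensitivity: float) -> int:
    """Number of patients correctly diagnosed and therefore counted
    in the quality-metric denominator (the true positives)."""
    return round(true_cases * sensitivity)

true_chf = 1000  # hypothetical number of patients who truly have CHF

# Sensitivity range quoted in the text for the published CHF criteria
for sens in (0.41, 0.71):
    counted = measured_denominator(true_chf, sens)
    missed = true_chf - counted
    print(f"sensitivity {sens:.2f}: {counted} counted, "
          f"{missed} never enter the denominator")
```

Even at the high end of the quoted range, nearly 300 of every 1,000 true cases would be invisible to any process or outcome measure built on the diagnosis.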
There are other areas where an error in diagnosis is important. As shown above, a diagnostic error can delay initiation of appropriate treatment. There is also legal exposure: some estimates suggest that over 29% of malpractice claims and judgments are for diagnostic errors.
Defining diagnostic errors is itself difficult – the final arbiter may always be challenged. Identifying the causes of diagnostic errors is even harder. Not all physicians are equally adept at arriving at a correct diagnosis (even an experienced physician, presented again with the same clinical context, may not arrive at his or her prior diagnosis). Certainly research and physician training should help in understanding causes of error and increase vigilance to avoid such errors. Sometimes it may be enough to encourage a diagnostician to be aware of the biases that can cloud judgment (one estimate is that there are over 15 potential biases that may impede accurate diagnosis). In other instances it may be enough to encourage diagnosticians to be aware that overreliance on heuristics in approaching a patient can lead to errors in reasoning.
Expert diagnosticians in the past insisted that, even after a diagnosis was reached, the clinician keep an open mind by defining a minimum of three alternate explanations for the clinical presentation – the “differential diagnosis”. Overreliance on advanced diagnostic imaging may also lead the clinician astray. Many physicians and surgeons believe that advanced imaging techniques such as CT and MRI scans are a “gold standard” of anatomic diagnosis. However, almost every orthopedic surgeon has seen several instances in which the MRI suggested a diagnosis that was either not confirmed or had nothing to do with the patient’s illness or complaints. One study suggests that for the diagnosis of meniscus tears the MRI is accurate (compared with intraoperative findings) only approximately 75% of the time. Another showed that operating on the back based on the findings of an MRI exam did not necessarily improve patient symptoms.
The real gold standard may still be the autopsy, which has fallen out of favor as a check on our clinical diagnoses. William Osler considered the autopsy so important to his own and others’ education that he performed autopsies himself. Richard Cabot brought the autopsy to the fore in the early 1900s when he proposed the Clinical Pathologic Conference (CPC) as a teaching tool. This became formalized in 1925, when the NEJM began publishing a weekly CPC under the rubric of Case Records of the Massachusetts General Hospital.
Diagnosis has taken a back seat to proceduralism today, partly because, as many other commentators have pointed out, there is little time or reward for non-procedural patient encounters. This leads to skimping on taking a thorough history, performing a complete physical exam, and developing a differential diagnosis, because such behavior is not rewarded in today’s fee-for-service system. Despite the importance of reaching the correct diagnosis in directing the correct treatment and in determining the appropriateness of pay for performance/quality, the outstanding diagnostician isn’t rewarded. Not all physicians today are outstanding diagnosticians, nor were they in the past. But in the past, going to see a “diagnostician” was something patients often valued; it was understood then that a prerequisite to effective treatment was the right diagnosis.
 McKee, PA et al: N Engl J Med, 1971, 285, 1441
 Carlson: J Chron Dis, 1985, 38, 733
 Gheorghiade, M et al; Am J Card, 1983, 51, 1243
 The sensitivity and specificity values in all three were against an “expert” diagnostician or panel of expert diagnosticians.
 Graber, M: Joint Commission J on Quality and Patient Safety, 2005, 31, 106
 Berner, ES, Graber, M: Am J Med, 2008, 121, S2
 Tehrani, ASS, et al: BMJ Qual Saf, 2013, 22,672
 Landro, L; Wall Street Journal 2013, Nov 17: http://online.wsj.com/news/articles/SB10001424052702304402104579151232421802264 Accessed 1/21/14 Subscription may be required
 Hardy, JC et al: Sports Health 2012, 4, 222
 Deyo, RA et al: J Am Board Fam Med 2009, 22, 62-68
 Roberts CS. The Case of Richard Cabot. In: Walker HK, Hall WD, Hurst JW, editors. Clinical Methods: The History, Physical, and Laboratory Examinations. 3rd edition. Boston: Butterworths; 1990. Available from: http://www.ncbi.nlm.nih.gov/books/NBK702/