Diagnostic Errors in Medicine: What Do Doctors and Umpires Have in Common?

Mark L. Graber, MD | February 1, 2007 

Graber ML. Diagnostic Errors in Medicine: What Do Doctors and Umpires Have in Common? PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2007.

Perspective

"Strike 3! You're OUT!" Many a baseball game hinges on the accuracy of calls made by the men in black behind home plate. Umpires make crucial split-second decisions under conditions of substantial pressure and uncertainty, a challenge familiar to front-line clinicians. The decision-making processes used by umpires and medical diagnosticians are remarkably similar or identical, and the decisions made by both carry the possibility of error. However, unlike physicians, when it comes to trying to minimize the chance of error, the umpires are doing something about it!

The Table explores some of the similarities and differences between the fields as they pertain to "diagnostic" errors.

In an ideal world, medical decisions would be made using "normative" techniques—decision-making processes that guarantee the best chance of the correct outcome. In this world, physicians would know the prevalence of each possible condition under consideration, have full knowledge of test characteristics, and use Bayesian calculations to adjust the odds of disease accordingly. If they lacked any of this knowledge, they would turn to readily accessible decision support tools that allow them to function like experts.
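
To make the normative ideal concrete, here is a minimal sketch, in Python, of the Bayesian update such a diagnostician would perform after a test result. The disease prevalence and test characteristics in the example are hypothetical, chosen purely to illustrate the arithmetic.

```python
# A minimal sketch of the Bayesian update a "normative" diagnostician would make.
# The prevalence, sensitivity, and specificity below are hypothetical values,
# chosen only for illustration.

def post_test_probability(prevalence, sensitivity, specificity, positive=True):
    """Probability of disease after a test result, by Bayes' rule."""
    if positive:
        true_pos = prevalence * sensitivity                # P(D) * P(+|D)
        false_pos = (1 - prevalence) * (1 - specificity)   # P(~D) * P(+|~D)
        return true_pos / (true_pos + false_pos)
    false_neg = prevalence * (1 - sensitivity)             # P(D) * P(-|D)
    true_neg = (1 - prevalence) * specificity              # P(~D) * P(-|~D)
    return false_neg / (false_neg + true_neg)

# A condition with 2% pretest probability, tested at 90% sensitivity / 95% specificity:
print(round(post_test_probability(0.02, 0.90, 0.95), 3))         # 0.269 after a positive test
print(round(post_test_probability(0.02, 0.90, 0.95, False), 4))  # 0.0021 after a negative test
```

Note how counterintuitive the normative answer can be: even after a positive result from a reasonably good test, the odds of disease in this hypothetical remain well under 50%, because the condition is uncommon to begin with.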

In real life, physicians use a very different approach, known as "flesh and blood" or "fast and frugal" decision-making.(3,4) Given the preliminary facts about a case, a likely diagnosis simply emerges, effortlessly, from our subconscious. Experienced clinicians have an extensive knowledge base, replete with exemplars of problems, symptom complexes, and clinical presentations. Automatic, subconscious cognition matches the features of the case at hand with one or more of these patterns.(5) The Nobel Prize–winning economist Herbert Simon coined the term "satisficing" to describe the cognitive events that serve up satisfactory answers under real-world constraints (6), and this process nicely captures how physicians arrive at their initial diagnostic choices—these are the first possibilities that fit the facts at hand.
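
As an aside, "fast and frugal" heuristics are concrete enough to write down. The sketch below illustrates the "take-the-best" rule from Gigerenzer and Goldstein (3): cues are checked in order of validity, and the first cue that discriminates between two candidates decides the choice, with no weighing or integration. The cue names and values here are hypothetical, and the city-size task mirrors the examples in their paper.

```python
# A minimal sketch of the "take-the-best" heuristic (Gigerenzer & Goldstein, ref 3).
# Cues are examined one at a time, in order of validity; the first cue that
# discriminates between the two candidates decides the choice outright.
# All cue names and values are hypothetical, purely for illustration.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return the option favored by the first discriminating cue, or None on a tie."""
    for cue in cues_by_validity:
        a, b = option_a["cues"].get(cue, 0), option_b["cues"].get(cue, 0)
        if a != b:  # this cue discriminates: stop searching and decide
            return option_a if a > b else option_b
    return None  # no cue discriminates

# Hypothetical example: which of two cities is larger?
berlin = {"name": "Berlin", "cues": {"capital": 1, "has_team": 1, "on_river": 1}}
essen = {"name": "Essen", "cues": {"capital": 0, "has_team": 0, "on_river": 1}}

winner = take_the_best(berlin, essen, ["capital", "has_team", "on_river"])
print(winner["name"])  # Berlin -- decided by the first cue alone
```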

Surprisingly, "fast and frugal" decisions are highly accurate, especially in the hands of experts.(4) This decision-making process, used every day by most physicians, is also used successfully by fighter pilots, commanders of fire-fighting units, and professional shoppers. A recent study actually compared subconscious decision-making to deliberate consideration, and "fast and frugal" was the clear winner, especially for complex problems.(7) Gladwell presents several entertaining examples of successful decision-making in a "blink"—such as the ability of movie producers to select just the right actor from the hundreds of potentials, or the skill of art experts in detecting a forgery.(8)

Unfortunately, we face a problem in medicine: However good "fast and frugal" decision-making may be, it is not good enough—especially when we need high-stakes decisions to be right 100% of the time. This shortcoming translates into patients whose treatable conditions are missed or misdiagnosed, and patients harmed or killed by tests or treatments they never really needed.

Knowledge deficits are rarely the cause of cognitive errors in medicine; these errors more commonly involve defective synthesis of the available data.(9) More than 40 cognitive pitfalls have been described.(10) In our studies (9), the three most common pitfalls were:

  • Context errors: The physician inappropriately limits consideration to a single set of diagnostic possibilities, to the exclusion of others. For example, gastrointestinal causes are not considered in a patient presenting with chest pain.
  • Availability errors: The physician chooses the most common diagnosis over rarer conditions, or favors the conditions with which he or she is most familiar. An example would be the patient with a dissecting aortic aneurysm whose chest pain is attributed to a musculoskeletal strain.
  • Premature closure: Once a plausible condition is identified, other possibilities are not fully considered; we just stop thinking.

Solutions

There are two promising approaches to reducing the risk of cognitive error. The first involves perfecting performance through feedback, and the other focuses on improving physicians' skills in metacognition, the ability to monitor and understand one's own thought processes.

Feedback

Becoming an expert in any field requires extensive practice and feedback concerning one's performance. This approach is being tried with Major League umpires through the QuesTec system, which captures the ball path of every pitch using multiple video cameras. Umpires receive recorded images of each pitch during a game, allowing them to compare their calls with the actual ball path (see QuesTec video demonstrations: Robin Ventura Home Run Pitch and Christian Guzman Called Strike Three). The hope is that the accuracy of their calls will eventually improve and that between-umpire variability will be reduced.

A similar approach already seems to be working in the field of mammography, where participation in a program involving feedback improves the accuracy of image interpretation.(11) Radiologists participating in these programs regularly review a standard set of films, with feedback on the findings they missed or misinterpreted. Consistent with the large literature on developing expertise (12), this type of focused feedback improves performance.

An analogous process for planned feedback does not exist for internists or other front-line clinicians. Indeed, the opportunities for feedback may be substantially fewer now than ever before: The checks and balances provided by autopsies have almost vanished, and a diagnostic error may never be conveyed back to the original diagnostician, as patients weave their way through different health care organizations or different silos of the same organization. Medical trainees, whose clinical experience is progressively being fragmented by work hour restrictions and ever-shorter rotations, are also missing out on opportunities for feedback that were once more plentiful.

Metacognition

The alternative approach is to educate physicians about subconscious processing and the need to monitor it for optimal decision-making.(13) The argument is that if physicians were more aware of the inherent biases that can creep into subconscious processing, these might be avoided. Croskerry (13) has advocated de-biasing training for clinicians at three levels: (i) general metacognitive training would create an awareness of the need for monitoring; (ii) generic skills training would address each of the heuristics known to affect subconscious processing, such as advice to offset the availability heuristic by always considering a few zebras along with the horses; and (iii) specific de-biasing would apply to individual diagnostic areas, such as an admonition to consider incipient herpes zoster in a patient with unexplained abdominal pain.

I would argue that perhaps the best opportunity to apply conscious monitoring is during the "final common pathway." Regardless of the subconscious processes used to derive the initial diagnostic considerations, one or more of these is eventually judged acceptable and serves as the working diagnosis. Although the actual diagnosis differs from case to case, the process of choosing a working diagnosis is repeated in every new one. Physicians tend to accept these subconscious choices without appropriate cognitive review, and this habit persists because physicians are highly confident in their diagnoses, sometimes inappropriately so.(14) The step of selecting and accepting a working diagnosis is therefore a final common pathway and an ideal place to conduct a formal, conscious review of the diagnoses under consideration, and to ask whether the search needs to be broadened or extended.

This concept of a final review fits well in the context of "reflective practice" (15) and would allow the clinician the opportunity to identify cognitive errors before it's too late. Unlike umpires, we can (and often should) change our minds. An example is included in the sidebar.

Advice to Reduce "Fast and Frugal" Cognitive Errors in Diagnosis

  • Be aware of the odds of being wrong; the diagnostic error rate is probably closer to 10% than to 1%.
  • Learn how diagnoses emerge from subconscious processing and the inherent biases that can lead to errors. Learn de-biasing approaches that might prevent these errors—to quote Croskerry, use "cognitive pills for cognitive ills."
  • Focus on the final common pathway. Once you've come up with a working hypothesis, examine it carefully and consciously. Consider the opposite, rethink your key assumptions, and think about diagnoses that you can't afford to miss. Learn the principles of reflective practice. Ensure follow-up, not only for the patient's sake but for your own cognitive education.(16)
  • Seek out feedback. Ask for autopsies. Attend Morbidity and Mortality conferences or, particularly if you don't have access to them, participate in similar exercises, such as the cases on this site. Anything that allows you to learn from your own mistakes or those of others will increase the chances of correctly calling clinical "balls and strikes."

References


1. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA. 2003;289:2849-2856. [go to PubMed]

2. Schiff GD, Kim S, Abrams R, et al; for the Diagnostic Error Evaluation and Research (DEER) Project investigators. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Advances in Patient Safety: From Research to Implementation. Rockville, MD: Agency for Healthcare Research and Quality; 2004.

3. Gigerenzer G, Goldstein DG. Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev. 1996;103:650-669. [go to PubMed]

4. Klein G. Sources of Power: How People Make Decisions. Cambridge, MA: The MIT Press; 1998.

5. Elstein AS, Schwarz A. Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ. 2002;324:729-732. [go to PubMed]

6. Simon HA. Invariants of human behavior. Annu Rev Psychol. 1990;41:1-19.

7. Dijksterhuis A, Bos MW, Nordgren LF, van Baaren RB. On making the right choice: the deliberation-without-attention effect. Science. 2006;311:1005-1007. [go to PubMed]

8. Gladwell M. Blink: The Power of Thinking Without Thinking. New York, NY: Little, Brown and Company; 2005.

9. Graber ML, Franklin N, Gordon RR. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493-1499. [go to PubMed]

10. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775-780. [go to PubMed]

11. Smith-Bindman R, Chu PW, Miglioretti DL, et al. Comparison of screening mammography in the United States and the United Kingdom. JAMA. 2003;290:2129-2137. [go to PubMed]

12. Chi MTH, Glaser R, Farr MJ, eds. The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.

13. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41:110-120. [go to PubMed]

14. Graber ML. Diagnostic error in medicine: a case of neglect. Jt Comm J Qual Patient Saf. 2005;31:106-113. [go to PubMed]

15. Mamede S, Schmidt HG. The structure of reflective practice in medicine. Med Educ. 2004;38:1302-1308. [go to PubMed]

16. Redelmeier DA. Improving patient care. The cognitive psychology of missed diagnoses. Ann Intern Med. 2005;142:115-120. [go to PubMed]

Table

Issue: What is the complexity level of the decision?
  Umpires: Binary—it's a strike or a ball.
  Doctors: There are thousands of diseases and syndromes, but typically the number of reasonable choices is less than 10.

Issue: How well do we do? What is the ACTUAL error rate?
  Umpires: The stated accuracy rate for Major League Baseball umpires is 92%–94%.
  Doctors: 10% or more of autopsies disclose important discrepancies that would have changed the clinical care or prognosis.(1) Studies looking at specific clinical conditions find an error rate of approximately 10%.(2)

Issue: How well do we THINK we do? What is the perceived accuracy rate?
  Umpires: Anecdotally, better than the truth would support.
  Doctors: Much better than reality. Most clinicians can't recall a diagnostic error they themselves have made. While they are keenly aware that diagnostic error exists, they believe errors are made by other physicians, less careful or skilled than themselves.

Issue: What are the consequences of error?
  Umpires: Typically, no impact. Rarely, errors lead to lost games, a losing series, or career changes.
  Doctors: Typically, no impact—the error is inconsequential or is not discovered. Rarely, the error may cause injury or death.

Issue: What types of cognitive processes are used to make the decision?
  Umpires: The umpire integrates his perception of the ball's path in the context of his knowledge of the strike zone, all interpreted automatically (subconsciously).
  Doctors: Most patient problems are very familiar to clinicians. Physicians integrate their perception of the facts in the context of their medical knowledge base. This occurs automatically (subconsciously) and involves recognizing patterns (schema) they have seen before.

Issue: What factors detract from perfection?
  Both: Stress, fatigue, distractions, affective factors, and the inherent shortcomings of automatic processing (bias).

Issue: Can the error rate be reduced?
  Umpires: Possibly—the QuesTec system is providing feedback to umpires to improve performance and calibration.
  Doctors: Possibly—avenues for improvement exist but are unproven (decision support, feedback, "cognitive de-biasing").

Sidebar


Reflective Practice: Pause for a Moment During the Final Common Pathway

A 74-year-old woman comes to the emergency department with several hours of abdominal pain. She describes the pain as a burning sensation in the mid-abdomen, pointing to the area just below her epigastrium. She took some Mylanta, which helped a little bit. She says the pain is just like the pain she had when she had her peptic ulcer attack last year. The patient admits to having stopped taking her omeprazole several months ago because she forgot what it was for.

On exam, the vital signs are normal except for tachycardia. The epigastrium is not particularly tender, and there are no masses. Her white blood cell count is normal, but her hemoglobin and hematocrit are significantly lower than they were a few months earlier.
Standard Practice:
  Hmmm. The patient is usually right. I bet her ulcer is back! She should have stayed on that omeprazole. Anyway, she seems stable enough now. I'll admit her to a medicine ward, follow her hematocrit, restart the omeprazole, and ask the GI folks to see her in the morning...

Reflective Practice:
  Hmmm. The patient is usually right. I bet her ulcer is back! She should have stayed on that omeprazole. I'll admit her to a medicine ward, follow her hematocrit, restart the omeprazole, and ask the GI folks to see her in the morning... I wonder if there is anything else I should be thinking of? What things can I not afford to miss? Maybe I should get a CT scan before she leaves the ED to make sure her pain isn't from something else...

The Next Morning

Standard Practice:
  What do you mean she is unresponsive? Are you sure that BP is correct? There is no way her hematocrit dropped to 20! Call the surgeons! Call my lawyer!

Reflective Practice:
  Wow—I am SO glad I stopped for a minute and got that CT scan! She did well in surgery, and her leaking aortic aneurysm was repaired before the leak got any bigger.