
Getting the Diagnosis Both Right and Wrong

Olson AP. Getting the Diagnosis Both Right and Wrong. PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2019.


Andrew P. Olson, MD | September 25, 2019

The Case

A 27-year-old woman with a history of acute myeloid leukemia was sent to the emergency department (ED) from the outpatient oncology infusion center for shortness of breath after receiving chemotherapy. On arrival in the ED, the patient was hypotensive, and intravenous fluids were rapidly administered. Laboratory tests showed acute kidney injury with markedly elevated levels of potassium, phosphate, and uric acid. This constellation of findings was highly suggestive of tumor lysis syndrome—metabolic abnormalities caused by the rapid destruction of cancer cells by chemotherapy drugs. The patient's oncologist was contacted and agreed with this provisional diagnosis.

The decision was made to start emergent hemodialysis to address the electrolyte abnormalities. The patient's blood pressure improved with fluids. Further laboratory results showed neutropenia (a low neutrophil count), and a chest radiograph showed a right-sided infiltrate. After a dialysis catheter was placed, the patient began experiencing worsening dyspnea and was emergently intubated. She was then transferred to the intensive care unit (ICU). Consultants from nephrology, oncology, and intensive care had all seen the patient while she was in the ED, and the consultants and primary team agreed on the need to treat tumor lysis syndrome with hemodialysis and other measures.

Six hours after admission, the patient's blood pressure began to drop, requiring further intravenous fluids as well as vasopressor administration. Despite aggressive measures to maintain her circulation, her blood pressure continued to fall, and a code blue was called. After 45 minutes of resuscitative efforts, the patient never regained spontaneous circulation and was pronounced dead less than 9 hours after presentation.

While the patient was being coded, the laboratory called the ICU to report that the patient's blood cultures were positive for gram-positive cocci. Review of the laboratory results and chart revealed that the ICU team had ordered blood cultures and broad-spectrum antibiotics while the patient was in the ED, but the antibiotics were not administered until after she was transferred to the ICU. An autopsy confirmed that although the patient did have tumor lysis syndrome, the cause of death was septic shock due to disseminated Staphylococcus aureus infection.

A formal case review was conducted, and the case was discussed at the departmental Morbidity and Mortality conference. The review revealed missed opportunities to detect sepsis at multiple points in the patient's care. Antibiotic orders were placed, but the ED nurse did not see them because she was busy assisting with the multiple procedures the patient required while in the ED. Moreover, these orders were not placed until more than 3 hours after the patient had presented. The major cause of the missed diagnosis of sepsis was thought to be the focus on tumor lysis syndrome: the primary team and consultants prioritized managing this oncologic emergency, with the result that a concomitant serious infection was not considered.

The Commentary

by Andrew P. Olson, MD

The art of diagnosis involves collecting and then synthesizing complicated, complex, and sometimes conflicting information into an explanation for a patient's health problem that then informs further diagnostic, prognostic, and therapeutic decision-making.1 It is an often idiosyncratic process made difficult by many factors, including the chaos of the health care environment, the overwhelming number of potential diagnoses for a given patient concern, and the varied presentations of even common conditions—such as sepsis in this case. In addition, even though diagnosis is often portrayed in educational programs and case presentations as a linear process, it is much less linear and methodical than it appears. As this case demonstrates, one key reason is that clinicians must often make decisions about further diagnostic evaluation and treatment while substantial uncertainty remains.2

How does diagnostic reasoning occur? Two key theoretical constructs are helpful in beginning to answer this question: dual process theory and situated cognition.3-6 Although these concepts seem esoteric when considering a real case such as the one presented here, they have very real clinical consequences that must be considered when aiming to understand and effectively improve the diagnostic process.

Dual process theory suggests that humans make decisions through one of two basic processes: nonanalytic and analytic decision-making.6 Nonanalytic decision-making relies on rapid, subconscious recognition of previously encountered patterns to arrive at a decision, while analytic reasoning generates and tests potential hypotheses in a slower, more effortful, and deliberate manner. Our daily lives—and our medical practice—rely on both processes, although the vast majority of our decision-making is nonanalytic. Said differently, most of our decisions are made by rapidly recognizing a situation, comparing it to previous experience, and deciding. These experience-based decisions rely on heuristics: "rules" that are an amalgamation of the experiences we accrue over time. These heuristics are often highly accurate, but they can also lead to faulty decisions. The term "cognitive bias" is often used to describe a heuristic that, when applied in a specific situation, led to a faulty decision. More than 100 such biases have been described in the literature, and we can all easily identify situations in our daily lives in which they affect our decision-making.

In general, three main types of cognitive biases lead to diagnostic errors: (i) hasty judgments, (ii) biased judgments, and (iii) distorted probability estimates. Hasty judgments generally involve arriving at a decision before all the necessary information is in place and failing to reconsider that decision when new, contradictory information arises. Biased judgments occur when a decision is affected by an individual's personal feelings or experiences about something in the situation, such as a patient demographic feature, emotional stress, or the pursuit of an "exciting" diagnosis. Finally, distorted probability estimates lead clinicians to overestimate or underestimate the prevalence of a disease in a given population or to inaccurately weigh the value of a piece of diagnostic information.

Analytic reasoning, often a form of hypothetico-deductive reasoning, is the other major means by which humans make decisions. It involves the conscious, effortful generation and testing of diagnostic hypotheses. Analytic reasoning plays an important role both in solving cases and in calibrating nonanalytic decision-making over time and with increasing experience.7

There is no doubt that cognitive biases play a role in diagnostic errors. However, heuristics also play a role in clinicians getting diagnoses right. Much of medical training is aimed at improving pattern recognition, and it is clear that expert clinicians use pattern recognition and other nonanalytic techniques successfully much of the time. Studies of strategies aimed solely at eliminating or decreasing the use of nonanalytic reasoning have been largely unsuccessful at improving diagnostic accuracy. Further, the context-specificity of diagnostic reasoning makes implementation of strategies combating cognitive bias ("debiasing") challenging—what works in one context may actually be detrimental in another.6,8 

Situated cognition is another concept that highlights the context in which clinicians make decisions and the significant impact that the situation has on a given clinical decision. The two main types of reasoning described above—nonanalytic and analytic—do not take place in a vacuum; they are profoundly influenced by context. Diagnosis often occurs as a coproductive activity between persons in specific ecosystems, and the interactions among these individuals, and between the individuals and the ecosystem, have significant effects on the diagnostic process. That is, one must consider the context in which decisions are made in order to understand how they are made. This social nature of the diagnostic process makes the theoretical construct of situated cognition readily applicable to diagnostic reasoning.3 The clinical environment is often chaotic, underresourced, and dominated by antiquated hierarchies that inhibit the free exchange of ideas, discourage questioning the fallible decisions of those in authority, and fail to harness the promise that emerging technologies can bring to the diagnostic process.9 While much of the literature on improving diagnosis has focused on the individual, the need to enhance teamwork was the leading recommendation of the landmark 2015 National Academy of Medicine report, Improving Diagnosis in Health Care.1

These two theories—dual process theory and situated cognition—provide a lens through which to learn from this case and consider strategies to avoid such diagnostic errors in the future. First, it is important to recognize that the clinicians' prompt identification of one condition—tumor lysis syndrome—likely occurred through nonanalytic pattern recognition, and appropriate treatment was initiated. At the same time, in the very same case, pattern recognition failed to identify a much more common condition: sepsis (here, due to Staphylococcus aureus). Taken together, this means that interventions to improve diagnostic reasoning must not simply discourage nonanalytic, pattern-based decision-making but should instead target common pitfalls in specific clinical situations. For example, rather than asking, "Am I falling victim to premature closure in this case?" it may be more helpful to ask, "What doesn't fit with my working diagnosis?" and "What if there is more than one thing going on?" It can be challenging to ask these questions in a busy clinical environment, but individuals, teams, and systems can likely develop methods to ask them systematically, especially when faced with a clinically deteriorating patient. For example, such questions could be built into a team-based clinical evaluation during a rapid response or escalation of care. While there are not yet specific evidence-based strategies to support such a suggestion, one could imagine the medical team engaging in a brief diagnostic time-out, led by the resident or attending, during which these questions are asked. Similarly, the electronic health record could pose such questions when clinical alerts are triggered.
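As a purely illustrative sketch (the class, function, and question names below are hypothetical assumptions for this commentary, not any real electronic health record vendor's API), one could imagine an alert-triggered diagnostic time-out working roughly like this:

```python
# Hypothetical sketch of a diagnostic time-out prompt triggered by a clinical
# alert. All names (PatientContext, diagnostic_timeout, TIMEOUT_QUESTIONS) are
# illustrative assumptions, not part of any real electronic health record API.
from dataclasses import dataclass, field


@dataclass
class PatientContext:
    working_diagnoses: list[str]  # the team's current leading diagnoses
    findings: list[str]           # notable labs, vitals, imaging results
    explained: set[str] = field(default_factory=set)  # findings the working diagnoses account for


# The generic time-out questions suggested above.
TIMEOUT_QUESTIONS = [
    "What doesn't fit with the working diagnosis?",
    "What if there is more than one thing going on?",
]


def diagnostic_timeout(ctx: PatientContext) -> list[str]:
    """Build the prompts shown to the team when a deterioration alert fires."""
    prompts = list(TIMEOUT_QUESTIONS)
    # Make contradictory information explicit: list every finding the current
    # working diagnoses do not account for.
    for finding in ctx.findings:
        if finding not in ctx.explained:
            prompts.append(
                f"Unexplained finding: {finding}. "
                f"Does it fit {', '.join(ctx.working_diagnoses)}?"
            )
    return prompts


if __name__ == "__main__":
    # Loosely modeled on this case: tumor lysis syndrome explains the metabolic
    # derangements but not the neutropenia, infiltrate, or refractory hypotension.
    ctx = PatientContext(
        working_diagnoses=["tumor lysis syndrome"],
        findings=["hyperkalemia", "hyperuricemia", "neutropenia",
                  "right-sided infiltrate", "hypotension"],
        explained={"hyperkalemia", "hyperuricemia"},
    )
    for prompt in diagnostic_timeout(ctx):
        print(prompt)
```

Run against this case, such a prompt would surface the neutropenia, infiltrate, and hypotension as unexplained by tumor lysis syndrome alone, which are exactly the findings that pointed toward concomitant sepsis.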

Second, we must consider the effect that teamwork and the clinical ecosystem have on the diagnostic process. In this case, some aspects of teamwork were clearly effective: rapid multispecialty consultation was obtained, urgent procedures were performed, and the acute care team interfaced with the outpatient care team. However, there is also a risk that team members may be overly deferential to a specific person because of authority, expertise, a previous patient–provider relationship, or other factors. This deference to authority—in its many forms—can lead to diagnostic errors. In this case, it is possible that team members deferred to the primary oncologist, and thus overemphasized the oncologic emergency at hand, rather than considering other diagnoses such as sepsis. Teamwork is most effective when all members of the team feel the psychological safety to question one another, share ideas, and present contradictory information. To prevent diagnostic errors, it will be important to systematize methods for engendering psychological safety among team members. There is a substantial body of literature supporting and describing strategies for improving teamwork in health care, although to date few, if any, have focused on diagnostic performance. Since teamwork programs are already well rooted in many health care systems and health professions education programs, it is likely they can be adapted to address diagnostic performance in addition to other aspects of care.

So how do we improve future decision-making to avoid diagnostic errors such as the tragedy in this case? Certainly, no clear road map exists, and multiple innovations are being implemented and evaluated. However, one of the most promising strategies to improve the decision-making of individuals and teams is to ensure that everyone knows the outcomes of as many cases as possible and has the chance to calibrate future decision-making based on those outcomes.10 Such feedback is fundamental to tuning both nonanalytic and analytic decision-making, as well as to helping teams operate more effectively in the clinical environment. In this case, it would be essential to ensure that all the providers involved were aware of the final diagnosis, since it is quite plausible that some of those involved early in the case would still consider tumor lysis syndrome the correct diagnosis. This can be accomplished through systematized routing of discharge summaries between providers, discussion at Morbidity and Mortality conferences, or other novel methods. However these discussions occur, it is key to ensure that everyone involved has a chance to learn.
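To make the idea concrete, here is a minimal, hypothetical sketch of such a feedback loop (the Encounter type and notify() stub are assumptions for illustration, not a description of any real routing system):

```python
# Illustrative sketch: close the diagnostic feedback loop by notifying every
# clinician who recorded a working diagnosis once the final diagnosis is known.
# Encounter and notify() are hypothetical stand-ins, not a real system's API.
from dataclasses import dataclass


@dataclass
class Encounter:
    clinician: str
    working_diagnosis: str


def notify(clinician: str, message: str) -> None:
    # Stand-in for secure messaging or discharge-summary routing.
    print(f"To {clinician}: {message}")


def close_feedback_loop(encounters: list[Encounter], final_diagnosis: str) -> None:
    """Tell each clinician the final diagnosis and whether it matched theirs."""
    for e in encounters:
        # A crude string check; a real system would compare coded diagnoses.
        matched = e.working_diagnosis.lower() in final_diagnosis.lower()
        verdict = "was part of" if matched else "differed from"
        notify(
            e.clinician,
            f"Final diagnosis: {final_diagnosis}. Your working diagnosis "
            f"('{e.working_diagnosis}') {verdict} the final diagnosis.",
        )


if __name__ == "__main__":
    close_feedback_loop(
        [
            Encounter("ED physician", "tumor lysis syndrome"),
            Encounter("Oncology consultant", "tumor lysis syndrome"),
            Encounter("ICU team", "septic shock"),
        ],
        final_diagnosis="septic shock from disseminated S. aureus, with tumor lysis syndrome",
    )
```

In a real system the calibration signal would of course be richer than a string match, but even this minimal loop ensures that clinicians who saw the patient early learn how the case actually resolved.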

Cases such as this are tragic—and common. It is a moral imperative to learn from them, applying a rigorous understanding of theoretical principles to prevent the same errors from recurring.

Take-Home Points

  • Nonanalytic decision-making (pattern recognition) is an important component of diagnostic reasoning. This decision-making can be improved over time by ensuring that people know the outcomes of their decisions. Systems should seek to close as many open feedback loops as possible.
  • The interactions between clinicians, patients, and the clinical environment have substantial impact on diagnostic reasoning, and it is important to address the ecosystem of care when analyzing diagnostic errors.
  • Systems should consider methods to highlight potentially contradictory information and question whether there could be additional diagnoses to be considered, especially in the setting of clinical decompensation.

Andrew P. Olson, MD
Associate Professor of Medicine and Pediatrics
University of Minnesota Medical School
Minneapolis, MN

References

1. National Academies of Sciences, Engineering, and Medicine, Committee on Diagnostic Error in Health Care. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.

2. Bhise V, Rajan SS, Sittig DF, Morgan RO, Chaudhary P, Singh H. Defining and measuring diagnostic uncertainty in medicine: a systematic review. J Gen Intern Med. 2018;33:103-115.

3. Durning SJ, Artino AR. Situativity theory: a perspective on how participants and the environment can interact: AMEE Guide no. 52. Med Teach. 2011;33:188-199.

4. Holmboe ES, Durning SJ. Assessing clinical reasoning: moving from in vitro to in vivo. Diagnosis (Berl). 2014;1:111-117.

5. Monteiro SM, Norman G. Diagnostic reasoning: where we've been, where we're going. Teach Learn Med. 2013;25(suppl 1):S26-S32.

6. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med. 2017;92:23-30.

7. Croskerry P, Nimmo GR. Better clinical decision making and reducing diagnostic error. J R Coll Physicians Edinb. 2011;41:155-162.

8. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual Saf. 2017;26:104-110.

9. Linzer M, Poplau S, Brown R, et al. Do work condition interventions affect quality and errors in primary care? Results from the Healthy Work Place Study. J Gen Intern Med. 2017;32:56-61.

10. Trowbridge RL, Olson APJ. Becoming a teacher of clinical reasoning. Diagnosis (Berl). 2018;5:11-14.

 

This project was funded under contract number 75Q80119C00004 from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services. The authors are solely responsible for this report's contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Readers should not interpret any statement in this report as an official position of AHRQ or of the U.S. Department of Health and Human Services. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this report.
