Chapter 60
Cognitive Biases and Mitigation Strategies in Emergency Diagnosis


John Bedolla1 and Jesse M. Pines2,3


1 Department of Surgery and Perioperative Care, Dell Medical School, University of Texas, Austin, TX, USA


2 US Acute Care Solutions, Canton, OH, USA


3 Department of Emergency Medicine, Drexel University, Philadelphia, PA, USA


Introduction


Emergency medical decision‐making is a complex process that requires intense use of multiple brain functions. Human brain function has a performance envelope, performance limits, and failure modes. Cognitive biases are failure modes that negatively impact clinical reasoning, increase medical errors, and compromise patient safety.1–3 Fortunately, cognitive biases are well understood and amenable to mitigation. This chapter focuses on the cognitive biases most salient to emergency medicine and mitigation strategies to reduce their effect in clinical practice.


Scope of the problem


Cognitive bias is estimated to play a part in 36–77% of diagnostic errors.4,5 Clinical reasoning is a complex task, learned over many years as a self‐aware and self‐critical process that mitigates bias.6,7 Nonetheless, difficult diagnostic scenarios, personal factors, and operational stressors can defeat the bias‐mitigating effects of medical training and result in medical error.8–13


Dual processing, executive function, and cognitive errors


In the model of Kahneman and Tversky,14 human thinking is a dual process: System 1 and System 2. System 1 is the default: it is fast, automatic, and works through heuristics and instinct. System 2 is slow, effortful, and expensive in terms of the mental energy required; it is recruited only when necessary. System 1 is remarkably efficient but susceptible to errors. For example, System 1 reasoning can be erroneously influenced by emotion, fatigue, intrinsic cognitive biases, and personality.15 System 2 prevents System 1 errors, but there is a limit on how frequently it can be recruited without slowing down or exhausting the clinician’s cognitive apparatus. A long and complex resuscitation of a critically ill patient is an example of a case that calls on System 2 for a prolonged period; this heavy System 2 utilization often leaves the physician mentally depleted for some time afterward (Figure 60.1).


Figure 60.1 Dual‐process decision model.


(Adapted from [15].)


The medical encounter is a series of complex tasks, and executive function assigns cognitive resources to complete them. For each emergency medical visit, there is an ideal balanced allocation of System 1 and System 2 resources to achieve both efficiency and safety. It is theorized that System 1 runs automatically in the background, constantly and uncritically inputting into executive function. Executive function then judges if and how much System 2 thinking is needed to achieve the optimal balance. Most of the time executive function equilibrates this process well.


Cognitive errors occur when executive function’s equilibrating process is overpowered, bypassed, suppressed, or de‐calibrated.16 Most cognitive biases overpower the equilibration process by amplifying the force and salience of System 1 input.14,17 Executive function can also be de‐calibrated or suppressed by operational stressors, burnout, sleep deprivation, fatigue, and deconditioning.17,18 Operational stressors, personal stressors, or simply being too busy for a prolonged period can leave the clinician too exhausted to recruit System 2.19–21 In this setting, as in the Libby Zion case, the clinician can default to automatic and uncritical responses, with catastrophic results.


Prospect theory


Humans do not experience negative and positive consequences equally. Kahneman and Tversky22 showed that negative experiences are more salient than equivalent positive experiences, a phenomenon termed “loss aversion.” For example, the negative feeling around losing 1 dollar is about 2.6 times larger than the positive feeling of gaining 1 dollar.22 By nature of the relationship between the amygdala, hippocampus, and limbic system, negative experiences are instinctually more salient than positive ones. From the perspective of evolutionary psychology, it is not hard to see how cognitive mechanisms that prompt animals to avoid previously negative experiences are associated with better survival, thus retaining the encoding genes over generations. For clinicians, the memory of a missed diagnosis is more painful, more salient, and longer‐lasting than the memory of a good catch is pleasant.23
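For readers who want the quantitative form, loss aversion is usually expressed through the prospect‐theory value function. The sketch below uses the standard Kahneman–Tversky functional form, taking the 2.6× factor quoted above as the loss‐aversion coefficient λ; the curvature exponents are illustrative assumptions rather than values from this chapter.

\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0 \ \text{(gains)} \\
-\lambda\,(-x)^{\beta}, & x < 0 \ \text{(losses)}
\end{cases}
\qquad \lambda \approx 2.6, \quad \alpha, \beta \approx 0.88
\]

On this reading, gaining 1 dollar has value v(+1) ≈ +1, while losing 1 dollar has value v(−1) ≈ −2.6: the loss looms roughly two and a half times larger than the equivalent gain.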


Uncertainty and risk


Humans are not innately adept at quantifying risk but do respond to risk thresholds.24 Human beings are generally risk‐avoidant, and they also experience uncertainty itself as a risk; the aversion to uncertainty is called ambiguity aversion. The following example illustrates the difference: avoiding a snake is risk aversion, while avoiding a pile of leaves because you are not sure there is no snake in it is ambiguity aversion. Human beings will often prefer to take on a small known risk to avoid complete (or Knightian) uncertainty. Compare the evaluation of chest pain, where the most common serious conditions, acute coronary syndrome (ACS)25 and pulmonary embolism (PE),26,27 have well‐worked‐out pathways, with the evaluation of dizziness in an elderly patient, where a myriad of conditions could be causative and the workup tends to be highly variable.28 The next time you work with a clinician who is slow to pick up a dizziness chart, or tends to do extensive workups for each one, realize it is most likely due to ambiguity aversion.29,30


Mitigation strategies


Cognitive errors can be reduced with debiasing strategies, cognitive best practices, attention to global mental function, and engineering of the emergency department (ED) to support and protect executive function. Below we focus on debiasing strategies and cognitive best practices.31–44


Debiasing strategies and cognitive best practices aim at decreasing bias and diagnostic error.45 Below, we discuss the following categories: bias inoculation, cognitive best practices, mental workflow debiasing, traditional debiasing strategies and best practices, nudges, forcing strategies, and bias‐specific interventions.


Bias inoculation includes education about bias, which has been shown to enhance self‐awareness and decrease the effects of bias.32 Education can take many forms, including selected readings, experiential review of cases in case conference, departmental meetings, morbidity and mortality conferences, and simulations. The most effective educational experiences are likely to be experiential.46


Cognitive best practices and mental workflow debiasing techniques include self‐checking, structured data collection, the Diagnostic Time Out, and the “Not Yet Diagnosed” strategy. Mindful self‐reflection during the medical encounter activates System 2 thinking to decrease bias effects and enhance reliability.39 Structured data collection ensures that a core set of information is obtained for better decision‐making; the history/ECG/age/risk factors/troponin (HEART) score is one example, and with routine practice clinicians become more proficient and quicker at applying it, even under time constraints.47,48 A thorough history and physical examination is another example of structured data collection.49 A Diagnostic Time Out is a brief cognitive pause performed before making critical decisions. It enhances reliability by giving the clinician mental space to activate System 2, respool, review, and reflect on the clinical encounter, and make sure no missing data, bias, or other error is present. “Not Yet Diagnosed” is a strategy of withholding a final diagnosis until all the data are obtained and the clinician has had time for a Diagnostic Time Out. However, one controlled trial exploring diagnostic error did not demonstrate any medical error reduction with System 1 versus System 2 thinking.50 At first these structured and reflective practices can result in slower decision‐making, which can be a major problem in the time‐constrained setting of the ED; with practice, however, the speed and quality of decision‐making improve.51–53
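As an illustration of structured data collection, the sketch below encodes the five HEART elements as an explicit, repeatable checklist rather than an ad hoc gestalt. The point bands and risk tiers shown are the commonly cited ones and should be verified against local protocols before any clinical use; the function and field names are hypothetical, not part of any published tool.

```python
# Illustrative sketch only: the HEART score treated as structured data collection.
# Point bands and risk tiers are the commonly cited ones; verify against local
# protocols before any clinical use. Names and structure are hypothetical.
from dataclasses import dataclass

@dataclass
class HeartInputs:
    history_points: int        # clinician judgment: 0 slightly, 1 moderately, 2 highly suspicious
    ecg_points: int            # 0 normal, 1 nonspecific changes, 2 significant ST deviation
    age_years: int
    risk_factor_count: int     # e.g., hypertension, diabetes, smoking, hyperlipidemia, family history
    known_atherosclerosis: bool
    troponin_ratio: float      # measured troponin divided by the upper limit of normal

def heart_score(p: HeartInputs) -> tuple[int, str]:
    age_pts = 2 if p.age_years >= 65 else 1 if p.age_years >= 45 else 0
    risk_pts = 2 if (p.known_atherosclerosis or p.risk_factor_count >= 3) \
        else 1 if p.risk_factor_count >= 1 else 0
    trop_pts = 2 if p.troponin_ratio > 3 else 1 if p.troponin_ratio > 1 else 0
    total = p.history_points + p.ecg_points + age_pts + risk_pts + trop_pts
    tier = "low (0-3)" if total <= 3 else "moderate (4-6)" if total <= 6 else "high (7-10)"
    return total, tier

# Example: a 58-year-old with a moderately suspicious history, normal ECG, two
# risk factors, and a normal troponin scores 3 (low risk) under these assumptions.
print(heart_score(HeartInputs(1, 0, 58, 2, False, 0.4)))
```

The value of writing the score this way is that every element must be supplied before a risk tier is produced, which is exactly the “core set of information” property described above.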


Traditional strategies and best practices start with the classic history and physical examination, which, when well performed, remains one of the best debiasing techniques available in emergency medicine. The key is to avoid shortcuts and to extract the full value from the data and the process. The differential diagnosis is a classic tool for ensuring that several possibilities are considered before making a diagnosis. Life‐long learning in evidence‐based practices enhances System 2 thinking and provides a statistical framework for sound decisions.54 “Rule out worst case,” considering the worst‐case scenario for a chief complaint, activates System 2 thinking. Checklists ensure that crucial elements of the encounter are performed prior to disposition.55 Colleague consultation, when available, is among the most valuable strategies for ensuring reliability.56–58


Nudges and forcing strategies, which include “Until Proven Otherwise,” prompts, soft stops, and hard stops, are also debiasing techniques. “Until Proven Otherwise” is a traditional forcing strategy that requires a serious diagnosis to be actively excluded. Examples include “chest pain and neurological symptoms equal aortic dissection until proven otherwise,” and “scrotal pain in a young male is torsion until proven otherwise.” Care should be exercised in using this strategy with clinicians who have low risk tolerance, because it may lead to over‐testing. Prompts and soft stops: the electronic health record (EHR) can enhance the diagnostic process with soft stops. For example, if a patient is febrile and tachycardic, the EHR can pop up a sepsis screening alert. However, alerts should be used judiciously, as alert fatigue can occur; pop‐up fatigue can itself cause errors and has been shown to be a serious issue in EHR design.59,60 If not used carefully, EHR prompts can decrease patient safety by creating inefficiencies.61 Hard stops are usually departmental policies that engage System 2 thinking before proceeding. Examples include the Discharge Time Out, mandatory rechecks of abnormal vital signs, and mandatory checklists.62,63
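To make the soft‐stop idea concrete, the sketch below shows how a simple sepsis‐screening prompt might be expressed as an EHR rule. The vital‐sign thresholds loosely follow SIRS‐style criteria and are illustrative assumptions, not a validated screen; the defining feature of a soft stop is that the clinician can acknowledge the prompt and proceed.

```python
# Minimal sketch of a "soft stop" sepsis-screening prompt, as described above.
# Thresholds loosely follow SIRS-style criteria and are illustrative assumptions,
# not a validated screening rule. A soft stop can be dismissed; a hard stop cannot.
from typing import Optional

def sepsis_soft_stop(temp_c: float, heart_rate: int, resp_rate: int) -> Optional[str]:
    criteria_met = sum([
        temp_c >= 38.0 or temp_c <= 36.0,   # fever or hypothermia
        heart_rate > 90,                     # tachycardia
        resp_rate > 20,                      # tachypnea
    ])
    if criteria_met >= 2:
        # Soft stop: display a prompt, but let the clinician acknowledge and continue.
        return "Possible sepsis: consider a sepsis screen, lactate, and blood cultures."
    return None  # no prompt, to limit alert fatigue

# Example: a febrile, tachycardic patient triggers the prompt.
print(sepsis_soft_stop(temp_c=38.6, heart_rate=112, resp_rate=18))
```

Requiring two or more criteria before firing, rather than alerting on any single abnormal vital sign, is one simple way to keep the number of pop‐ups, and hence alert fatigue, down.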


Current evidence suggests that debiasing training can significantly reduce bias and enhance diagnostic decision‐making.64,65


Prominent cognitive biases and interventions


Based on national claims analysis data, one review identified the cognitive biases most salient to emergency medicine, with mitigation strategies for each type (Table 60.1).3,4 However, most of these recommendations are theoretical, lack compelling proof‐of‐concept in real‐world emergency medicine, and have therefore been criticized by some experts as “myths of general thinking skills.”66


Premature closure


Premature closure occurs when the information obtained in the first few seconds or minutes of the encounter points firmly in the direction of a particular diagnosis, and other probabilities are discarded without further contemplation. With premature closure the clinician fails to obtain a full picture, including information relevant to other possibilities – this is called an “unpacking error.”


The fast pace, various operational stressors, and frequent interruptions of the ED can abbreviate the history and physical exam, creating an obstacle to System 2 recruitment.67 Operational stressors (workload, patient acuity, and patient complexity) also deplete mental resources, decreasing executive function and the recruitment of System 2 thinking.68–70


Premature closure occurs most commonly in the data acquisition phase of the medical encounter. Synthesis is often flawed because crucial data points are not obtained. Once started, it is difficult to reverse as there is a sunk cost bias (see below) against going back and asking more questions once critical decisions are made. As an example, a patient with chest pain has ST depression on electrocardiogram (ECG). The clinician sees typical coronary pain with dyspnea and does not inquire about travel history or PE risk factors. Cardiac enzymes are elevated, and the patient is taken urgently to the catheterization lab. Shortly before needle insertion, the patient dies from a massive pulmonary embolus.


Along with the most common diseases, there are less common but still significant mimics. The key to finding the mimics is to avoid cutting short the standard structured history and physical exam. Every major chief complaint has one most common severe cause and 2–3 less common causes. As time and acuity permit, the clinician should screen for mimics through a complete history of present illness, augmented with the past medical history, recent events, and a review of systems targeted to the body area involved and 2–3 proximal body systems. Time permitting, it is important to perform a minimum standard every time and not to skip steps simply because the diagnosis feels clinched.


Table 60.1 Debiasing strategies


Source: Adapted from [4].

Strategy | Purpose | Examples of potential biases addressed
History and physical exam | Systematic gathering of data | Unpacking principle; ascertainment bias
Differential diagnosis | Promotes consideration of diagnostic possibilities other than the most likely | Anchoring and adjustment; search satisficing; premature diagnostic closure; availability; representativeness; confirmation bias
“Not yet diagnosed” strategy | Keeps open diagnostic possibilities | Premature closure; diagnostic momentum; confirmation bias
Clinical prediction rules | Forces scientific, statistical assessment of signs and symptoms and uses other data to develop numerical probabilities of a disease or outcome | Base rate fallacy; errors of reasoning; errors in estimating probabilities
Evidence‐based medicine | Establishes an imperative for objective scientific data to support decision‐making | Many biases
Checklists | Ensures specific, important issues have been considered, especially under conditions of complexity, stress, and fatigue | Anchoring and adjustment; availability; memory failures
Mnemonics | Protects against memory failures, ensuring a full differential diagnosis is considered | Availability; anchoring and adjustment; premature closure
Pitfalls | Alerts inexperienced clinicians to predictable failures | Many biases
Rule out worst‐case scenarios | Ensures the most serious condition in a particular clinical setting is not missed | Anchoring and adjustment; premature diagnostic closure
Until proven otherwise | Ensures a particular diagnosis cannot be made unless other specific diagnoses have been excluded | Anchoring; confirmation bias; diagnostic momentum; premature closure
Caveats | Offers discipline‐specific warnings to ensure important rules are followed to avoid missing significant conditions | Many biases
Red flags | Salient, specific signs and symptoms in the context of commonly presenting conditions, to avoid missing serious conditions | Anchoring; confirmation bias; diagnostic momentum; premature closure

Before disposition, a brief cognitive pause to stop and think, a “Discharge Time Out,” is warranted. In the Discharge Time Out, the clinician should mobilize System 2 to briefly respool the key elements of the history, physical exam, and test results, and compare the “fit” of the final diagnosis to the overall picture.63,71,72


Satisficing, also called “search satisficing,” can lead to premature closure. Satisficing occurs when the clinician settles on the first diagnosis that fits “well enough” with the clinical scenario. Example: settling on a positive urinalysis as the cause of altered mental status in an older patient. Confirmation bias occurs when the clinician looks for information that confirms the original impression and ignores data that does not.73


Diagnostic momentum


Diagnostic momentum2 occurs when a patient receives an early label or diagnosis that continues uncritically through the episode of care and inhibits consideration of other possibilities. Diagnostic momentum differs from premature closure in that it is more of a passive uncritical acceptance of early information and often involves a social dimension of the influence of other clinicians’ early opinions and framing of the clinical picture. Diagnostic momentum can start even before the patient arrives in the ED or as late as well into the hospital admission.74


The following case illustrates diagnostic momentum. A 6‐year‐old boy is sent from a clinic for “appendicitis.” The exam reveals minimal discomfort in the right lower quadrant, and labs and ultrasound are negative. The child is sent home and returns the next day with torsion and a nonsalvageable right testicle. In this case, the diagnostic momentum led the clinician to focus solely on ruling appendicitis in or out.


Diagnostic momentum can be mitigated with strategies that help the clinician remain open to other possibilities. These include the differential diagnosis, the Discharge Time Out, and considering the worst‐case scenario. In this case, one of the worst‐case scenarios in right lower quadrant pain in young boys is testicular torsion.


Anchoring


Anchoring is often incorrectly defined in the medical literature as “hanging one’s hat” on a diagnosis. More precisely, anchoring is an inappropriate setting of the range of possibilities
