- Research article
- Open Access
A portable mnemonic to facilitate checking for cognitive errors
BMC Research Notes, volume 9, Article number: 445 (2016)
Although a clinician may intend to carry out strategies to reduce cognitive errors, this intention may not be realized, especially under heavy workloads or following a period of interruptions. Implementing strategies to reduce cognitive errors in a clinical setting may be facilitated by a portable mnemonic in the form of a checklist.
A 2-stage approach using both qualitative and quantitative methods was used in the development and evaluation of a mnemonic checklist. In the development stage, a focus-driven literature search and a face-to-face discussion with a content expert in cognitive errors were carried out. Categories of cognitive errors addressed and represented in the checklist were identified. In the judgment stage, the face and content validity of the categories of cognitive errors represented in the checklist were determined. This was accomplished through coding responses of a panel of experts in cognitive errors.
From the development stage, a preliminary version of the checklist in the form of four questions represented by four specific letters was developed. The letter ‘T’ in the TWED checklist stands for ‘Threat’ (i.e., ‘is there any life or limb threat that I need to rule out in this patient?’), ‘W’ for ‘Wrong/What else’ (i.e., ‘What if I am wrong? What else could it be?’), ‘E’ for ‘Evidences’ (i.e., ‘Do I have sufficient evidences to support or exclude this diagnosis?’), and ‘D’ for ‘Dispositional factors’ (i.e., ‘is there any dispositional factor that influences my decision?’). In the judgment stage, the content validity of most categories of cognitive errors addressed in the checklist was rated highly in terms of relevance and representativeness (with modified kappa values ranging from 0.65 to 1.0). Based on the coding of responses from seven experts, the checklist was shown to be sufficiently comprehensive to activate the implementation intention of checking for cognitive errors.
The TWED checklist is a portable mnemonic checklist that can be used to activate implementation intentions for checking cognitive errors in clinical settings. Its mnemonic structure eases recall, and its brevity makes it portable for quick application in every clinical case until its use becomes habitual in daily clinical practice.
Striving to make an accurate diagnosis using sound clinical decision-making skills is undoubtedly the goal of every clinician. In reality, though, diagnostic error rates range from 5 to 15 % [1, 2]. Although the root causes of diagnostic errors are often multi-factorial, a large proportion of these errors have cognitive components [3, 4]. With sufficient training and experience, a clinician acquires a large repertoire of illness representation models known as ‘illness scripts’. Illness scripts allow a clinician to make fast and accurate clinical decisions via pattern recognition [5–7]. However, while pattern recognition results in accurate diagnoses most of the time [2, 8], there are occasions when it may derail the clinician into cognitive errors [9, 10] such as anchoring bias. Anchoring bias occurs when the illness “pattern” recognized at the outset of the diagnostic process leads to the clinician’s fixation on this initial impression, so much so that the clinician fails to adjust it even in the light of contradictory additional data. Numerous strategies have been suggested for overcoming these cognitive errors [11, 12], but a key question remains whether the clinician implements these strategies, particularly in a busy clinical setting.
According to Gollwitzer [13, 14], people who are absorbed in their on-going tasks may find it difficult to implement their intended goals. In particular, clinicians absorbed in highly demanding clinical tasks, such as managing emergency cases or attending to multiple patients at the same time, may simply not remember to carry out the intention of minimizing cognitive errors. In other words, merely having the intention is not sufficient to ensure its implementation. One reason for this intention–implementation gap is that the clinician forgets to act on the intended task. The ability to remember to act on a postponed intended task following a period of interruptions is known as prospective memory [15, 16]. Unfortunately, during a clinical emergency, time and cognitive resources are limited. When under stress, the clinician’s memory becomes increasingly unreliable, leading to prospective memory failure, especially if the intended task is not part of one’s routine activities [14, 17–19]. Nonetheless, the intended task of reducing cognitive errors becomes more attainable when a person explicitly incorporates specific implementation intentions into their clinical work. Implementation intentions are the cognitive “if–then” steps that serve to bridge the intention–implementation gap [13, 14].
Using a checklist can activate this “if–then” step of implementation intention in a timely manner. For example, the clinician may say, “If I have arrived at an initial diagnostic impression, then I must remember to ask myself these questions aimed at reducing cognitive errors”. The linchpin of a mnemonic checklist is that it eases recall and overcomes the barrier of prospective memory failure, particularly in a stressful clinical environment [15, 16].
This paper describes the two-stage approach in the development and evaluation of such a mnemonic checklist with the objective of aiding prospective memory in checking cognitive errors. Both qualitative and quantitative research methods were incorporated. Institutional research and ethics committee approval was obtained prior to starting this study.
To quantitatively assess the content validity of the TWED checklist, ten senior emergency physicians with more than 5 years’ experience were invited as judges; eight of them responded. These judges were first given instructions for the evaluation task, which included the use of an assessment form to evaluate, on four-point Likert scales, the representativeness of the categories of cognitive errors in the respective quadrants of the TWED checklist as well as their relevance in a clinical setting. The judges performed their evaluations independently. They were between 35 and 45 years old (mean age = 38.5, SD = 3.16 years); 5 of the 8 (62.5 %) were male; and their clinical experience ranged from 9 to 19 years (mean = 12, SD = 3.16 years).
To verify the face validity and applicability of the checklist, seven experts on cognitive errors in clinical decision making were invited. They were invited via email, and all seven agreed to participate. Their expertise and clinical positions are listed in Additional file 1.
In the development stage, a focus-driven search was conducted via Web of Science and Google Scholar using the keywords “diagnostic error” OR “cognitive error” OR “cognitive bias” AND “checklist” to answer the following questions: Has any classification or categorization of common cognitive errors in the clinical setting been developed? (These categories of cognitive errors would then be used as the basis for developing this mnemonic checklist.) Have any comparable checklists aimed at reducing diagnostic errors already been developed and published? Are these checklists in a mnemonic format? A mnemonic-format checklist is one in which specific letters or keywords represent the items in the checklist in order to ease prospective memory, whereas a non-mnemonic checklist lists its items without incorporating any memory aid that can activate implementation intentions.
The search was limited to English-language articles from 2008 onwards. Only articles describing some form of classification or categorization of cognitive errors, and articles describing checklists aimed at reducing cognitive errors in a clinical setting, were selected. Articles that merely describe individual cognitive errors without any form of classification were excluded, as were articles addressing cognitive errors in non-clinical settings. After the categories of common cognitive errors contributing to diagnostic errors were identified, the checklist was drafted, and several sessions of face-to-face discussion with a content expert (PC) in cognitive errors were carried out to verify the importance of these categories in the clinical setting. This content expert was chosen based on his contributions in the area of cognitive errors in clinical decision making, including some of the articles referenced here [1, 8, 10, 11]. One of the authors (KSC) first contacted this content expert and then spent a period of 3 months with him learning how he conducted education and training in cognitive errors in clinical decision making and efforts to minimize them.
In the judgment stage, the content validation and the applicability of the checklist were determined (Table 1). To assess the content validity of the categories of cognitive errors represented in the checklist, the content validity index (CVI) and the modified kappa were used. In particular, the representativeness and relevance of these categories of cognitive errors were determined. The CVI for relevance and representativeness is defined as the proportion of judges who rate the item with a score of 3 or 4 on a four-point Likert scale (for representativeness: 1 = not representative of the quadrant, 2 = somewhat representative, 3 = quite representative, 4 = highly representative; and for relevance: 1 = not relevant at all, 2 = somewhat relevant, 3 = quite relevant, and 4 = highly relevant). The content validity for the entire checklist was then calculated by averaging the CVIs of the individual categories. Each of these categories was marked as an item in the CVI analysis. To account for chance agreement, a modified kappa statistic was calculated for each item as well.
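The CVI and modified kappa computations described above can be sketched as follows. This is a minimal illustration, not the authors' code: the item-level CVI is the proportion of judges rating 3 or 4, the chance-agreement probability follows the binomial formula of Polit, Beck and Owen (cited here as [58]), and the fair/good/excellent interpretation cutoffs follow the same source. Function names and the example ratings are our own.

```python
from math import comb

def item_cvi(ratings):
    """I-CVI: proportion of judges who rate the item 3 or 4."""
    agree = sum(1 for r in ratings if r >= 3)
    return agree / len(ratings)

def modified_kappa(ratings):
    """Chance-adjusted I-CVI (modified kappa, k*)."""
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)
    cvi = a / n
    # Probability of chance agreement: the binomial chance that exactly
    # `a` of `n` judges endorse the item if endorsement were a coin flip.
    pc = comb(n, a) * 0.5 ** n
    return (cvi - pc) / (1 - pc)

def interpret_kappa(k):
    """Conventional evaluation criteria for k*."""
    if k > 0.74:
        return "excellent"
    if k >= 0.60:
        return "good"
    if k >= 0.40:
        return "fair"
    return "poor"

# Hypothetical example: 8 judges, 7 of whom rate the item 3 or 4.
ratings = [4, 4, 3, 3, 4, 3, 4, 2]
print(item_cvi(ratings))                      # 0.875
print(round(modified_kappa(ratings), 3))      # 0.871
print(interpret_kappa(modified_kappa(ratings)))  # excellent
```

The scale-level CVI reported for the whole checklist would then simply be the mean of the individual item CVIs.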
To assess the face validity of the checklist as well as its applicability in clinical settings, a structured expert consultation via email communications was carried out. Responses from the experts were then coded by one of the authors (CH) using the NVivo for Mac software. Open coding was first performed in the software by repeated analytical readings and labeling keywords and phrases from these email responses, based on the five questions asked:
Are the important facets of preventing cognitive errors adequately covered in this mnemonic checklist?
When should this mnemonic checklist be used? Before formulating the initial diagnosis? Or after?
How often should this mnemonic checklist be used? For every case seen? Or only for selective cases? What types of cases, if selective?
Cognitive processes involved in medical decision making are highly complex. Does the use of a mnemonic checklist lead to oversimplification of the cognitive processes involved?
Is a mnemonic checklist mainly useful for novice clinicians, for more experienced ones, or for both groups?
After the open coding, axial coding was performed by re-analyzing these open codes to look for similarities, differences and relationships among these responses. The analyses were then sent back to the experts for member-checking before the final version of their detailed opinions was tabulated (see Additional file 1). Appropriate modifications based on the participants’ suggestions and comments were made accordingly.
From the focus-driven search, six articles were selected [4, 9, 19–22]. Six categories of common cognitive errors contributing to diagnostic errors were identified [9, 22]. Although not exhaustive, this classification addresses the important ones in the clinical setting. They are: (1) errors due to over-attachment to a particular diagnosis (examples of cognitive biases in this class include anchoring and confirmation bias); (2) errors due to failure to consider alternative diagnoses (one example is search satisficing); (3) errors due to inheriting someone else’s thinking (for example, diagnostic momentum and framing effect); (4) errors in prevalence perception or estimation (for example, availability bias, gambler’s fallacy and posterior probability error); (5) errors involving patient characteristics or presentation context (for example, fundamental attribution error and gender bias); and (6) errors associated with the doctor’s affect or personality (for example, visceral bias and sunk cost fallacy). Previously published checklists aimed at reducing diagnostic errors have all been in a non-mnemonic format [4, 19–21].
A preliminary version of the checklist was developed after discussion with the content expert (PC). This checklist is divided into four quadrants, each posing an activation question represented by a letter. The letter ‘T’ stands for ‘Threat’ (i.e., ‘is there any life or limb threat that I need to rule out in this patient?’), ‘W’ for ‘Wrong/What else’ (i.e., ‘What if I am wrong? What else could it be?’), ‘E’ for ‘Evidences’ (i.e., ‘Do I have sufficient evidences to support or exclude this diagnosis?’), and ‘D’ for ‘Dispositional factors’ (i.e., ‘is there any dispositional factor that influences my decision?’). The ‘D’ quadrant is further divided into two groups of factors, represented by two ‘E’s: the ‘Environmental’ factors and the ‘Emotional’ factors, which can come from the doctor or the patient. Hence the checklist was named the ‘TWED checklist’. The TWED checklist and the corresponding groups of cognitive errors addressed by each quadrant are given in Table 2.
In the judgment stage, the results for the content validity are given in Tables 3 and 4. Generally, most of these categories of cognitive errors were rated highly in terms of their relevance and representativeness (with modified kappa values ranging from 0.65 to 1.0), except for the relevance of two categories: ‘cognitive errors due to erroneous estimation or perception of prevalence’ under the quadrant “E = Evidences” (with a modified kappa value of 0.41) and ‘cognitive errors associated with patient characteristics (‘emotive’ influence of the patient)’ under the quadrant “D = Dispositional factors”. These two categories were rated as “fair” and “good” respectively, although they were rated as “excellent” in terms of how well they are represented in their respective quadrants.
Based on the codings, five main findings regarding the face validity and applicability of the TWED checklist were identified. Saturation was reached after the analysis of 5–6 of the transcripts. First, the TWED checklist comprehensively, although not exhaustively, covers the major facets of cognitive errors. Specifically, the quadrant ‘T’ is appropriately placed first, since it represents the first priority of checking against possibly fatal disease processes. The quadrants ‘W’ and ‘E’, although distinctly different, are interrelated: reflecting on ‘E’ may stimulate the consideration of other possibilities, so a clinician may need to go back to ‘W’. Finally, the quadrant ‘D’ serves as an internal double check, prompting a clinician to ask: “Is there any other reason I need to slow down?” The quadrant ‘D’ also helps the clinician to acknowledge any internal or external pressures that may affect the quality of the decision made. However, as the cognitive errors represented in quadrant ‘D’ may not be easily recalled and can be confusing for practicing clinicians, since this quadrant involves the additional step of running through the two ‘E’s, a greater amount of time and effort may be needed to explain it to a novice clinician (Additional file 1).
Second, the TWED checklist should be used only after a working diagnosis has been established. As aptly stated by one of the content experts (JR): “[The TWED checklist] really should be (applied) after some form of an initial diagnostic impression is made intuitively. Otherwise it is unlikely to be an efficient process.” Nonetheless, whilst the domains “W-E-D” should be used right before the closing of the decision making process, the domain “T” should be used at the very outset of the process of generating working diagnoses.
Third, the TWED checklist should be used on every case until it becomes habitual. It can easily be memorized and used regularly for rapid screening to determine whether a more robust cognitive exercise is needed. Furthermore, the subtlety of cognitive errors is that these errors often take place unconsciously. As a result, clinicians may not be aware that they are at risk of committing cognitive errors and should therefore use the checklist, or its mental equivalent, for every case. In fact, as pointed out by another expert (MG), “The cases where the doctor is most sure of are in fact, the cases where the checklist would be most helpful. This is because, when the clinician is puzzled by a case, the clinician would automatically be applying Type II process (analytical thinking) and would be thinking more broadly.”
Fourth, the TWED checklist does not lead to oversimplification. As it should be used only after the generation of an initial impression, it does not actually interfere with the cognitive process itself. Rather, the TWED checklist reinforces the clinical decisions already made. Furthermore, it is expected that clinicians would not be using the TWED checklist alone. Like most mnemonics, it is more of an adjunct to the normal clinical reasoning process. And even if it does lead to oversimplification, as one expert (RT) puts it: “Having this simple application for a complex task is better than omitting the task entirely”.
Finally, the TWED checklist is useful for novice clinicians. Its usefulness was agreed upon by all the experts who participated in this evaluation process. Three experts even believe that the TWED checklist has some use among experienced clinicians as well, because what the TWED checklist essentially strives to achieve is the development of good habits. Nonetheless, novice and experienced clinicians may use the checklist differently. Novice clinicians who regularly use the TWED checklist will generally be able to integrate its contents into their clinical habits as they mature into more experienced clinicians, whereas experienced physicians will use it as a double-check mechanism for regulating their clinical decisions.
In contrast to the previous checklists identified from the literature review, the TWED checklist is structured in a mnemonic format, facilitating its portability and, potentially, its use in the clinical setting. The advantage of a mnemonic format is that it aids prospective memory by transforming the technical terms of common cognitive biases into four memorable questions (represented by the four quadrants) that enhance activation of implementation intentions.
The brevity of the TWED checklist is an advantage, as it helps the clinician to focus only on the most pertinent activating questions. This focus gives the clinician the flexibility to exercise his or her own judgment and to continue using any other strategies he or she already employs to reduce cognitive errors. A checklist that is too long, with too many additions, may become redundant and useless [16, 24]. More importantly, the brevity of the TWED checklist makes it “portable”: it can be “carried along” by the clinician through multiple patient encounters and applied quickly and repeatedly to every clinical case. This repeated practice nurtures the habit of checking for cognitive errors into an automatized routine, by which point the clinician would probably no longer need to rely on the TWED checklist.
For content validation, although most of the categories of cognitive errors represented in the TWED checklist were rated as “excellent” in terms of their representativeness and relevance, the categories of “cognitive errors due to erroneous estimation or perception of prevalence” and “cognitive errors associated with patient characteristics” were rated slightly lower in terms of their relevance in clinical settings. This may be because, compared to novice clinicians, more senior clinicians are more familiar with the prevalence of common disease processes in their patient populations and are more objective in their clinical evaluations.
For face validity and applicability, the experts believe that the checklist should be applied to every case and practiced repeatedly until it is internalized in memory. The majority of the experts believe that the TWED checklist does not lead to oversimplification; in fact, the very complexity of medical decision making heightens the need for a simple and brief mnemonic tool like this.
Ideally, the additional diagnoses generated from the checklist should then be subjected to Bayesian analysis to gauge the probability of each of these diagnoses, as not all of them should be given the same weight.
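The Bayesian weighting suggested here can be illustrated with the standard odds form of Bayes' rule, in which a pre-test probability is updated by a likelihood ratio. This is a hypothetical sketch for illustration only; the function name and the example numbers are not from the study.

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical example: an alternative diagnosis raised by the 'W'
# quadrant with an estimated pre-test probability of 20 %, followed by
# a confirmatory test with a positive likelihood ratio of 10.
p = post_test_probability(0.20, 10)
print(round(p, 2))  # 0.71
```

A diagnosis with a post-test probability this high would clearly deserve more weight than one whose probability remains low after the same update.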
Several limitations of this study deserve mention. First, the construction of the four quadrants of the TWED checklist was based on a literature review and discussion with only one expert (PC). Construct validity was not quantitatively determined. The lack of a construct validation process, as well as the reliance on the opinions of a single expert, may have introduced personal biases. Second, the content validity of the checklist was determined by senior emergency physicians, and their responses may not truly reflect the sentiments of junior clinicians. Third, the coding of the email responses was performed by a single researcher (CH), which again could have introduced personal biases.
The next step in the TWED checklist development would be to implement it in clinical settings and to study its effects on the prevention of cognitive errors in simulated followed by actual clinical practice. A practical suggestion is to display the TWED checklist in the patient clerking sheet in order to reinforce its use until it becomes a part of the clinician’s cognitive process.
The TWED checklist is a brief, portable and focused mnemonic checklist, arranged in order of priority, with the aim of activating implementation intentions for checking cognitive errors in clinical settings. While its mnemonic structure eases prospective memory and its brevity makes it portable enough for quick application in every case, its flexibility allows clinicians to combine its use with other forms of cognitive intervention they are already using.
National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: The National Academies Press; 2015. http://nas.edu/improvingdiagnosis. Accessed on 25 Nov 2015.
Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121(5 Suppl):S2–23.
Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493–9.
Graber ML. Educational strategies to reduce diagnostic error: can you teach this stuff? Adv Health Sci Educ Theory Pract. 2009;14(Suppl 1):63–9.
Coderre S, Mandin H, Harasym PH, et al. Diagnostic reasoning strategies and diagnostic success. Med Educ. 2003;37(8):695–703.
Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ. 2009;14(Suppl 1):37–49.
Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf. 2013;22(Suppl 2):28–32.
Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775–80.
Kassirer JP, Kopelman RI. Cognitive errors in diagnosis: instantiation, classification, and consequences. Am J Med. 1989;86(4):433–41.
Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84(8):1022–8.
Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med. 2002;9(11):1184–204.
Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535–57.
Gollwitzer PM. Implementation intentions: strong effects of simple plans. Am Psychol. 1999;54(7):493–503.
Gollwitzer PM, Sheeran P. Implementation intentions and goal achievement: a meta-analysis of effects and processes. Adv Exp Soc Psychol. 2006;38:69–119.
McDaniel MA, Einstein GO, Graham T, et al. Delaying execution of intentions: overcoming the costs of interruptions. Appl Cognit Psychol. 2004;18(5):533–47.
Thomassen O, Espeland A, Softeland E, et al. Implementation of checklists in health care; learning from high-reliability organisations. Scand J Trauma Resusc Emerg Med. 2011;19:53.
Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35(6):382–5.
Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67.
Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42(5):468–75.
Graber ML, Sorensen AV, Biswas J, et al. Developing checklists to prevent diagnostic error in emergency room settings. Diagnosis. 2014;1:223.
Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86(3):307–13.
Campbell SG, Croskerry P, Bond WF. Profiles in patient safety: a “perfect storm” in the emergency department. Acad Emerg Med. 2007;14(8):743–9.
Gibson HA. Using mnemonics to increase knowledge of an organizing curriculum framework. Teach Learn Nurs. 2009;4(2):56–62.
Winters BD, Gurses AP, Lehmann H, Sexton JB, Rampersad CJ, Pronovost PJ. Clinical review: checklists—translating evidence into practice. Crit Care. 2009;13(6):210.
Hall S, Phang SH, Schaefer JP, Ghali W, Wright B, McLaughlin K. Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics? Adv Health Sci Educ Theory Pract. 2014;19(3):393–402.
KSC was responsible for the acquisition of both the quantitative and qualitative data as well as the initial drafting of the manuscript. All authors were responsible for the conception of the project, analysis of the data, and revisions of the manuscript, as well as contributing to its intellectual content. All authors are accountable for all aspects of the work in relation to its accuracy and integrity. All authors read and approved the final manuscript.
The authors would like to acknowledge the following experts for their inputs: Dr. James B. Reilly (JR), Dr. Patrick Croskerry (PC), Dr. Robert L. Trowbridge (RT), Dr. John W. Ely (JE), Dr. Mark L. Graber (MG), Dr. Matthew Sibbald (MS) and Dr. Sílvia Mamede (SM).
The authors declare that they have no competing interests.
Availability of data and materials
Consent for participation
All participants consented to anonymous, voluntary participation in this study and to any subsequent publications based on its results.
Consent for publication
In the content validity analysis, the data obtained from the experts who participated voluntarily were obtained anonymously. Other than their age, gender and years of clinical experience, no other personal details of these experts were recorded in this manuscript. In the face validity analysis, the details of the email responses from the experts are included in the supplemental data. Ethical clearance was obtained from the institutional research ethics committee of Universiti Sains Malaysia.