This study provides pilot evidence of the reliability of an observational rating tool to assess interprofessional team interactions about ABCDE in one ICU.
We demonstrated the highest inter-rater reliability for the items assessing which components of the bundle were addressed during team interactions, specifically the 'Breathing', 'Coordination', 'Delirium', and 'Early mobility' components (Table 1). Reliability for the item rating whether 'Awakening' was addressed (κ = −0.07), however, was much lower. There are at least two possible explanations for this finding. First, during morning rounds in a large and busy ICU, raters may not have uniformly heard the interaction about 'Awakening' when transitioning from one patient to another. Second, 'Awakening' may have been addressed infrequently or not at all; when a behavior is rarely observed, chance agreement is high and kappa can be low or even negative despite high raw agreement, which would potentially explain the lower reliability of this component of the observational rating tool.
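As a brief note on interpretation, Cohen's kappa compares observed agreement with the agreement expected by chance; the numbers in the worked example below are hypothetical and are not drawn from our data:

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) is the proportion of agreement expected by chance given the raters' marginal distributions. A negative kappa arises whenever \(p_o < p_e\): for instance, if two raters agreed on 50% of observations (\(p_o = 0.50\)) while chance agreement was 53% (\(p_e = 0.53\)), then \(\kappa = (0.50 - 0.53)/(1 - 0.53) \approx -0.06\), similar in magnitude to the κ = −0.07 we observed for 'Awakening'.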
Low inter-rater reliability for one component of the bundle also highlights the complexity of assessing team interactions in the clinical setting. Two raters were present and observed the same team interactions, but these interactions occur in a fluid, dynamic context. Rounds in the USA can be large, with representation from at least four professions (a physician, a nurse, a pharmacist, and a respiratory therapist [10]), in addition to any trainees in an academic setting. Because the ABCDE bundle is not yet a routinized part of care [6] and 'Awakening' may be the first component discussed, we would expect variability in ratings of team interactions about 'Awakening' in a large ICU with multiple clinicians present during morning rounds. Further, not all team members may actively engage or participate in interactions about ABCDE even when present, which could influence measurement. Recent data suggest that some clinicians do not actively engage in morning rounds despite having pertinent knowledge [11]. Lack of engagement and participation by ICU teams may also adversely affect quality improvement projects; indeed, limited clinician engagement is frequently cited as a barrier to ABCDE delivery [7].
We demonstrated slight to fair reliability for the items about initiation of interactions about ABCDE but were unable to assess inter-rater reliability for participation in these interactions. We suspect that interprofessional team member participation was difficult to classify reliably because of the tool's free-form documentation format: each rater described participating team members in his or her own words, and these descriptions may not have been equivalent. Further, the tool offered no option to indicate that no clinician beyond the initiator participated in the interaction, which may have contributed to the small number of ratings in the participation domain and thus our inability to evaluate its reliability. Given these findings and the complexity of documenting the number and type of participating team members, we intend to modify the tool for future work to replace free text with defined options (e.g. check boxes for each clinician type).
Although ours is one of the first studies to develop an observational rating tool to assess team interactions about ABCDE, it has limitations. First, this study was conducted in one ICU in a large academic medical center in the Midwest, and our results may not be generalizable. Second, we present pilot data from the development of an observational rating tool and are limited by a small sample size and potential lack of power; as such, we focus in this article on psychometric properties and tool development. Lastly, our operational definitions of the individual bundle components, although informed by a review of the literature and an enrollment tool from a prospective ABCDE trial, may require further clarification, and we acknowledge this limitation.
In conclusion, we found moderate to substantial reliability of an observational rating tool to assess team interactions about ABCDE in one medical ICU, and slight to fair reliability when assessing which team members initiated interactions about ABCDE; we were unable to assess team member participation in these interactions. Future work should focus on further testing of this tool to understand how information about team interactions could be leveraged to improve delivery of a complex care bundle like ABCDE.