- Research note
- Open access
Physician experience with speech recognition software in psychiatry: usage and perspective
BMC Research Notes volume 11, Article number: 690 (2018)
Abstract
Objective
The purpose of this paper is to extend a previous study by evaluating the use of speech recognition software in a clinical psychiatric setting. Physicians (n = 55) at a psychiatric hospital participated in a limited implementation and were provided with training, licenses, and relevant devices. Post-implementation usage data were collected via the software. Additionally, a post-implementation survey was distributed 5 months after the technology was introduced.
Results
In the first month, 45 out of 51 (88%) physicians were active users of the technology; however, after the full evaluation period only 53% were still active. The average active user minutes and the average active user lines dictated per month remained consistent throughout the evaluation. The use of speech recognition software within a psychiatric setting is of value to some physicians. Our results indicate a post-implementation reduction in adoption, with stable usage for physicians who remained active users. Future studies to identify characteristics of users and/or technology that contribute to ongoing use would be of value.
Introduction
For a number of years, physicians have used speech recognition software (SRS) to support clinical documentation [1,2,3,4]. The software allows physicians to dictate clinical notes using SRS to convert voice into electronic text, with editing in real time. Available findings suggest a range of outcomes associated with SRS use. Specifically, reduced report turnaround time has been found [5,6,7,8]. Cost-effectiveness of SRS over traditional transcription has also been noted [9]. Fewer interruptions of emergency room physicians occurred with SRS when compared to written data entry [10].
However, not all findings from SRS implementations have been positive. Some studies suggest that usability and productivity decrease with the use of SRS [11,12,13]. Similarly, the learning curve has been a challenge for physicians [3]. In addition, errors that arise during conversion to text [13] could potentially lead to clinical misinterpretation; quality control and feedback to users may reduce such errors over time [4, 14].
A limited number of publications on psychiatric SRS exist despite the large volume of narrative text in mental health and addictions documentation. To date, there are two published investigations of SRS in psychiatry. One report's findings were mixed: no clear benefits were found with respect to time savings, quality of care, quality of documentation, or impact on workflow. A limitation of this study was its small sample (n = 12) [15]. While a second study was conducted in a psychiatric setting, it did not examine physician use, as it was directed at administrative assistants and transcriptionists [16]. Thus, our objective was to further evaluate SRS in a psychiatric setting by describing psychiatrist usage and perceptions.
Main text
The SRS evaluation was conducted using a descriptive design at the Centre for Addiction and Mental Health (CAMH) in Toronto, Canada, between November 2016 and May 2017. CAMH is Canada's largest academic mental health and addictions hospital. CAMH achieved stage 7 on the Healthcare Information and Management Systems Society (HIMSS) Electronic Medical Record Adoption Model in 2017 [17]. The SRS evaluated in this paper facilitates documentation by physicians within the CAMH electronic medical record (EMR).
Specifically, the SRS used in this evaluation is Dragon Medical© Network Edition 360 version 12.51.215.103 (Dragon) by Nuance. The software deployed at CAMH requires dictating into a handheld microphone or headset that is tethered to a desktop or laptop.
In October 2016, all physicians at CAMH were offered the opportunity to participate in a limited SRS implementation. Fifty-five (n = 55) physicians indicated their interest and received a Dragon license and either a Nuance PowerMic II© or a headset microphone. Two hours of training on Dragon was provided to physicians, with additional training available as needed.
Five months after the SRS implementation, physicians received a post-implementation survey covering: (1) the number of patients each physician sees per week, (2) self-reported comfort with SRS technology, (3) acceptability of the level of SRS accuracy, and (4) the length of time required to complete documentation when using the SRS. Data generated by the SRS were also collected, including: (1) the number of active users, (2) average active physician user minutes, and (3) average active physician user lines. Additionally, the number of physicians who attended the additional training and the number of licenses provided were recorded.
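As an illustration only, the following Python sketch shows how the three usage measures above could be derived from a per-session export of the SRS logs. The file name and column names (physician_id, month, minutes, lines) are assumptions made for the example, not Nuance's actual export schema.

```python
import pandas as pd

# Hypothetical per-session export: one row per dictation session with
# columns physician_id, month (e.g. "2016-12"), minutes, and lines.
# These names are assumed for illustration, not Nuance's schema.
sessions = pd.read_csv("srs_sessions.csv")

monthly = sessions.groupby("month").agg(
    active_users=("physician_id", "nunique"),  # (1) number of active users
    total_minutes=("minutes", "sum"),
    total_lines=("lines", "sum"),
)

# (2) and (3): monthly averages per active user, as plotted in Figs. 2 and 3
monthly["avg_minutes_per_active_user"] = (
    monthly["total_minutes"] / monthly["active_users"]
)
monthly["avg_lines_per_active_user"] = (
    monthly["total_lines"] / monthly["active_users"]
)

print(monthly)
```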
The CAMH Research Ethics Board (REB) waived review because the evaluation used unlinked anonymous data, which is exempt from ethical approval under article 2.4 of the Government of Canada Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans [18]. Review was likewise waived by the CAMH Quality Projects Ethics Review (QPER) Chair.
A total of 55 SRS licenses were provided to CAMH physicians. Results of the post-implementation survey and SRS usage data are discussed below.
Fifty-three of the 55 physicians who indicated an interest in using SRS attended training, and 51 activated their SRS. Fourteen physicians attended additional optimization training by January 2017. Fifty-four of the original 55 physicians were asked to participate in a post-implementation survey, as one physician had left CAMH. The post-implementation survey response rate was 38%. Respondents reported an average volume of 26.6 patients per week (range 8–80). Most reported being very comfortable (n = 12, 60%) or somewhat comfortable (n = 5, 25%) using technology; one physician reported feeling neutral, and two (10%) felt somewhat uncomfortable. A majority (16/21, 76%) of physicians either somewhat agreed (8/21, 38%) or strongly agreed (8/21, 38%) that SRS reduced their time spent documenting clinical care. A majority (15/21, 71%) also either somewhat agreed (9/21, 43%) or strongly agreed (6/21, 28%) that SRS accuracy in transcribing speech was subjectively acceptable.
Data from the SRS provided the number of active users, the monthly average number of active user minutes of dictation, and the monthly average number of active user dictated lines. Figure 1 depicts total active (blue) and inactive (orange) users per month over the course of the 5-month evaluation. Most physicians (n = 45, 88%) were active in the first month. Five months after implementation, 27 physicians (53%) were still active.
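For transparency, the adoption and retention percentages reported here and in the abstract follow directly from the counts above; a minimal arithmetic check:

```python
# Retention arithmetic behind Fig. 1, using the counts reported in the text.
activated = 51        # physicians who activated their SRS license
active_month_1 = 45
active_month_5 = 27

print(f"Month 1 adoption:  {100 * active_month_1 / activated:.0f}%")  # 88%
print(f"Month 5 retention: {100 * active_month_5 / activated:.0f}%")  # 53%

dropped = activated - active_month_5
print(f"Drop over evaluation: {dropped}/{activated} = "
      f"{100 * dropped / activated:.0f}%")                            # 47%
```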
Figures 2 and 3 show the time spent and lines generated by active users of the SRS. Specifically, Fig. 2 shows that the average number of active user minutes of dictation per month fluctuated over the evaluation period, with a 3.4-min increase (4%) in average active user minutes from the first to the last of the 5 months.
The average number of active user lines is shown in Fig. 3: similar to average dictated minutes, the average number of active user lines remained relatively stable, with an increase of just over 115 lines per month (14%) over the 5 months.
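The percentage figures in the two preceding paragraphs are ordinary percent changes from the first to the last month. A minimal sketch, assuming a baseline of roughly 85 dictated minutes per month (implied by, but not stated alongside, the reported 3.4-min and 4% figures):

```python
def percent_change(start: float, end: float) -> float:
    """Percent change from the first to the last month of the evaluation."""
    return 100 * (end - start) / start

# A baseline of ~85 min/month is implied by the reported figures, not stated.
baseline_minutes = 85.0
print(f"{percent_change(baseline_minutes, baseline_minutes + 3.4):.0f}%")  # 4%
```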
Limitations
Following initial physician enthusiasm, the number of active users dropped by 47% (24/51) over the 5-month evaluation. This finding is congruent with the 'trough of disillusionment' phase of the Gartner Hype Cycle, which follows a technology implementation [19]. One limitation to address in future work is the observation window: monitoring the number of active and inactive users over a longer period may provide insight into whether the remaining stages of the Gartner Hype Cycle occur; at the time of submission, the number of active licenses was stable at sixty. Future efforts would also be more informative with standardized assessments of satisfaction, usability, and documentation quality.
Another factor contributing to the decline in active users over time may have been the voluntary nature of the SRS and the availability of other methods for documenting clinical notes. Physicians were not dependent on SRS: they may have opted out because CAMH physicians have been typing clinical notes for 9 years and are generally comfortable with keyboard use. The availability of organizational transcription services is less likely to have been a factor, as CAMH transcription services are restricted to only two document types. If the benefits physicians experienced with SRS were not dramatically better than those of other documentation methods, they may not have wanted to invest the time and effort to incorporate SRS into their practice. In addition, some physicians may never have felt comfortable using the technology and therefore discontinued use, a reason for discontinuation identified in the literature [3].
The results of this evaluation also suggested that the average number of active user minutes and the average number of active user lines remained stable or increased slightly over time, with the exception of 1 month (January 2017), when a large outpatient service at CAMH increased the amount of dictation completed using SRS to catch up on a backlog of new referrals. Although there was variation, the absence of decline in active user minutes and average active user lines suggests that active physician users had consistent monthly usage. It may be that patient volumes and the types of visits that lend themselves to SRS use remain relatively constant. These results differ from those of a study that reviewed the length of physician notes using SRS over time and found that notes became shorter [3].
Results of the post-implementation survey indicate that most physicians reported spending less time documenting. Findings in the literature on time savings with SRS are mixed [11, 13, 15]. It may be that CAMH physicians who were active users of the SRS were the main completers of the post-implementation survey, while physicians less interested in SRS may have been less likely to respond. Other limitations of this report include a lack of objective measures of satisfaction, usability, document quality, productivity, and accuracy (error rates).
Finally, the results of this study add to the small body of literature on the use of SRS in a psychiatric setting. Similar to our earlier study of physician use of SRS in psychiatry, the results are mixed [15]. This may in part reflect differences in design: the initial study used a smaller sample and performed statistical comparisons, whereas the current, larger study was descriptive and, less optimally, conducted no formal statistical hypothesis testing. To summarize, SRS technology may be of value to physicians in the psychiatry context. This notion is further supported by the stable number of physicians with active SRS licenses at the time of submission: since the evaluation was completed, the number of active licenses has held at sixty. However, SRS does not appear to have universal acceptance among this unique group of physicians.
Two general observations were made by the CAMH SRS team. First, it was important to maintain regular communication with physician users to identify technical or education problems and address physicians' SRS difficulties in a timely manner. Second, it takes time to learn to use the SRS effectively and incorporate it into physician workflow. Physicians who spent time refining their use of the technology tended to continue with SRS. However, the value proposition of SRS varies across users: some physicians gain substantial efficiency, for example those who have physical challenges with typing, type slowly, or are early technology adopters. Since 2014, well before the SRS implementation, most physician document types at CAMH have been entered by keyboard, so many physicians were already comfortable with keyboard entry and gained less efficiency from SRS.
This evaluation demonstrated that SRS technology may be useful to some physicians in psychiatric settings; however, the technology is not a 'one size fits all' solution. Supporting physicians with post-implementation training and regular communication may help to identify challenges that influence use. Future efforts should use formal assessment tools and measures, and reviewing usage data over an extended period would help to identify whether the Gartner Hype Cycle applies to SRS.
Abbreviations
- CAMH: Centre for Addiction and Mental Health
- EMR: Electronic Medical Record
- HIMSS: Healthcare Information and Management Systems Society
- QPER: Quality Projects Ethics Review
- REB: Research Ethics Board
- SRS: speech recognition software
References
Hart JL, McBride A, Blunt D, Gishen P, Strickland N. Immediate and sustained benefits of a "total" implementation of speech recognition reporting. Br J Radiol. 2010;83:424–7. https://doi.org/10.1259/bjr/58137761.
Akhtar W, Ali A, Mirza K. Impact of a voice recognition system on radiology report turnaround time: experience from a non-english-speaking South Asian country. AJR Am J Roentgenol. 2011;196:W485. https://doi.org/10.2214/AJR.10.5426.
Kauppinen TA, Kaipio J, Koivikko MP. Learning curve of speech recognition. J Digit Imaging. 2013;26:1020–4. https://doi.org/10.1007/s10278-013-9614-7.
Motyer RE, Liddy S, Torreggiani WC, et al. Frequency and analysis of non-clinical errors made in radiology reports using the national integrated medical imaging system voice recognition dictation software. Ir J Med Sci. 2016;185:921–7. https://doi.org/10.1007/s11845-016-1507-6.
Kang HP, Sirintrapun SJ, Nestler RJ, Parwani AV. Experience with voice recognition in surgical pathology at a large academic multi-institutional center. Am J Clin Pathol. 2010;133:156–9. https://doi.org/10.1309/AJCPOI5F1LPSLZKP.
Krishnaraj A, Lee JKT, Laws SA, Crawford TJ. Voice recognition software: effect on radiology report turnaround time at an academic medical center. Am J Roentgenol. 2010;195:194–5. https://doi.org/10.2214/AJR.09.3169.
Strahan RH, Schneider-Kolsky ME. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting. J Med Imaging Radiat Oncol. 2010;54:411–4. https://doi.org/10.1111/j.1754-9485.2010.02193.x.
Khorasani R. Can health IT tools enable improved documentation of quality, safety measures, and regulatory requirements in radiology reports? J Am Coll Rad. 2013;10:381–2. https://doi.org/10.1016/j.jacr.2013.02.003.
Johnson M, Lapkin S, Long V, et al. A systematic review of speech recognition technology in health care. BMC Med Inform Decis Mak. 2014;14:1–14. https://doi.org/10.1186/1472-6947-14-94.
Dela Cruz JE, Shabosky JC, Albrecht M, et al. Typed versus voice recognition for data entry in electronic health records: emergency physician time use and interruptions. West J Emerg Med. 2014;15:541–7. https://doi.org/10.5811/westjem.2014.3.19658.
Reiner BI. Expanding the functionality of speech recognition in radiology: creating a real-time methodology for measurement and analysis of occupational stress and fatigue. J Digit Imaging. 2013;26:5–9. https://doi.org/10.1007/s10278-012-9540-0.
Hodgson T, Magrabi F, Coiera E. Evaluating the usability of speech recognition to create clinical documentation using a commercial electronic health record. Int J Med Inform. 2018;113:38–42.
Hodgson T, Magrabi F, Coiera E. Evaluating the efficiency and safety of speech recognition within a commercial electronic health record system: a replication study. Appl Clin Inform. 2018;9:326–35. https://doi.org/10.1055/s-0038-1649509.
Ringler MD, Goss BC, Bartholmai BJ. Syntactic and semantic errors in radiology reports associated with speech recognition software. Stud Health Technol Inform. 2015;216:922.
Derman YD, Arenovich T, Strauss J. Speech recognition software and electronic psychiatric progress notes: physicians' ratings and preferences. BMC Med Inform Decis Mak. 2010;10:44. https://doi.org/10.1186/1472-6947-10-44.
Mohr DN, Turner DW, Pond GR, et al. Speech recognition as a transcription aid: a randomized comparison with standard transcription. J Am Med Inform Assoc. 2003;10:85–93. https://doi.org/10.1197/jamia.M1130.
CAMH achieves HIMSS stage 7, the highest level. http://www.canhealth.com/blog/camh-achieves-himss-stage-7-the-highest-level/. Accessed 12 June 2018.
Government of Canada. TCPS 2 (2014)—the latest edition of Tri-council policy statement: ethical conduct for research involving humans. 2017. http://www.pre.ethics.gc.ca/eng/policy-politique/initiatives/tcps2-eptc2/Default/. Accessed 20 Dec 2017.
Gartner. Gartner Hype Cycle. 2017. http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp. Accessed 29 Oct 2017.
Authors’ contributions
JF, IB and JS conceptualized the evaluation, in consultation with GS. JF and IB carried out data collection. JF, IB, SB, GS and JS were involved in data analysis and interpretation. SB led the literature review. GS and SB co-wrote the manuscript. All authors provided feedback. All authors read and approved the final manuscript.
Acknowledgements
The authors would like to thank Daniela Mucuceanu and Damian Jankowicz for their executive support of this project.
Competing interests
The authors declare that they have no competing interests.
Availability of data and materials
The datasets used during the current study are available from the corresponding author on reasonable request.
Consent to publish
Not applicable.
Ethics approval and consent to participate
The CAMH Research Ethics Board (REB) waived review because the evaluation used unlinked anonymous data, which is exempt from ethical approval under article 2.4 of the Government of Canada Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans [18]. Review was likewise waived by the CAMH Quality Projects Ethics Review (QPER) Chair.
Funding
Not applicable.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
Fernandes, J., Brunton, I., Strudwick, G. et al. Physician experience with speech recognition software in psychiatry: usage and perspective. BMC Res Notes 11, 690 (2018). https://doi.org/10.1186/s13104-018-3790-y