Analysis of errors in dictated clinical documents assisted by speech recognition software and professional transcriptionists.
Clinical documentation is an essential part of patient care. However, in the electronic health record era, documentation is widely perceived as inefficient and a significant driver of physician burnout. Speech recognition software, which transcribes clinicians' dictated speech directly into text, is increasingly used to streamline the documentation workflow. This study examined the accuracy of speech recognition software in a sample of notes (progress notes, operative notes, and discharge summaries) dictated by 144 clinicians across multiple disciplines at two health systems. Transcripts produced by the speech recognition software contained 7.4 errors per 100 transcribed words, many of them potentially clinically significant. Although review by a professional medical transcriptionist corrected most of these errors, about 1 in 300 words (roughly 0.3%) remained incorrect even in the final physician-signed note. This study corroborates prior research that found potentially significant error rates in software-transcribed emergency medicine and radiology notes. A related WebM&M commentary discussed an adverse event caused by a transcription error in a radiology report.