Are We Getting Better at Measuring Patient Safety?

Amy K. Rosen, PhD | November 1, 2010 

Rosen AK. Are We Getting Better at Measuring Patient Safety? PSNet [internet]. Rockville (MD): Agency for Healthcare Research and Quality, US Department of Health and Human Services. 2010.


Perspective

The past decade has witnessed unprecedented interest in patient safety. The Institute of Medicine (IOM) report, "To Err Is Human: Building a Safer Health System," catalyzed national efforts to reduce medical errors and improve patient safety (i.e., "freedom from accidental injury due to medical care or medical errors").(1) Given increasing awareness that medical errors are relatively common and a leading cause of morbidity and mortality (2), much progress has been made in developing voluntary and mandatory patient safety reporting systems, promoting automated surveillance of clinical data, undertaking root cause analyses, and establishing standardized reporting of select serious reportable events.(3,4)

Despite these achievements, no systematic national reporting system of patient safety events currently exists. Thus, precise estimates of the number of patients who are injured as a result of unsafe medical care are not yet available.(5) The health care sector has focused much of its effort on retrospective reporting of events and on errors of commission and omission.(2) Without accurate, real-time measurement and tracking of events, our ability to comprehensively assess patient safety on a national basis or institute meaningful improvements in care remains hampered. In short, although we are getting better, we are still at a relatively undeveloped state of patient safety measurement.

Progress in measurement of patient safety has been limited by many factors, including differences in definitions and concepts of adverse events, lack of an existing taxonomy and meaningful framework for patient safety, infrequent events, inconsistencies in data elements across health care systems, and lack of scientifically sound measures.(2,3,6-8) In this article, I will review the present state of patient safety measurement, reflecting on how these challenges are being addressed.

Common Methods of Measuring Patient Safety

The most common methods used for measuring patient safety highlight some of the reasons why progress in safety measurement has moved slowly. Retrospective medical chart review remains the "gold standard" for identifying adverse events. Although medical records contain detailed clinical information on patients, and often document the safety event(s) and the surrounding circumstances, using them to systematically detect and measure safety events is not practical. Medical record review, particularly when the records are paper-based rather than electronic, is costly, labor-intensive, and typically involves one or more clinicians.(2,9) The quality of medical records is highly variable, and important clinical information related to the safety event and/or the patient's clinical history might be missing. Transfers of patients between systems, lack of staff training or experience in documenting patients' charts, or systems' failures in retrieving complete information from patients' visits, ancillary services, or other data sources might all contribute to this variability.(2) After data are abstracted, they need to be transformed into a research database, generating additional costs and labor.

Voluntary and mandatory incident reporting systems are increasingly being used at the private-sector, state, and federal levels as part of internal safety improvement programs.(2) These systems gather retrospective information on safety events, primarily relying on self-reports by providers. Techniques such as root cause analysis are then used to understand the cause and contributing factors associated with the event.(6) However, the usefulness of incident reporting systems for measurement of safety events on a national basis is limited.(2) Incident reports are not generated automatically unless an information technology infrastructure is present, making data collection cumbersome and costly. Because there are no uniform standards for reporting, systems differ in the types of events reported and in the information collected, preventing accurate measurement of safety events and making aggregation and comparison of data across systems untenable. Further, event reporting is variable, depending on whether a provider chooses to use the reporting system. Thus, these reporting systems capture only a small (and biased) fraction of the safety events that occur and fail to provide information on the true rate of a particular safety event for a given population.(8,10) This type of ad hoc reporting frequently leads to underestimation and underdetection of safety events, hindering the ability to improve care.

Automated surveillance is also becoming a standard method for safety event detection and measurement. An advantage of this approach is that surveillance can occur retrospectively or prospectively (2); moreover, data are generated without depending on the willingness of caregivers to self-report. Trigger tools are an example of this methodology: an automated rule flags ("triggers") cases for further review to determine whether or not a specific adverse event, such as an adverse drug event (ADE), occurred.(11-13) For example, a trigger for one type of ADE is "active prescription for potassium and serum potassium greater than 6.0 mEq/L." A triggered case would then be reviewed to determine whether an ADE actually occurred.
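To make the trigger logic concrete, the following is a minimal sketch (in Python) of how such a rule might be screened against patient records. The record fields, data layout, and threshold variable name are illustrative assumptions for this example, not part of any published trigger tool.

```python
# Minimal sketch of an automated ADE trigger (illustrative only).
# Field names and the screening rule are assumptions for this example;
# real trigger tools encode rules against an institution's own data model.

from typing import Dict, List

POTASSIUM_THRESHOLD_MEQ_L = 6.0  # serum potassium level above which the trigger fires


def potassium_trigger(record: Dict) -> bool:
    """Fire when a patient has an active potassium prescription
    AND a serum potassium above the threshold."""
    return (
        record.get("active_potassium_rx", False)
        and record.get("serum_potassium_meq_l", 0.0) > POTASSIUM_THRESHOLD_MEQ_L
    )


def screen_for_review(records: List[Dict]) -> List[Dict]:
    """Return the triggered cases; a clinician would then review each one
    to decide whether a true adverse drug event occurred."""
    return [r for r in records if potassium_trigger(r)]


if __name__ == "__main__":
    patients = [
        {"id": 1, "active_potassium_rx": True, "serum_potassium_meq_l": 6.4},
        {"id": 2, "active_potassium_rx": True, "serum_potassium_meq_l": 4.8},
        {"id": 3, "active_potassium_rx": False, "serum_potassium_meq_l": 6.2},
    ]
    flagged = screen_for_review(patients)
    print(f"{len(flagged)} of {len(patients)} cases triggered for clinician review")
```

In this sketch only the first patient is flagged; the trigger itself does not determine whether an ADE occurred, it only selects cases for human review.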

Because applying this methodology through manual review of the data would be too time consuming, it is best suited to organizations that capture large amounts of data electronically (10), limiting its applicability to select organizations. Another limitation, albeit well recognized and accepted, is that automated surveillance is likely to yield a high rate of false positive events (i.e., events that are not "true" adverse events), because the trigger algorithm serves only as a flag for identifying cases at high risk of adverse events. For example, an "emergency room" trigger fires automatically when a patient has an emergency room visit within 21 days of an outpatient surgery. Although this trigger was designed to identify patients at high risk of surgical adverse events, its broad logic detects many false positive cases, signaling the need for a second layer of review to identify true adverse events.(2) Finally, although the feasibility of using trigger tools to identify adverse events has been demonstrated (14), further empirical testing is needed to demonstrate their validity in detecting true safety events.(2)
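As a hedged illustration of this two-stage logic, the sketch below applies a deliberately broad date-window trigger and then a confirmatory review step. The field names, dates, and the reviewer-confirmation flag are hypothetical stand-ins for an institution's own data and chart-review process.

```python
# Sketch of a broad, date-window trigger plus a second layer of review
# (illustrative only; field names and the confirmation step are assumptions).

from datetime import date
from typing import Dict, List

ED_WINDOW_DAYS = 21  # an ED visit within this many days of outpatient surgery fires the trigger


def ed_visit_trigger(case: Dict) -> bool:
    """Broad first-pass rule: any ED visit within the window after surgery."""
    ed_visit = case.get("ed_visit_date")
    if ed_visit is None:
        return False
    return 0 <= (ed_visit - case["surgery_date"]).days <= ED_WINDOW_DAYS


def confirmed_by_review(case: Dict) -> bool:
    """Stand-in for the second (human) layer of review that decides whether
    the flagged case reflects a true surgical adverse event."""
    return case.get("reviewer_confirmed_adverse_event", False)


cases: List[Dict] = [
    {"id": "A", "surgery_date": date(2010, 5, 1), "ed_visit_date": date(2010, 5, 10),
     "reviewer_confirmed_adverse_event": True},   # true positive
    {"id": "B", "surgery_date": date(2010, 5, 1), "ed_visit_date": date(2010, 5, 15),
     "reviewer_confirmed_adverse_event": False},  # false positive (unrelated ED visit)
    {"id": "C", "surgery_date": date(2010, 5, 1), "ed_visit_date": None},
]

flagged = [c for c in cases if ed_visit_trigger(c)]
true_events = [c for c in flagged if confirmed_by_review(c)]
print(f"flagged by trigger: {len(flagged)}, confirmed adverse events: {len(true_events)}")
```

The broad rule flags two cases but only one survives confirmatory review, mirroring the false-positive problem described above.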

Using administrative or claims data is another common approach for detecting and tracking safety events. Administrative data–based measures take advantage of low-cost, readily available administrative data, making them attractive relative to other measures of safety, such as those obtained from labor-intensive chart review. They contain demographic and clinical characteristics of patients, such as length of stay and ICD-9-CM diagnosis and procedure codes, which can be used to detect and track safety events across large populations of patients over time.(15) One example of an administrative data–based measure is the Patient Safety Indicators (PSIs). Developed by the Agency for Healthcare Research and Quality (AHRQ), the PSIs were designed to capture potentially preventable events that compromise patient safety in the acute-care setting, such as complications after surgeries, procedures, or medical care.(15) Despite their "high marks" for feasibility and utility, these measures also have some limitations. Although studies to date suggest that selected PSIs have moderate to good positive predictive value (PPV; i.e., true positives/flagged cases) (16-18), the PSIs are regarded as "indicators" of potential safety-related events rather than as definitive measures because their criterion validity (i.e., their agreement with a standard such as medical record data) is still being examined.(19) Also, they lack the rich clinical details found in patients' medical charts. Data elements across health care systems are not consistent, and the accuracy and reliability of ICD-9-CM codes have been consistently questioned.(2,20) Improvements in coding (e.g., distinguishing between preexisting conditions at admission and complications that develop during hospitalization) will greatly enhance their utility for national safety measurement.
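To illustrate the PPV calculation referenced above, here is a brief sketch; the counts are hypothetical and the indicator is left unnamed, since the point is only the arithmetic (true positives divided by flagged cases).

```python
# Illustrative positive predictive value (PPV) calculation for an
# administrative-data indicator; the counts below are hypothetical.


def positive_predictive_value(true_positives: int, flagged_cases: int) -> float:
    """PPV = cases confirmed on chart review / cases flagged by the indicator."""
    if flagged_cases == 0:
        raise ValueError("No flagged cases; PPV is undefined.")
    return true_positives / flagged_cases


# Hypothetical example: an indicator flags 120 hospitalizations, and chart
# review confirms a true safety event in 90 of them.
flagged = 120
confirmed = 90
print(f"PPV = {positive_predictive_value(confirmed, flagged):.2f}")  # 0.75
```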

The Table lists these commonly used measurement strategies, along with their key advantages and disadvantages.

Conclusion

More than 10 years ago, the IOM report, "To Err Is Human," recommended that medical errors and patient safety events be reported in a systematic manner to improve detection and measurement.(1,21) As this brief review demonstrates, we have not yet achieved this goal. We lack an integrated approach to measuring patient safety; at present, we have only several complementary but separate methods of measuring safety. Given the increased use of quality and safety measures for hospital-level public reporting and pay-for-performance at the national and state levels (22-24), it is critical that accurate and robust patient safety measures be developed and used to collect standardized information. Safety measurement lags behind quality measurement, in which a relatively robust set of outcome and process measures exists (e.g., 30-day risk-adjusted mortality and readmission for select conditions). This lag stems in part from the difficulty of measuring generic "systems failures" compared with condition-specific processes or outcomes of care, as well as from the lack of a meaningful framework and taxonomy for safety measurement. Much work remains to be accomplished before we can target those areas that need the most improvement in safety practices.

Amy K. Rosen, PhD
Professor, Health Policy and Management
Boston University School of Public Health

References


 

1. Kohn L, Corrigan J, Donaldson M, eds. To Err is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine, National Academies Press; 2000. ISBN: 9780309068376.

2. Aspden P, Corrigan JM, Wolcott J, Erickson SM, eds; Committee on Data Standards for Patient Safety. Patient Safety: Achieving a New Standard for Care. Washington, DC: Institute of Medicine, National Academies Press; 2004. ISBN: 9780309090773.

3. Patient safety. In: 2009 National Healthcare Quality Report. Rockville, MD: Agency for Healthcare Research and Quality; March 2010.

4. Serious Reportable Events in Healthcare 2006 Update: A Consensus Report. Washington, DC: National Quality Forum; 2007. ISBN: 1933875089.

5. Jha AK, Prasopa-Plaizier N, Larizgoitia I, Bates DW; Research Priority Setting Working Group of the WHO World Alliance for Patient Safety. Patient safety research: an overview of the global evidence. Qual Saf Health Care. 2010;19:42-47.

6. Pronovost PJ, Goeschel CA, Marsteller JA, Sexton JB, Pham JC, Berenholtz SM. Framework for patient safety research and improvement. Circulation. 2009;119:330-337.

7. Leape L, Berwick D, Clancy C, et al; Lucian Leape Institute at the National Patient Safety Foundation. Transforming healthcare: a safety imperative. Qual Saf Health Care. 2009;18:424-428.

8. Chang A, Schyve PM, Croteau RJ, O'Leary DS, Loeb JM. The JCAHO patient safety event taxonomy: a standardized terminology and classification schema for near misses and adverse events. Int J Qual Health Care. 2005;17:95-105.

9. Verelst S, Jacques J, Van den Heede K, et al. Validation of Hospital Administrative Dataset for adverse event screening. Qual Saf Health Care. 2010;19:e25.

10. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008;17:178-181.

11. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients. JAMA. 1991;266:2847-2851.

12. Jha AK, Kuperman GJ, Teich JM, et al. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc. 1998;5:305-314.

13. Bates DW, Evans RS, Murff H, Stetson PD, Pizziferri L, Hripcsak G. Detecting adverse events using information technology. J Am Med Inform Assoc. 2003;10:115-128.

14. Honigman B, Light P, Pulling RM, Bates DW. A computerized method for identifying incidents associated with adverse drug events in outpatients. Int J Med Inform. 2001;61:21-32.

15. Romano PS, Geppert JJ, Davies S, Miller MR, Elixhauser A, McDonald KM. A national profile of patient safety in U.S. hospitals. Health Aff (Millwood). 2003;22:154-166.

16. Sadeghi B, Barron R, Zrelak P, et al. Cases of iatrogenic pneumothorax can be identified from ICD-9-CM coded data. Am J Med Qual. 2010;25:218-224.

17. White RH, Sadeghi B, Tancredi D, et al. How valid is the ICD-9-CM based AHRQ Patient Safety Indicator for postoperative venous thromboembolism? Med Care. 2009;47:1237-1243.

18. Utter GH, Zrelak P, Baron R, et al. Positive predictive value of the AHRQ accidental puncture or laceration patient safety indicator. Ann Surg. 2009;250:1041-1045.

19. AHRQ Quality Indicators. Rockville, MD: Agency for Healthcare Research and Quality.

20. Rosen AK, Rivard P, Zhao S, et al. Evaluating the patient safety indicators: how well do they perform on Veterans Health Administration data? Med Care. 2005;43:873-884.

21. Kizer KW, Stegun MB. Serious reportable adverse events in health care. In: Henriksen K, Battles JB, Marks E, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation, Vol. 4. Rockville, MD: Agency for Healthcare Research and Quality and Department of Defense; 2005.

22. NQS. Washington, DC: National Quality Forum.

23. CMS. Baltimore, MD: Centers for Medicare & Medicaid Services.

24. NYPORTS User's Manual. New York Patient Occurrence Reporting and Tracking System, Version 2.1. Albany, NY: New York State Department of Health; 2001.

 

Table


Table. Strategies for Measuring Patient Safety.

Retrospective Chart Review
Advantages: Considered the "gold standard"; contains rich, detailed clinical information.
Disadvantages: Costly and labor-intensive; data quality variable due to incomplete clinical information; retrospective review only.

Incident Reporting Systems
Advantages: Useful for internal quality improvement and case-finding; highlights adverse events that providers perceive as important.
Disadvantages: Capture only a small fraction of adverse events that occur; retrospective review based solely on provider self-reports; no standardization or uniformity of adverse events reported.

Automated Surveillance
Advantages: Can be used retrospectively or prospectively; helpful for screening patients who may be at high risk for adverse events using standardized protocols.
Disadvantages: Requires electronic data to run automated surveillance; a high proportion of "triggered" cases are false positives.

Administrative/Claims Data
Advantages: Low-cost, readily available data; useful for tracking events over time across large populations; can identify "potential" adverse events.
Disadvantages: Lack detailed clinical data; concerns over variability and inaccuracy of ICD-9-CM codes across and within systems; may detect a high proportion of false positives.

 

 
