The e-Autopsy/e-Biopsy: A Systematic Chart Review to Increase Safety and Diagnostic Accuracy Innovation

August 30, 2023
Summary

Diagnostic errors have long been a problem in the US healthcare system, and addressing them is essential to improving outcomes and patient safety.1 Many methods of reducing diagnostic error focus on individual factors and single cases rather than on the contribution of system factors or on patterns of diagnostic error across a disease or clinical condition. Instead of addressing individual cases, Kaiser Permanente (KP) sought to improve the diagnostic process and the systems that support it, targeting the systemic root causes of diagnostic error. In 2010, KP developed the e-Autopsy/e-Biopsy (eA/eB) methodology, an innovation designed to compile cases from specific patient populations and diagnoses with known quality, safety, and diagnostic issues.2 The goal of the innovation is to improve outcomes by identifying patterns of error for specific diagnoses. e-Autopsy (eA) denotes the review process used for patients who died; e-Biopsy (eB) denotes the corresponding non-mortality review process.

The eA/eB methodology is a three-part process involving reviewing cases, analyzing results, and implementing system changes. The case review process consists of two steps: electronic filtering of patient records and manual chart review. In the first step, patients with the defined diagnosis are identified electronically to obtain a pool of charts from which cases are randomly selected for review. The manual review is then conducted by physician reviewers using an electronic review tool with questions relevant to their area of expertise. After the review process is completed, the data from the review tool are analyzed by the eA/eB team, which includes physicians who oversee each eA/eB study, as well as specialty experts, to identify patterns of variation in care, diagnostic accuracy, and failure to provide evidence-based care. The specialty experts are clinicians (e.g., gynecologists) who have expertise in the specific condition being studied (e.g., ectopic pregnancy). The results of the review and suggested system improvement opportunities are then presented to subject matter experts and leaders. Lastly, after the results and recommendations have been shared, the eA/eB team works with specialty and senior organizational leadership teams to implement the system changes.1

Kaiser Permanente Southern California's (KPSC's) September 2022 study3 discusses the use of eA/eB to improve patient safety for three diagnoses: ectopic pregnancy, advanced colon cancer, and abdominal aortic aneurysm (AAA). The three-part process identified multiple systemic problems for each diagnosis. For example, in the eB review for ectopic pregnancy, the review of care found that only 56% of patients whose bloodwork indicated a risk of ectopic pregnancy were offered a repeat diagnostic ultrasound, and only 38% were offered a conclusive procedure, such as an endometrial biopsy, to clarify the diagnosis, as recommended in guidelines. Inconsistent use of screening guidelines was an issue identified in the eA process for advanced colon cancer: about 15% of patients had a delay in colon cancer diagnosis due to failure to work up rectal bleeding and microcytic anemia, and adenoma detection rates during colonoscopy were not being measured. A lack of screening was a significant issue identified in the eA process for AAA.1

Based on the eA/eB review findings, KPSC implemented several system changes that improved care across the three diagnoses. For ectopic pregnancy, the SureNet system flagged eight patients with possible early delays in care, prompting direct communication with the involved clinicians to reduce the chance of significant diagnostic delays. Without this system, these patients likely would not have been identified, because diagnostic errors are, by their nature, not routinely detected. As a result of the new system, intervention and appropriate care were provided to these eight patients between October 2021 and April 2022, and many more patients have been identified and appropriately cared for since then. The system changes for AAA, including a Best Practice Alert in the electronic health record (EHR), greatly increased the screening rate for early AAA and decreased the rate of failure to recognize and follow up diagnosed AAA. These and other system improvements reduced the number and rate of AAA ruptures (from an average of 1.03 per 100,000 members at baseline to 0.5 per 100,000 postimplementation). KPSC also implemented an electronic surveillance system4 to identify patients with rectal bleeding and/or microcytic anemia who had not been worked up, which resulted in testing for colon cancer and reduced potential delays in diagnosis. Finally, between 2016 and 2018, adenoma detection rates during colonoscopy increased from 30% to 34% among women and from 42% to 47% among men.5

Since 2010, KPSC has conducted a total of 12 eA/eB studies on a wide variety of topics, identifying many needed system improvements. The keys to success are selecting the right topic for study (i.e., objective criteria exist to diagnose the condition and evidence-based guidelines clarify optimal care) with an identified need for improvement in diagnosis; having the support of leadership; creating actionable results that are implemented in the system itself to support buy-in; and obtaining internal funding to implement those results.

Innovation Patient Safety Focus

The innovation focused on reducing diagnostic errors for certain diagnoses to improve patient safety.

Resources Used and Skills Needed

The intervention requires both physician and non-physician resources.

  • For physician resources, reviewing physicians spend about 15 minutes per chart across the 50 charts reviewed. KPSC typically uses two research physicians and four to six specialists in the particular condition being studied.
  • For non-physician resources, the intervention requires a project manager who does not need a clinical background, at 25% full-time equivalent per project. The project manager manages the manual review process and data collection completed by the physicians.
  • The innovation requires the ability to create the electronic review tool, which standardizes the manual chart reviews to ensure accuracy.
Use By Other Organizations

Per the innovating organization, the innovation can be replicated at other organizations, but those organizations must have the minimum required resources. The organization needs the capacity to review 50 charts, equivalent to approximately 12 hours of physician time per study (50 charts at roughly 15 minutes each). There must be enough cases of the disease (a minimum of 50 for chart review) in the system for the organization to study. There should also be leadership support for the goal of identifying system issues amenable to improvement and for implementing the necessary changes.

In smaller hospitals or organizations, fewer staff can be used, and designated staff can spend more time reviewing more charts.

Date First Implemented
2010
Problem Addressed

Diagnostic errors are a widespread and difficult problem.1 The 2015 National Academy of Medicine report found that “most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences.”1 Given the magnitude of the problem as outlined in the report, there is an urgent need to decrease the rate of diagnostic errors.

Methods used to detect diagnostic and other errors include voluntary reporting, malpractice claims, patient complaints, physician surveys, random quality reviews and audits, and peer review data. These methods usually evaluate single cases rather than the systems that allowed the error. A common outcome is for the involved provider to receive additional education and be told to “be more careful,” yet most individual error is caused by defective systems of care.

For the above reasons, KPSC sought to investigate the defective systems of care to improve patient outcomes rather than relying on single providers to improve outcomes themselves.

Description of the Innovative Activity

eA/eB differs from other methods of detecting diagnostic error, which rely on voluntary reporting, patient complaints, and physician surveys. Instead, eA/eB examines specific clinical diagnoses that meet a set of designated criteria, seeking to improve the systems that cause diagnostic errors and patient safety events rather than focusing on individual events. eA/eB involves a hybrid review of electronic screening and manual chart abstraction, guided by a small dedicated eA/eB team.2

  • Review Process:
    • Electronic Filtering of Patient Records
      • KPSC electronically identifies patients with the defined diagnosis to obtain a pool of charts from which cases are randomly selected for review (a minimal filtering-and-sampling sketch follows this list).
    • Conducting Manual Reviews
      • The manual review process is conducted by physician reviewers using an electronic review tool with questions relevant to their area of expertise.
      • Manual reviews average approximately 15 to 20 minutes per chart, depending on complexity.
      • KPSC enlists five to eight reviewers per study, who review a total of 50 charts over four to six weeks (one or two charts per reviewer per week), using secure electronic communication behind the organization’s firewall for all personal health information.
  • Analyzing Results:
    • The results are analyzed by the eA/eB team and specialty experts using the review tool spreadsheets to identify patterns of variations in care, diagnostic accuracy, and failure to reliably provide evidence-based care.
    • The results of the review and any recommended system improvements are then presented to subject matter experts and hospital/organizational leaders.
  • Implementation:
    • After the results and recommendations based on the eA/eB study are shared, specialty leaders and senior organizational leadership work to implement the system changes.
    • After most studies, existing clinical and operational teams are assigned responsibility (or new teams are created if necessary) to address the system issues, and progress is monitored to verify improvement.
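
As a rough illustration of the electronic filtering and sampling step, the sketch below (Python) filters a hypothetical export of administrative data down to the study diagnosis and randomly draws a 50-chart review sample. The file name, column names, and ICD-10 prefix are assumptions for illustration only and do not represent KPSC's actual tooling.

    # Illustrative sketch (not KPSC's actual tooling): filter an exported case
    # list by diagnosis code and randomly sample 50 charts for manual review.
    import pandas as pd

    N_CHARTS = 50   # chart sample size cited in the article
    SEED = 2010     # fixed seed so the sample is reproducible and auditable

    # Hypothetical export of administrative data: one row per case, with
    # columns "member_id", "icd10_code", and "encounter_date".
    cases = pd.read_csv("eligible_cases.csv")

    # Keep only cases carrying the study diagnosis (ectopic pregnancy, ICD-10 O00.*).
    pool = cases[cases["icd10_code"].str.startswith("O00")]

    # Randomly select the review sample from the filtered pool.
    sample = pool.sample(n=min(N_CHARTS, len(pool)), random_state=SEED)

    # Hand the sampled cases to the physician reviewers' electronic review tool.
    sample[["member_id", "encounter_date"]].to_csv("chart_review_sample.csv", index=False)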
Context of the Innovation

Diagnostic errors have long been a problem in the US healthcare system, and reducing them is essential to improving outcomes and patient safety. Given the magnitude of the problem as outlined in the 2015 National Academy of Medicine report,1 there is an urgent need to decrease the rate of diagnostic errors. The report found that “most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences.” Additionally, per the Society to Improve Diagnosis in Medicine (SIDM), major diagnostic errors are found in 10% to 20% of autopsies, suggesting that 40,000 to 80,000 patients die annually in the United States from diagnostic errors.

Current methods of addressing diagnostic error often focus solely on the individual and on single cases rather than on the system. KPSC sought to improve the systems themselves to increase patient safety across the diagnostic process.

Results

For each disease, the innovators used either the eA or eB methodology: the evaluation of ectopic pregnancy used eB, while abdominal aortic aneurysm and colorectal cancer used eA.

The eB methodology for ectopic pregnancy revealed actionable trends in diagnostic delays. Based on those findings, Kaiser implemented interventions to improve the diagnosis of ectopic pregnancy. For example, quantitative human chorionic gonadotropin (hCG) results now incorporate recommendations to repeat testing and to consider consultation with an obstetrician if the repeat test indicates less than a 36% rise in 48 hours. In addition, the electronic medical record (EMR) features an order set to assist clinicians in ordering the appropriate diagnostic tests. KPSC created the SureNet system4 to identify patients with no follow-up visit scheduled despite an abnormal rise in sequential hCG results. The SureNet system “leverages electronic health information to efficiently identify and address a variety of potential care gaps across different clinical diagnoses.”4 These systems scan electronic health records to identify patients with a certain condition or event; examples include harmful medication interactions or potentially missed diagnoses, such as abnormal test results with no evidence of follow-up care.2 After implementation of the SureNet system for patients with a potential ectopic pregnancy, from October 2021 through April 2022, eight patients with delays in care were identified and provided with appropriate care.1 Without the SureNet system, these eight patients might have experienced significant delays in care, and a patient safety event could have occurred. Since April 2022, many additional patients have been identified and managed.
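
To make the repeat-hCG rule concrete, the sketch below (Python) flags a patient whose repeat quantitative hCG rose less than 36% over roughly 48 hours and who has no follow-up on file, consistent with the rule described above. The data structure and field names are hypothetical; the actual SureNet logic is considerably more sophisticated.

    # Illustrative sketch (not the SureNet implementation): flag patients whose
    # repeat quantitative hCG rose less than 36% over ~48 hours and who have no
    # follow-up visit on file.
    from dataclasses import dataclass

    MIN_RISE_PCT = 36.0  # minimum reassuring 48-hour rise cited in the article

    @dataclass
    class HcgPair:
        member_id: str
        first_hcg: float    # mIU/mL, initial draw
        repeat_hcg: float   # mIU/mL, drawn roughly 48 hours later
        has_followup: bool  # any follow-up visit or consultation on the schedule

    def needs_outreach(p: HcgPair) -> bool:
        """True if the 48-hour rise is below 36% and no follow-up is scheduled."""
        rise_pct = (p.repeat_hcg - p.first_hcg) / p.first_hcg * 100.0
        return rise_pct < MIN_RISE_PCT and not p.has_followup

    # Example: a 20% rise with no follow-up scheduled would be flagged for outreach.
    print(needs_outreach(HcgPair("A123", first_hcg=1000.0, repeat_hcg=1200.0, has_followup=False)))  # True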

The 2011 eA review for AAA identified 24 patients who died with a diagnosis of ruptured AAA in a KPSC hospital within a one-year period.2 The review found a lack of screening to be the major issue. It also noted a case in which an incidental AAA mentioned in a radiology report was not recognized or followed up by the ordering physician. Following these findings, the SureNet system and a screening program based on a Best Practice Alert for high-risk patients were implemented to increase detection of occult AAAs and to identify AAAs seen on radiology but not acted upon. In 2020, this program identified 686 patients who had been diagnosed with AAA but had not received appropriate follow-up until flagged by the SureNet system. After implementation of these system improvements, the number and rate of AAA ruptures decreased (from an average of 1.03 per 100,000 members at baseline to 0.5 per 100,000 postimplementation).

Inconsistent use of screening guidelines was an issue identified in the eA for advanced colon cancer.5 Of patients diagnosed with stage 3 or 4 cancer, 31% were not screened per guidelines, and many did not have appropriate follow-up testing (such as computed tomography [CT] scans and carcinoembryonic antigen [CEA] tests) after diagnosis. Decision-support systems for CT scans and CEA tests at appropriate intervals post-surgery were implemented. In addition, adenoma detection rates are now measured, and the pathology department tests all specimens for Lynch syndrome, a hereditary condition that greatly increases the risk of colon cancer and may also be present in patients’ family members. Due to these changes, completion rates for surveillance, when indicated, have risen to 61% for CEA and 67% for CT scans. From 2016 to 2018, adenoma detection rates increased from 30% to 34% in women and from 42% to 47% in men. Many patients were identified with Lynch syndrome, allowing their family members to benefit from earlier and more intensive screening.5

Planning and Development Process

The development process includes selecting a diagnosis, creating a review team, and developing a review tool.

The diagnosis is selected using five predetermined criteria: there is suspicion of an opportunity to improve diagnostic accuracy; cases of the diagnosis can be filtered using administrative data; evidence-based guidelines and best practices exist for the diagnosis; the diagnosis is likely to benefit from a systems approach to reducing error; and organizational leadership supports studying the diagnosis and implementing potential changes.2

The review team should consist of two types of physicians: one or more research consultants who consult with clinical leaders on the disease being reviewed, and specialty physicians who conduct the manual chart reviews. For example, a study of colorectal cancer care was done in collaboration with oncology leaders and conducted by a team of oncologists, a pathologist, and an oncologic surgeon.5

The eA/eB team works with specialty experts to create an electronic review tool that standardizes the manual chart reviews. The tool is an electronic spreadsheet that incorporates guidelines and other evidence-based criteria, with the number and content of questions determined by the implementation team to ensure that evidence-based criteria are applied consistently.
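
As a minimal sketch of what such a review-tool template might contain, the example below (Python) writes out a spreadsheet with one row per question for each physician reviewer to complete per chart. The questions shown are illustrative, loosely based on the ectopic pregnancy review described above, and are not KPSC's actual instrument.

    # Illustrative sketch (hypothetical questions, not KPSC's actual tool):
    # generate a standardized review-tool spreadsheet template.
    import csv

    REVIEW_QUESTIONS = [
        # (question_id, question text, guideline or criterion it is based on)
        ("Q1", "Was a repeat diagnostic ultrasound offered after inconclusive bloodwork?", "Clinical guideline (assumed)"),
        ("Q2", "Was a conclusive procedure (e.g., endometrial biopsy) offered?", "Clinical guideline (assumed)"),
        ("Q3", "Was obstetric consultation obtained for an abnormal hCG rise?", "Internal best practice (assumed)"),
    ]

    with open("review_tool_template.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["chart_id", "question_id", "question", "criterion", "answer (Y/N/NA)", "comments"])
        for qid, text, criterion in REVIEW_QUESTIONS:
            writer.writerow(["", qid, text, criterion, "", ""])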

Resources Used and Skills Needed

The intervention requires both physician and non-physician resources.

  • For physician resources, reviewing physicians spend about 15 minutes per chart across the 50 charts reviewed. KPSC typically uses two research physicians and four to six specialists in the particular condition being studied.
  • For non-physician resources, the intervention requires a project manager who does not need a clinical background, at 25% full-time equivalent per project. The project manager manages the manual review process and data collection completed by the physicians.
  • The innovation requires the ability to create the electronic review tool, which standardizes the manual chart reviews to ensure accuracy.
Funding Sources

The innovation is funded internally through staff employed by the organization.

Getting Started with This Innovation

The intervention starts by selecting an appropriate diagnosis and identifying clear guidelines. Diagnoses chosen should have clear and consistent data that can be used to implement system changes to improve patient safety. Data will contribute toward obtaining buy-in from leadership. The intervention must have actionable results that can be implemented within the innovating organization. There must be funding to implement actionable results in the system.

Sustaining This Innovation

It is important that each study produce actions that are actually implemented and completed; this facilitates continuous buy-in from organizational staff. Without implemented changes that produce positive results, interest from organizational staff and leadership will wane. Additionally, the results and impact of each project’s findings and system improvements must be communicated to organizational staff to maintain buy-in for the eA/eB improvement projects.

References/Related Articles

Litman KC, Lau H, Kanter MH, Jones JP. eAutopsy: using structured hybrid manual/electronic mortality reviews to identify quality improvement opportunities. Jt Comm J Qual Patient Saf. 2014;40(10):444-451.

Kanter MH, Ghobadi A, Lurvey L, Liang S, Litman K. The e-Autopsy/e-Biopsy: a systematic chart review to increase safety and diagnostic accuracy. Diagnosis. 2022;9(4):430-436.

Imley T, Kanter MH, Timmins R, Adams AL. Creating a safety net process to improve colon cancer diagnosis in those with rectal bleeding. Perm J. Published online 2022.

Schottinger JE, Kanter MH, Litman KC, et al. Using literature review and structured hybrid electronic/manual mortality review to identify system-level improvement opportunities to reduce colorectal cancer mortality. Jt Comm J Qual Patient Saf. 2016;42(7):303-308.

Link to the KP SCPMG DEx website

Footnotes
  1. Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. National Academies Press; 2015.
  2. Litman KC, Lau H, Kanter MH, Jones JP. eAutopsy: using structured hybrid manual/electronic mortality reviews to identify quality improvement opportunities. Jt Comm J Qual Patient Saf. 2014;40(10):444-451.
  3. Kanter MH, Ghobadi A, Lurvey L, Liang S, Litman K. The e-Autopsy/e-Biopsy: a systematic chart review to increase safety and diagnostic accuracy. Diagnosis. 2022;9(4):430-436.
  4. Danforth KN, Smith AE, Loo RK, Jacobsen SJ, Mittman BS, Kanter MH. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. EGEMS. 2014;2:1056.
  5. Kanter MH, Schottinger JE, Joshua AP, Slezak JM. Beyond screening: an interim report and analysis of a multimodal initiative to decrease colon cancer mortality. Jt Comm J Qual Patient Saf. 2022;48:388-394.
The inclusion of an innovation in PSNet does not constitute or imply an endorsement by the U.S. Department of Health and Human Services, the Agency for Healthcare Research and Quality, or of the submitter or developer of the innovation.
Contact the Innovator
  • Michael H. Kanter, Michael.H.Kanter@kp.org, Southern California Permanente Medical Group and Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California
  • Kerry C Litman, Kerry.C.Litman@kp.org, Southern California Permanente Medical Group and Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California
  • Ali Ghobadi, Ali.X.Ghobadi@kp.org, Southern California Permanente Medical Group and Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California
  • Mimi Hugh, Mimi.Hugh@kp.org, Southern California Permanente Medical Group Department of Performance Assessment, Pasadena, California
  • Sophia T. Liang, Sophia.T.Liang@kp.org, Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California
