Artificial Intelligence: System-Level Considerations

Last Updated: July 31, 2024
Created By: Lorri Zipperer, Cybrarian, AHRQ PSNet Team

Description
This curated library explores overarching components that inform system-level design and use of artificial intelligence to improve and support safe care delivery. Resources specific to distinct tasks (decision support) and care processes (diagnosis) are not included.
All Library Content (19)
Fleisher LA, Economou-Zavlanos NJ. JAMA Health Forum. 2024;5(6):e241369.
Artificial intelligence (AI) is being characterized as a medical device that requires guidance to ensure its safe use in health care. The authors highlight existing authority through the Centers for Medicare & Medicaid Services (CMS) and other government entities to track and respond to instances of AI use that result in patient harm. They also call for specific training developments that enable clinicians to apply AI-generated information effectively in front-line care.
Hirani R, Noruzi K, Khuram H, et al. Life (Basel). 2024;14(5):557.
Artificial intelligence (AI) applications in healthcare continue to grow. This article summarizes the history and evolution of AI in healthcare and describes current applications, such as integration with telemedicine and advances in personalized medicine. The authors also discuss how AI is being used to advance patient engagement and communication (e.g., through chatbots) and medical education, as well as ethical considerations as healthcare continues to integrate AI into practice.
Chen F, Wang L, Hong J, et al. J Am Med Inform Assoc. 2024;31(5):1172-1183.
When biased data are used for research, the results may reflect the same biases if appropriate precautions are not taken. In this systematic review, researchers describe possible types of bias (e.g., implicit, selection) that can result from research with artificial intelligence (AI) using electronic health record (EHR) data. Along with recommendations to reduce the introduction of bias into the data model, the authors stress the importance of standardized reporting of model development and real-world testing.
Patrick Tighe, MD, MS; Bryan M. Gale, MA; Sarah E. Mossburg, RN, PhD
Patrick Tighe, MD, MS, is a practicing anesthesiologist at University of Florida Health (UF Health) and the executive director of UF Health’s Quality and Patient Safety Initiative. We spoke to him about the current and potential impacts of artificial intelligence (AI) on patient safety as well as challenges to successful implementation.

Goldberg CB, Adams L, Blumenthal D, et al. NEJM AI. 2024;1(3).
Artificial intelligence (AI) is increasingly being used and studied in healthcare. This perspective shares insights from the RAISE (Responsible AI for Social and Ethical Healthcare) conference, highlighting that AI in healthcare needs to enhance patient care, support healthcare professionals, and be accessible and safe for all.
Verma AA, Trbovich PL, Mamdani MM, et al. BMJ Qual Saf. 2024;33(2):121-131.
Artificial intelligence and machine learning present both opportunities and threats to patient safety. This article highlights machine learning applications in quality improvement and patient safety (e.g., decision support) and practice considerations before deploying machine learning applications (e.g., presence of underlying biases). The authors provide several recommendations for optimizing implementation of machine learning applications in healthcare settings.
Mello MM, Guha N. N Engl J Med. 2024;390(3):271-278.
Artificial intelligence (AI) has the potential to enhance health care, but there are still concerns regarding its use. This article discusses the challenges in applying existing liability law principles to the increasing use of AI in healthcare. The authors discuss risk management approaches that clinicians and organizations can use to manage AI-related liability risk.

Matheny M, Israni ST, Ahmed M, et al., eds. Washington, DC: National Academy of Medicine; 2022. ISBN: 9781947103177.

Advanced computing technologies have the capacity to greatly affect health equity, decision making, and care in both positive and detrimental ways. This report discusses optimism associated with artificial intelligence (AI) and ethical, social, economic, and safety cautions stemming from global use of AI in health care.
Rowland SP, Fitzgerald JE, Lungren M, et al. NPJ Digit Med. 2022;5(1):157.
The rapid expansion of digital health technologies, particularly in response to the COVID-19 pandemic, can increase patient safety risks. This article summarizes malpractice liability risks associated with digital health technologies, including electronic health record (EHR) systems, telehealth, and artificial intelligence for clinical decision support.
Sujan M, Pool R, Salmon P. BMJ Health Care Inform. 2022;29(1):e100516.
Engineering and design concepts are being applied across many health care domains to improve safety. This article summarizes 8 tenets from human factors and ergonomics practice to reduce the potential for the pressures of humanness to detract from the safe use of artificial intelligence.
Gonzalez-Smith J, Shen H, Singletary E, et al. NEJM Catal Innov Care Deliv. 2022;3(4).
Clinical decision support (CDS) helps clinicians select appropriate medications, arrive at a correct diagnosis, and improve intraoperative decision making. Through interviews with health system executives, clinicians, and artificial intelligence (AI) experts, this study presents multiple perspectives on selection and adoption of AI-CDS in healthcare. Four emerging trends are presented: (1) AI must solve a priority problem; (2) the tool must be tested with the health system’s patient population; (3) it should generate a positive return on investment; and (4) it should be implemented efficiently and effectively.

Health Ethics & Governance, World Health Organization. Geneva, Switzerland: World Health Organization; 2021. ISBN: 9789240029200

Advanced computing technologies can help or hinder safe care. This guidance summarizes ethical concerns and risks stemming from the influx of artificial intelligence (AI) into decision making throughout health care. The report provides 6 tenets to guide AI implementation worldwide and shares governance recommendations to ensure the clinical and public health impacts of AI are equitable, responsible and safe.

Obermeyer Z, Nissan R, Stern M, et al. Center for Applied Artificial Intelligence, Chicago Booth: June 2021.

Biased algorithms are receiving increasing attention as artificial intelligence (AI) becomes more present in health care. This publication shares four steps organizations can use to assess their algorithms and reduce the potential for bias to negatively influence clinical and administrative decision making.
Quinn TP, Senadeera M, Jacobs S, et al. J Am Med Inform Assoc. 2021;28(4):890-894.
Artificial intelligence (AI) has the potential to enhance safety and improve diagnosis, but its use is not without risks and challenges. This article discusses the conceptual, technical, and humanistic challenges with AI in health care and how AI developers, validators, and operational staff can help overcome these challenges.
Holm S, Stanton C, Bartlett B. Health Care Anal. 2021;29(3):171-188.
Artificial intelligence (AI) is currently used to assist with many healthcare practices, including diagnosing cancer, detecting deterioration, and reconciling medications. As the use of AI continues to expand, regulators and legal experts will need to consider how to manage compensation for patients who have experienced medical errors. This commentary suggests no-fault compensation as a possible solution.

Washington, DC: United States Government Accountability Office; November 26, 2020. Publication GAO-21-7SP.

Artificial intelligence (AI) has the potential to enhance the safety and reliability of clinical and administrative functions. This US Government Accountability Office report outlines barriers impacting the widespread use of AI, such as privacy concerns and lack of development transparency. Collaboration and oversight are areas of policy focus highlighted to address these challenges.
Choudhury A, Asan O. JMIR Med Inform. 2020;8(7):e18599.
This systematic review explored how artificial intelligence (AI) based on machine learning algorithms and natural language processing is used to address and report patient safety outcomes. The review suggests that AI-enabled decision support systems can improve error detection, patient stratification, and drug management, but that additional evidence is needed to understand how well AI can predict safety outcomes.  
Habli I, Lawton T, Porter Z. Bull World Health Organ. 2020;98(4):251-256.
Using clinical artificial intelligence as an example, these authors posit that digital tools are challenging standard clinical practices around assigning blame and assuring patient safety. They discuss moral accountability for harm to patients and safety assurances to protect patients against such harm, and examine these issues from both a clinician and patient perspective.
Macrae C. BMJ Qual Saf. 2019;28(6):495-498.
The unintended risks associated with integrating artificial intelligence (AI) systems into health care are a popular topic of debate. This commentary suggests that strong guidance is necessary to reduce risks while capitalizing on the potential inherent in AI to enhance decision-making, diagnosis, and risk management.