An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes.
While artificial intelligence (AI) in healthcare may improve some areas of patient care, its overall safety depends, in part, on the algorithms used to train it. At the start of the COVID-19 pandemic, one hospital developed four AI models to predict risks such as hospitalization or ICU admission. Researchers found that model-level bias appeared inconsistently across the models and recommend a holistic approach to searching for unrecognized bias in health AI.
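One way to make the idea of searching for model-level bias concrete is a subgroup disparity check: compare an error rate (here, the false-negative rate) across demographic groups and flag large gaps for review. The metric, toy data, and group labels below are illustrative assumptions for a minimal sketch, not the method or data from the study described above.

```python
# Hypothetical sketch of a subgroup bias audit for a risk-prediction model.
# All data and group labels below are synthetic and illustrative.

def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for t, p in positives if p == 0) / len(positives)

def subgroup_fnr_gap(y_true, y_pred, groups):
    """Per-subgroup false-negative rates and the largest gap between them."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_negative_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    gap = max(rates.values()) - min(rates.values())
    return gap, rates

# Toy example: true outcomes, model predictions, and a demographic attribute.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, per_group = subgroup_fnr_gap(y_true, y_pred, groups)
print(per_group)  # false-negative rate per subgroup
print(gap)        # a large gap would prompt further investigation
```

A check like this is only one slice of a holistic audit; in practice it would be repeated across metrics, thresholds, and intersections of attributes for each model.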