
AI and Health Disparities in the Age of COVID-19

Technological advances, including in artificial intelligence, have played a key role in tracking, diagnosing and preventing health conditions such as COVID-19 around the world. However, AI tools come with significant limitations, especially for those in underserved and underrepresented communities.

Fay Cobb Payton, Ph.D., University Faculty Scholar and professor of information technology and analytics at Poole College of Management, explains the potential health disparities in patient diagnosis and care that result from algorithmic bias.

“When we talk about a situation like COVID regarding racial disparities and increasing cases in the Black community, the news coming out of New Orleans, Chicago, New York City and Washington, D.C. is no surprise to me,” Payton says. “We frequently see health disparities and biases in areas with larger populations of underserved minority or low socioeconomic status individuals. AI is supposed to help determine what kind of healthcare will be provided to millions of people. The problem is that research continues to show that minority populations are not factored into these algorithms.”

While AI’s role in healthcare has proven helpful in many regards – such as analyzing mammograms 30 times faster than traditional methods or diagnosing asthma flare-ups with 97 percent accuracy – Payton reminds us that it does not serve all communities equitably.

“When unconscious or implicit bias enters the picture as an individual goes to an emergency room, that patient may not receive the appropriate quality of care. COVID does not exist singularly. There are usually other comorbid conditions, such as heart disease and diabetes, at play. We know that those compounding conditions are seen disproportionately in minority communities,” Payton explains. “If outcomes are built around structural inequities, then the algorithm is going to be biased as well. That, in turn, impacts future health outcomes, cost of care, delivery of care, and other scenarios.”

These issues will continue to arise as discussions carry on around who will have access to testing, where COVID testing centers will be located, and what resources must be available for treatment to be delivered.

The question is, then, how do we overcome the biases we see in AI? Payton believes there are a few key areas that need to be addressed.

There needs to be inclusivity in the room. As these systems and tools are developed, we need inclusive workforces and interdisciplinary expertise that can speak to the implicit biases that are often overlooked. As Payton says, “All solutions can’t come from sameness.”

Accountability is necessary. Research has produced frameworks for the levels of fairness, accountability and accuracy that AI should adhere to. This requires governance, and these issues warrant attention, especially in light of a pandemic.
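To make those fairness metrics concrete, an audit often starts by comparing simple group-wise rates: who the model selects for care, and whom it misses. The sketch below is a minimal illustration of that idea, not any specific framework Payton references; the data, group labels and metric choices are hypothetical.

```python
# Minimal sketch of a group-fairness audit, assuming binary predictions
# and a self-identified group label. All data and names are hypothetical.

from collections import defaultdict

def rates_by_group(records):
    """Compute selection rate and false negative rate per group.

    Each record is (group, y_true, y_pred) with binary labels,
    e.g. y_true = 1 means the patient needed escalated care.
    """
    counts = defaultdict(lambda: {"n": 0, "pred_pos": 0,
                                  "actual_pos": 0, "false_neg": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        c["n"] += 1
        c["pred_pos"] += y_pred
        c["actual_pos"] += y_true
        c["false_neg"] += int(y_true == 1 and y_pred == 0)

    report = {}
    for group, c in counts.items():
        report[group] = {
            # Demographic parity compares selection rates across groups.
            "selection_rate": c["pred_pos"] / c["n"],
            # Equal-opportunity gaps show up as unequal false negative
            # rates: patients who needed care but the model missed.
            "false_negative_rate": (c["false_neg"] / c["actual_pos"]
                                    if c["actual_pos"] else 0.0),
        }
    return report

# Hypothetical audit data: (group, needed_care, model_flagged)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

for group, metrics in rates_by_group(records).items():
    print(group, metrics)
```

In this toy data, the two groups end up with different selection rates and different false negative rates, which is exactly the kind of gap a fairness-and-accountability review would flag before deployment.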

We have to look collectively at where AI is going and whom it’s going to benefit. Deploy human-in-the-loop processes and policies, such as the deferral sketch below. Healthcare accounts for roughly 17 percent of the nation’s GDP, one of its largest sectors. If we’re not addressing this issue now, we’re going to stall progress.
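One common form of the human-in-the-loop pattern is a deferral policy: the model acts only on high-confidence predictions and routes everything else to a clinician. The sketch below is an illustrative assumption, not a documented system; the thresholds, scores and action names are hypothetical.

```python
# Minimal sketch of a human-in-the-loop deferral policy: the model only
# acts on high-confidence predictions and routes the rest to a clinician
# for review. Thresholds, scores and action names are hypothetical.

def triage(patient_id, risk_score, act_above=0.9, clear_below=0.1):
    """Route a prediction: auto-handle only when confidence is high."""
    if risk_score >= act_above:
        return (patient_id, "flag_for_care")      # confident positive
    if risk_score <= clear_below:
        return (patient_id, "routine_follow_up")  # confident negative
    # Uncertain cases go to a human reviewer instead of the algorithm.
    return (patient_id, "refer_to_clinician")

for pid, score in [("p1", 0.95), ("p2", 0.50), ("p3", 0.05)]:
    print(triage(pid, score))
```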