THE HIDDEN BIAS IN AI HEALTHCARE ALGORITHMS: A CASE STUDY ON SYSTEMATIC DISCRIMINATION AND EMERGING REGULATIONS

Authored by Dr. Yatin Kathuria

INTRODUCTION

In recent years, Artificial Intelligence (AI) has played a vital role in driving progress across several sectors. The healthcare industry, in particular, has seen considerable innovation, transforming medical technology and the delivery of healthcare services. However, along with its potential benefits, AI also brings substantial risks, particularly with respect to fairness and equity. A prominent example is a 2019 study published in Science that exposed systematic discrimination against Black patients by an AI algorithm used by hospitals in the United States. This blog examines the details of that case, explaining how the algorithm worked, the biases it propagated, and the emerging regulations governing AI systems in healthcare.

THE STUDY AND ITS FINDINGS

The 2019 study analyzed an AI algorithm used by hospitals to decide which patients would benefit most from extra medical care. The algorithm was applied to a population of more than 200 million people in the U.S., with the objective of identifying patients with complex health needs who would require additional care to improve their health outcomes. However, the study revealed that the algorithm unduly favored white patients over Black patients: Black patients were considerably less likely to be identified as needing extra care. Specifically, at any given level of predicted risk, Black patients were substantially sicker than white patients, yet they were less frequently selected for additional care despite having similar or greater treatment requirements. This bias not only reflected existing health inequalities but aggravated them by systematically denying Black patients the additional care they required.

The root cause lay in the data. Historically, Black patients have had less access to healthcare facilities, leading to lower overall healthcare expenditures compared with white patients. As a result, the algorithm underestimated the health needs of Black patients, producing systematic discrimination.

HOW THE ALGORITHM WORKED

AI algorithms typically analyze large datasets to identify patterns and make predictions. In this case, the healthcare algorithm was designed to predict which patients would benefit most from greater medical attention. Here’s a simplified breakdown of the process:

  1. Data Collection: The algorithm collected data on patients’ healthcare utilization, including the frequency of doctor visits, hospitalizations, and overall healthcare expenditure.
  2. Cost as a Predictor: It then used healthcare expenses as the primary indicator of a patient’s health needs, operating under the assumption that higher spending correlates with greater health needs.
  3. Risk Scores: Patients were assigned risk scores based on their anticipated healthcare costs; those with the highest scores were flagged for extra care and medical resources (a minimal sketch of this pipeline follows below).
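
The sketch below reproduces that three-step pipeline on synthetic data. All feature names and numbers are hypothetical; the actual model and dataset are proprietary. The point to notice is that cost, not health, is the training label:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000

    # Step 1: utilization data (synthetic stand-ins for the real features)
    visits = rng.poisson(4, n)        # doctor visits per year
    admissions = rng.poisson(0.5, n)  # hospitalizations per year
    conditions = rng.poisson(2, n)    # chronic conditions (a marker of true need)

    # Step 2 (the flaw): annual cost is the training label, standing in for need
    cost = 200 * visits + 5_000 * admissions + rng.normal(0, 500, n)

    # Fit a simple linear model; its predicted cost serves as the "risk score"
    X = np.column_stack([np.ones(n), visits, admissions, conditions])
    coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
    risk_score = X @ coef

    # Step 3: flag the highest-scoring patients for extra care
    flagged = risk_score >= np.percentile(risk_score, 97)  # e.g., top 3%
    print(f"{flagged.sum()} patients flagged for additional care")

Because the chronic-condition count contributes nothing to cost in this toy setup, the fitted model learns to ignore it: the score tracks spending, not sickness.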

The central flaw in this process was the reliance on healthcare expenses as a proxy for actual health needs, which effectively encoded and propagated historical biases. In the U.S., Black patients have historically faced discrimination on multiple fronts, including access to healthcare, as a result of socioeconomic factors and the limited availability of services. These barriers led to lower utilization of healthcare facilities and lower spending, which the algorithm misconstrued as evidence of lower health needs.
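
A small, fully synthetic experiment makes the mechanism concrete (every number here is illustrative, not taken from the study): give two groups identical underlying health needs, let one group incur lower costs for the same need because of access barriers, and then score everyone by cost.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Identical distribution of true health need in both groups
    need = rng.poisson(3, n)           # e.g., chronic condition count
    group_b = rng.random(n) < 0.5      # group B faces access barriers

    # The same need translates into ~30% lower spending for group B
    spend_per_condition = np.where(group_b, 700.0, 1_000.0)
    cost = need * spend_per_condition + rng.normal(0, 300, n)

    # A cost-trained model's risk score is, in effect, predicted cost;
    # flag the top 3% by score for extra care
    flagged = cost >= np.percentile(cost, 97)

    flagged_b = (flagged & group_b).sum()
    print("Share of flagged patients from group B:",
          round(flagged_b / flagged.sum(), 2))
    print("Mean true need, flagged group A:", need[flagged & ~group_b].mean())
    print("Mean true need, flagged group B:", need[flagged & group_b].mean())

Group B ends up badly under-represented among the flagged patients, and the few group B patients who are flagged are sicker on average than their group A counterparts, since only the sickest can cross the cost threshold. This is the same signature the Science study reported: at equal risk scores, Black patients were sicker than white patients.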


EMERGING ETHICAL AND REGULATORY STANDARDS GOVERNING AI IN HEALTHCARE

Healthcare providers and AI developers must prioritize ethical considerations in the deployment of AI systems. This includes continuous monitoring for biases, transparent methodologies, and inclusive data practices that reflect the diverse populations served by these technologies. In view of the issues and concerns associated with adopting AI systems in the health sector, several international and national organizations have issued ethical standards and guidance for the responsible deployment of healthcare AI systems.

ETHICAL FRAMEWORKS ADOPTED BY WHO

In 2021, the World Health Organization (WHO) issued its guidance on the Ethics and Governance of Artificial Intelligence for Health, the product of 18 months of deliberation among leading experts in ethics, digital technology, law, and human rights, as well as experts from ministries of health. The report identifies the ethical challenges and risks of using artificial intelligence in health and sets out six key principles to ensure that AI works for the public benefit in all countries. It also contains a set of recommendations intended to ensure that the governance of AI for health maximizes the promise of the technology and holds all stakeholders, public and private, accountable to those who rely on these technologies and to the communities and individuals whose health will be affected by their use.

On January 18, 2024, the WHO released further guidance, Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models, focusing on large multi-modal models (LMMs), a rapidly evolving type of generative AI with significant applications in healthcare. The guidance includes over 40 recommendations aimed at governments, technology companies, and healthcare providers to ensure that LMMs are used responsibly to enhance public health while managing the associated risks. LMMs can process various types of input, such as text, video, and images, and generate diverse outputs. They have potential applications in diagnosis and clinical care, patient-guided symptom investigation, administrative tasks, medical education, and scientific research, including drug development. Despite these benefits, the guidance recognizes concerns about the accuracy, bias, and quality of the data used to train these models, which could lead to harmful outcomes if not properly managed. It emphasizes the need for robust regulatory frameworks and ethical standards, urging countries to set standards for the development and deployment of LMMs in healthcare, to implement laws and regulations that uphold ethical obligations and human rights, and to conduct mandatory audits and impact assessments that ensure compliance and transparency in AI-based medical services.

ICMR GUIDELINES FOR APPLICATION OF AI IN INDIAN HEALTHCARE

In India, the Indian Council of Medical Research (ICMR) released its Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare in 2023, establishing a comprehensive framework for ethical and responsible AI deployment in the healthcare sector. The guidelines emphasize key principles such as transparency and explainability, data privacy and security, and non-discrimination and fairness, with the aim of mitigating biases and promoting inclusive development. By mandating that training data be accurate and representative of the intended population, and by requiring external audits and continuous feedback to minimize biases, the ICMR places particular emphasis on data quality and algorithmic fairness. The guidelines also stress the importance of including under-represented and vulnerable groups, actively promoting the inclusion of women and minorities, and ensuring equality, freedom, and dignity for all individuals. Additionally, they require that AI technologies be designed for universal use, free from discrimination based on race, age, caste, religion, or social status.
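
One very simple check that a representativeness audit of this kind might include is a comparison of demographic shares in the training data against the intended patient population. All figures below are made up, and the 80% tolerance is an illustrative threshold loosely echoing the “four-fifths” disparate-impact heuristic; the ICMR guidelines do not prescribe a specific cutoff:

    # Hypothetical demographic shares: intended population vs. training data
    population_share = {"group_1": 0.48, "group_2": 0.32, "group_3": 0.20}
    training_share = {"group_1": 0.61, "group_2": 0.30, "group_3": 0.09}

    TOLERANCE = 0.8  # training share should be at least 80% of population share
    for group, pop in population_share.items():
        ratio = training_share[group] / pop
        status = "ok" if ratio >= TOLERANCE else "UNDER-REPRESENTED"
        print(f"{group}: population {pop:.0%}, training "
              f"{training_share[group]:.0%}, ratio {ratio:.2f} -> {status}")

A real audit would go much further (checking outcomes and error rates per group, not just headcounts), but even this simple ratio surfaces the under-representation that the guidelines ask developers to detect and correct.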


CONCLUSION

While AI holds promise for improving healthcare delivery and outcomes, it is crucial to approach its implementation with effective regulatory standards that ensure accountability, transparency, fairness, and equity. A balanced approach is needed to create an environment in which we can harness the benefits of healthcare AI solutions without compromising civil rights or perpetuating systemic inequities.