
How can we ensure safe and fair AI in healthcare?

Andre Anjos & Jung Park

Artificial intelligence is transforming how healthcare is delivered, and we all have a role to play in making sure it is done safely and without bias, argue researchers at the forefront of AI and medicine.

Artificial intelligence (AI) is revolutionising healthcare by improving the accuracy of diagnostics, streamlining treatments, and personalising patient care. Tools like predictive health software and medical image analysis systems are at the forefront of these innovations.

In Switzerland and around the world, hospitals face mounting financial pressure. AI technologies may offer a way to reduce costs by speeding up diagnosis, screening, reporting, and decision-making, provided they are properly managed and operated by medical experts. By combining human expertise with AI, we hope such organisations will be able to manage resources more efficiently and address financial pressures more effectively.

But deploying AI in clinical settings raises critical questions about reliability and equity. While AI is often treated as a black-box solution, it must be scrutinised for biases that arise from how it is developed and used. Can AI systems truly offer equal care to all patients?

Why AI bias happens

Bias can creep in at different stages of AI development, such as during problem set-up, data collection, and algorithm design.

Data sources: The most common source of bias in AI is the training data, the information used to teach AI models how to make decisions. If the training data does not represent all groups of people, or includes past biases, the AI model is likely to repeat these issues. For example, AI systems used to analyse chest X-rays have shown bias, particularly affecting women, Black patients, and low-income individuals, because these groups are underrepresented in the training data. Another example involves AI tools for detecting skin cancer, such as melanoma, which were predominantly trained on images of individuals with lighter skin colour, potentially resulting in lower accuracy for patients with darker skin.

Model construction: The structure of AI models can also contribute to bias. These models are often fine-tuned for accuracy, which might sideline the importance of fairness, affecting rare but crucial cases. For example, an AI model used in the criminal justice system, known as the COMPAS algorithm, was designed to predict the likelihood of a defendant re-offending. Although it was accurate in many cases, it exhibited bias by disproportionately assigning higher risk scores to Black defendants compared to white ones. This occurred because the model considered factors like arrest history and socioeconomic background, which can be influenced by systemic biases in policing and society. As a result, Black defendants were unfairly treated by the justice system. In the healthcare domain, an algorithm used nationwide in the USA to identify patients with complex health needs was found to favour white patients over Black individuals because it used incurred healthcare costs as a proxy for health needs, unintentionally introducing racial bias.
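The cost-as-proxy problem can be sketched in a few lines of Python. The simulation is hypothetical: it assumes an access gap in which one group historically incurs lower costs at the same level of medical need, so a rule that enrols the highest predicted spenders selects almost no one from that group, even though both groups are equally ill.

```python
import random

random.seed(1)

# Hypothetical simulation: 'need' is the true severity of illness.
# Historical spending is lower for group B at the same level of need
# (e.g. due to unequal access to care), so cost is a biased proxy.
patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.uniform(0, 1)
        access = 1.0 if group == "A" else 0.6  # assumed access gap
        patients.append({"group": group, "need": need, "cost": need * access})

# A cost-based model enrols the top 10% of predicted spenders.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
enrolled = by_cost[:200]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
# Both groups have identical need, so an unbiased rule would give ~0.5.
print(f"share of group B among enrolled: {share_b:.2f}")
```

The point of the sketch is that the model is "accurate" at its stated task of predicting cost; the unfairness enters earlier, in the choice of cost as the target variable.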

Human oversight: The preferences and biases of developers can infiltrate AI systems through decisions on data selection, problem definition, and the intended goals of trained models. These biases can significantly impact the fairness of AI systems, particularly if the development team lacks diversity, which limits the range of perspectives and can lead to tools that do not perform adequately for all users. For example, research during the COVID-19 pandemic in the United States revealed that AI systems for health monitoring and diagnosis often under-sampled data from marginalised communities, such as Black, Hispanic, and Native American groups, worsening health disparities.

The impact of AI bias

Bias in AI raises significant ethical and legal issues. Incorrect diagnoses or inappropriate treatments violate the basic principles of equity in healthcare. Article 8, paragraph 2 of the Swiss Constitution emphasises that everyone should be treated equally under the law, underscoring the necessity for fairness and equality in AI systems. International standards, such as those in the European Union's General Data Protection Regulation (GDPR), also stress the importance of non-discrimination and fairness in AI applications. These frameworks require AI systems to be transparent, accountable, and fair, to prevent discrimination and ensure equitable outcomes in healthcare.

Studies have shown that AI models trained on language data often replicate human biases, including those related to race and gender, thereby reinforcing societal prejudices. In healthcare, such bias can result in inaccurate medical diagnoses and treatment recommendations, disproportionately affecting underprivileged communities.

Biased AI systems can also undermine trust between patients and healthcare providers, and trust is a cornerstone of effective healthcare delivery. If patients and healthcare professionals doubt the demographic fairness and accuracy of AI-driven diagnostics and treatments, they may be reluctant to use these technologies, hindering the adoption of innovations that could otherwise reduce costs and widen access to healthcare.


How you can help ensure fair AI in healthcare

Addressing AI bias in healthcare requires coordinated efforts from developers, healthcare providers, policymakers, and the public.

Developers must train AI models on diverse datasets to avoid bias, and validate them for demographic fairness and accuracy. Transparency in the development process is crucial for building trust and accountability, and methodological tools for identifying biases in AI modelling processes can be pivotal in improving demographic fairness.

Healthcare providers play a significant role by integrating AI ethically into their practices, continuously monitoring its performance, and educating staff on its responsible use. Providers can help adjust AI models to compensate for biases, which can be more cost-effective than retraining humans, ensuring that medical decisions are as unbiased and fair as possible.

Policymakers contribute by setting standards and guidelines that promote inclusive research and fair AI practices. They must engage with stakeholders to create and enforce regulations that ensure AI systems benefit healthcare while addressing potential biases.
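A concrete first step for developers and providers is a per-group audit of a model's predictions. The sketch below is illustrative only; the group labels, the records, and the `audit_by_group` helper are hypothetical, not part of any specific tool. It compares accuracy and positive-prediction rate across demographic groups, two of the simplest quantities a fairness review would examine:

```python
from collections import defaultdict

def audit_by_group(records):
    """Per-group accuracy and positive-prediction rate: a minimal
    fairness audit sketch, not a full bias assessment."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "flagged": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(y_true == y_pred)
        s["flagged"] += int(y_pred == 1)
    return {g: {"accuracy": s["correct"] / s["n"],
                "positive_rate": s["flagged"] / s["n"]}
            for g, s in stats.items()}

# (group, true label, model prediction) -- hypothetical audit data
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
report = audit_by_group(records)
print(report)
```

A gap between groups in either metric does not by itself prove the model is unfair, but it flags exactly the kind of disparity that warrants further investigation before clinical deployment.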

To help develop and implement fair AI systems, members of the public can:

Stay informed: Keep up to date on AI and its potential biases. Understanding basic concepts, and the specific ways AI can exhibit bias, is crucial. Regularly read trusted news sources and research publications to follow the latest developments in AI and healthcare. This knowledge will help you make informed decisions and contributions.

Participate in public activities: Engage actively in public discussions about AI in healthcare. Your participation in forums, consultations, and discussions can significantly influence policies to ensure AI systems are demographically fair and equitable. Advocate for transparency by requesting clear explanations from organisations on how AI models are trained, validated, and monitored. This transparency is essential for building accountability and trust in AI systems. Support research that includes diverse populations by donating, volunteering, or promoting the work of institutions focused on health equity.

Engage with policymakers: Advocate for fairness and accountability in AI by engaging with local and national policymakers. Contact your representatives to highlight the importance of incorporating diverse data in AI development, and support policies that mandate regular audits of AI systems so that biases are identified and decisions about deploying AI models are well informed. Participate in advocacy groups or campaigns dedicated to this cause, make your voice heard in public discussions and consultations, and promote the establishment of clear accountability structures within AI regulations by engaging in public comment periods.

By joining forces, we can ensure AI delivers benefits for everyone. Thoughtful and purposeful development of AI will create technologies that reflect our shared values, aiming for a future where healthcare is more affordable, accessible, and fair for all.

The authors used generative AI tools, including ChatGPT-4 and Gemini 1.5, for language improvement. They reviewed the content and take full responsibility for the final text’s accuracy and coherence.


If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR