Yuval Segev

The Governance and Security Risks Health Systems Need to Address Before AI Adoption

While AI offers tremendous potential to improve data management and streamline patient care, the technology also introduces a variety of risks that must be carefully addressed at every stage of the adoption process – from strategy and integration to change management and governance. 

Although no single governance or security framework for clinical AI exists today, the World Health Organization's (WHO) AI ethics framework provides a valuable roadmap. It highlights the importance of documented governance, risk management and compliance programs to mitigate a wide range of risks.

The WHO AI Ethics Framework: An Overview

The WHO AI ethics framework offers a structured approach to managing AI-related risks in healthcare, focusing on several critical areas:

  • Human-Centered Design: AI systems should support, not replace, human decision-making, with clinicians remaining central to healthcare delivery.
  • Transparency and Explainability: AI decisions must be transparent and explainable to build trust and allow healthcare providers to validate outputs.
  • Data Privacy and Security: Rigorous data protection standards are crucial to safeguard sensitive healthcare information.
  • Regulatory Compliance: AI systems must meet global regulations, such as HIPAA and GDPR, to uphold legal and ethical standards.

Potential Risks to Consider with Large Language Models (LLMs)

At more than 100 pages, the WHO AI ethics framework is robust. While it’s always a good idea to have a governance and security expert as part of your health system’s AI team, you don’t need to read the entire document to understand high-level themes that should be prioritized in the early stages of AI planning.

These risks specifically pertain to LLMs, a subset of AI used to assist clinical decisions, enhance patient engagement, automate administrative tasks and improve diagnostic accuracy by analyzing data and medical imaging:

  1. Overestimation of Benefits: There’s a risk of overvaluing LLM potential while underestimating challenges, such as safety, efficacy and practical utility, potentially leading to unrealistic expectations.
  2. Accessibility and Affordability: The potential costs associated with AI tools mean only well-resourced facilities can afford them, potentially widening care quality disparities.
  3. System-Wide Biases: LLMs trained on extensive datasets may unintentionally encode biases, impacting decisions across healthcare delivery.
  4. Impact on Labor: LLM integration may shift patient care workflows, reducing some administrative roles and requiring staff to adapt to AI-driven tasks.
  5. Dependence on Ill-Suited LLMs: Health systems could become overly reliant on poorly maintained LLMs, especially in low- and middle-income countries, risking patient trust and data protection. 
  6. Cybersecurity Risks: LLMs could be vulnerable to cyberattacks, which could compromise data security and erode trust in these systems.

How to Set Yourself Up for Success

In preparation for successful AI adoption, health systems must proactively address risk. The first step is understanding the most common sources of risk, such as the fragmented landscape of AI developers and vendors. Each new developer or vendor increases complexity, introducing additional regulatory, data security and clinical alignment demands.

To address these complexities, health systems should consider consolidating AI solutions on a platform that streamlines governance, risk management and compliance. The result: a clear view of risks like over-reliance and fragmented workflows, allowing for effective bias tracking and a more secure, compliant AI implementation.

By aligning with the WHO AI ethics framework and selecting a platform-based solution with a partner who adheres to global standards, health systems can manage AI’s complexities while focusing on what matters most: delivering high-quality patient care.

Get started by downloading our resource guide spotlighting selected information from the WHO AI ethics framework and the Open Worldwide Application Security Project (OWASP) AI security guidelines. Have additional questions? We're here to help.

