11801
Blog
Yuval Segev

The Responsible Path: How Risk Frameworks and AI Governance Work Together

Clinical AI represents a paradox. While its potential to revolutionize patient care is undeniable (and increasingly being proven), healthcare leaders remain acutely aware of the risks involved with implementation. 

A recent survey of 3,000 leaders from 14 countries further illustrated this point with 87% of respondents expressing concern about AI data bias widening health disparities. Further, nearly 50% noted AI has to be transparent to build trust.1 

These concerns highlight the need for a robust overall AI governance structure, with a particular emphasis on data management, which is typically addressed through dedicated risk management frameworks.

Risk management frameworks serve as the operational arm of AI governance, translating broad governance principles into specific, actionable steps that healthcare organizations can implement. They provide structured methodologies for identifying, assessing, and mitigating potential risks, ensuring that AI applications are not only innovative but also safe and trustworthy.2,3,4

Each framework offers unique guidance tailored to specific aspects of AI risk management. For instance:

  • The OWASP AI framework: Offers tools to identify and mitigate security vulnerabilities in AI systems. 
  • ISO 42001: Provides a holistic approach to managing information security across all operations, not limited to AI, ensuring a comprehensive security strategy. 
  • ISO/IEC 23894: This standard focuses on the governance of AI, providing guidelines for the development, deployment and operation of AI systems with a strong emphasis on risk management, accountability, and transparency.
  • NIST AI RMF: The NIST Artificial Intelligence Risk Management Framework (AI RMF) provides a voluntary, consensus-based approach to managing risks associated with AI, covering aspects such as fairness, bias and security.
  • WHO Best Practices: Focus on public health, offering guidelines that address both ethical and practical challenges in deploying AI in healthcare settings.

By choosing and implementing these frameworks, healthcare organizations can create a robust governance structure that aligns with their specific needs and ensures the responsible use of AI technologies.

Common AI Risk Frameworks

Similar to security standards, various international and country-specific risk frameworks exist. While these frameworks provide valuable direction, AI governance committees should carefully select the framework that best aligns with their specific implementations.

OWASP AI

The Open Worldwide Application Security Project (OWASP) is an open-source initiative providing guidance on everything from designing secure AI models to mitigating data threats. With tools, like the AI Exchange Navigator and the LLM Top 10, health systems can effectively identify vulnerabilities, implement safeguard and stay ahead of the evolving AI security landscape.

ISO 42001

This framework from the International Organization of Standardization (ISO) helps health systems establish and maintain a responsible Information Security Management System (ISMS). It encompasses a broader scope than just AI, looking holistically at security measures across all operations.

WHO Best Practices

Focused on public health, these World Health Organization (WHO) best practices offer guidelines specifically tailored to healthcare applications, addressing both ethical considerations and practical AI deployment challenges.

How Risk Frameworks Support AI Governance

AI governance should oversee the entire lifecycle of AI adoption — from selection to deployment and beyond. Risk frameworks can help governance committees build strategies aligned to several key areas of AI monitoring:

  • Defining risk comfort
  • Establishing a risk assessment process
  • Monitoring and evaluation mechanisms, specific to data
  • Promoting a culture of risk awareness for end-users
  • Providing structure for explainability and transparency, specific to AI decision-making

By integrating risk frameworks into AI governance, health systems foster a culture of responsible innovation and address concerns that have hindered AI adoption. 

References:

1 Future Health Index 2024. (2024b). In Future Health Index 2024. https://www.philips.com/c-dam/corporate/newscenter/global/future-health-index/report-pages/experience-transformation/2024/first-draft/philips-future-health-index-2024-report-better-care-for-more-people-global.pdf

2 Catron, J. (2023, December 1). The Benefits of AI in Healthcare are Vast, So Are the Risks. Clearwater. https://clearwatersecurity.com/blog/the-benefits-of-ai-in-healthcare-are-vast-so-are-the-risks/

3 AI Risk Management Framework. (n.d.). Palo Alto Networks. https://www.paloaltonetworks.co.uk/cyberpedia/ai-risk-management-framework

4 Key elements of a robust AI governance framework. (n.d.). Transcend. https://transcend.io/blog/ai-governance-framework

Explore the Latest AI Insights, Trends and Research

Yuval Segev