Yuval Segev

The Risks of Not Being Proactive with Healthcare Cybersecurity When It Comes to AI

The recent cyberattack on Ascension Health highlighted the vulnerabilities of connected healthcare systems and the urgent need to prioritize cybersecurity in long-term strategies. A lack of proactive measures puts both organizational reputation and, even more concerning, patients at risk.

Recent Lessons in Healthcare Cyber Vulnerability

In May 2024, Ascension Health, one of the largest U.S. healthcare systems, experienced a ransomware attack that took its IT network offline, disrupting patient care in 15 states. Sensitive data was exposed, and critical technology like electronic health record (EHR) and phone systems became unavailable.

This incident was part of a growing trend of cybersecurity breaches in healthcare, underscoring the need for proactive security measures. Ascension was praised for its response, including fast public disclosure, a dedicated update website and clear, frequent communication. Though leaders haven’t confirmed using a crisis response plan, it’s unlikely that such swift, high-level coordination was achieved without one in place. In fact, John Riggi, national cybersecurity advisor for the American Hospital Association, referred to Ascension’s response as a “role model” for other organizations.

As healthcare embraces new technologies like clinical AI, cybersecurity must evolve to address the unique challenges that come with it. Clinical AI depends on patient data, requiring health systems to share this information with AI developers for accurate performance. 

This presents a classic risk-reward challenge: while data is the foundation for AI’s capabilities, it simultaneously introduces considerable cybersecurity vulnerabilities. Without robust AI governance, including threat modeling and secure model training, organizations expose themselves to AI-specific risks, such as adversarial model manipulation and unintended bias in AI decision-making.

Legacy security methods, such as firewalls and virus scans, are insufficient to address the dynamic, sophisticated nature of emerging threats. This calls for a more adaptive, integrated security approach.

Managing multiple AI partners will complicate this, as each may offer different solutions that don’t always integrate seamlessly, resulting in fragmented security protocols. This lack of coordination can create dangerous gaps.

An enterprise-wide AI platform that consolidates AI solutions into a unified system can address these challenges by streamlining data integration and security monitoring. This centralized approach can help identify and mitigate threats more effectively.

However, the AI integration method is just one aspect of a proactive cybersecurity strategy. Strong governance frameworks, including regular risk assessments, data handling protocols and continuous vendor monitoring, are essential to ensure security and compliance across all AI partnerships.

Proactively Engaging Partners: What Should You Be Asking?

With so much sensitive information being exchanged, having proactive cybersecurity conversations with AI partners is critical.

When to Start: It’s never too early to engage potential and current AI partners in cybersecurity discussions. Ideally, these conversations should start at the very beginning of any business relationship, during the planning and strategy phase. Security should be as important as any other quality indicator.

What to Ask: When evaluating a potential partner, ask for references related to cybersecurity. What protocols do they have in place? Have they experienced any data breaches? If so, how were they handled? Do they have an incident response plan? A strong partner will have transparent answers and a proven track record of security compliance.

Who to Include: Involve your IT, legal and compliance teams in these conversations. Bringing different departments to the table ensures that all areas of vulnerability are covered — from technical and patient care controls to legal safeguards.

Six Categories for Evaluating Your AI Partner's Compliance

To ensure your partners are up to par, ask them about these cybersecurity categories:

1. Regulatory Compliance

  • Certifications: Verify AI-specific certifications, such as FDA compliance for AI/ML devices, and ensure adherence to HIPAA, NIST 800-66 r2 and GDPR.
  • Security Policies: Understand their security frameworks and encryption protocols. Confirm they have a security and privacy trust center.

2. Security Measures

  • AI-Specific Protections: Look for privacy-by-design features, such as pseudonymization, and ensure strong encryption for data at rest and in transit (a brief sketch of these ideas follows this list).
  • Incident Response Plan: Review their cyber incident response plan. Ensure they have clear procedures for handling incidents and can react quickly to breaches.
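
As a rough illustration of what privacy-by-design can look like in practice, here is a minimal Python sketch that pseudonymizes a patient identifier with a keyed hash and encrypts the record before it leaves the health system. The field names, keys and record structure are illustrative assumptions, not any vendor's actual implementation; in practice, keys would be held in a managed secret store.

```python
import hmac
import hashlib

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Illustrative keys only; in practice both would come from a managed secret store or KMS.
PSEUDONYM_KEY = b"replace-with-a-secret-from-your-vault"
ENCRYPTION_KEY = Fernet.generate_key()


def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()


def encrypt_payload(payload: bytes) -> bytes:
    """Encrypt the record before it leaves the health system (data in transit and at rest)."""
    return Fernet(ENCRYPTION_KEY).encrypt(payload)


# Hypothetical record prepared for sharing with an AI vendor.
record = f'{{"patient": "{pseudonymize("MRN-0012345")}", "finding": "suspected pulmonary embolism"}}'
ciphertext = encrypt_payload(record.encode())
```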

3. Experience in Healthcare

  • Track Record: Review past successes, case studies and history with healthcare AI. Research any prior breaches or data issues to gauge how they will handle real-world scenarios.
  • Industry Endorsements: Look for validation from other health systems or endorsements from reputable healthcare authorities.

4. Third-Party Audits

  • AI Security Audits: Ensure that regular, independent audits cover AI-specific vulnerabilities, including data integrity and model security.
  • Regular Testing: Confirm they conduct frequent security and penetration testing to identify and address system weaknesses.

5. Data Management

  • Lifecycle Management: Ensure they handle the entire AI data lifecycle — from collection to deletion — adhering to privacy-by-design principles in healthcare.
  • Data Minimization: Verify that the AI vendor implements strict data minimization practices, limiting access to only the essential data the AI needs to perform its function (see the sketch below).
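
To make data minimization concrete, the short Python sketch below strips a record down to an explicit allow-list of fields before it is shared with a vendor. The field names are hypothetical; the point is that anything not on the list never leaves your environment.

```python
from typing import Any

# Hypothetical allow-list: only the fields the AI model actually needs ever leave your environment.
ALLOWED_FIELDS = {"study_id", "modality", "body_part", "pixel_data_reference"}


def minimize(record: dict[str, Any]) -> dict[str, Any]:
    """Drop every field that is not explicitly on the allow-list."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}


full_record = {
    "study_id": "ST-1001",
    "modality": "CT",
    "body_part": "chest",
    "pixel_data_reference": "imaging-archive/ST-1001",
    "patient_name": "Jane Doe",        # direct identifier, never shared
    "insurance_number": "XX-99-1234",  # not needed for inference, never shared
}

vendor_payload = minimize(full_record)  # contains only the four allowed fields
```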

6. Risk Management

  • Risk Assessment: Conduct thorough risk assessments to identify vulnerabilities that could affect your organization. Continuously assess risks throughout the AI lifecycle.
  • Ongoing Monitoring: Establish ongoing performance monitoring to ensure the AI system consistently meets clinical safety and effectiveness standards, and schedule regular audits to maintain compliance (a rough sketch follows).
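
As a simple illustration of ongoing performance monitoring, the sketch below checks a window of adjudicated results against agreed performance floors and raises alerts when the deployed model drifts. The thresholds, metrics and numbers are placeholders; real targets would come from your governance framework and the vendor's validated performance claims.

```python
from dataclasses import dataclass

# Placeholder performance floors; real targets come from your governance framework
# and the vendor's validated performance claims.
MIN_SENSITIVITY = 0.90
MAX_FALSE_POSITIVE_RATE = 0.10


@dataclass
class MonitoringWindow:
    """Adjudicated results for a deployed model over one review period."""
    true_positives: int
    false_negatives: int
    false_positives: int
    true_negatives: int


def review(window: MonitoringWindow) -> list[str]:
    """Return alerts whenever the deployed model drifts below the agreed floors."""
    alerts = []
    sensitivity = window.true_positives / (window.true_positives + window.false_negatives)
    false_positive_rate = window.false_positives / (window.false_positives + window.true_negatives)
    if sensitivity < MIN_SENSITIVITY:
        alerts.append(f"Sensitivity {sensitivity:.2f} is below the {MIN_SENSITIVITY} floor")
    if false_positive_rate > MAX_FALSE_POSITIVE_RATE:
        alerts.append(f"False-positive rate {false_positive_rate:.2f} exceeds the {MAX_FALSE_POSITIVE_RATE} ceiling")
    return alerts


# Example: one month of adjudicated results from a hypothetical triage model.
print(review(MonitoringWindow(true_positives=180, false_negatives=25, false_positives=40, true_negatives=900)))
```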

A Proactive Approach to Cybersecurity Is Non-Negotiable

As AI technology continues to evolve, healthcare organizations must adopt a comprehensive, multi-layered cybersecurity approach. Enterprise-wide platforms, like the Aidoc aiOS™, coupled with a robust governance framework, can help protect sensitive data, maintain trust and ensure compliance. Proactive engagement with partners, careful vendor evaluation and continuous monitoring are crucial to minimizing risks. 

Cybersecurity is not a one-time task but an ongoing practice that must evolve with emerging threats. By taking a proactive, enterprise-wide approach, healthcare organizations can stay ahead of potential risks and ensure they’re well-prepared for future challenges.
