Last Updated: January 29, 2024
The rapid adoption of artificial intelligence (AI) in healthcare has given rise to a flurry of legislative and guideline measures, each aiming to shape the future of AI applications in clinical environments. Here we delve into the meaning behind the following four key initiatives and interpret their impact on healthcare AI moving forward:
Overview: Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, is a broad order intended to provide a unified federal approach to the safe and responsible use of AI. While not healthcare specific, it does task the Secretary of Health and Human Services (HHS), the Director of the National Institute of Standards and Technology (NIST), the Director of the Office of Science and Technology Policy (OSTP) and others with developing positions and responses in alignment with federal priorities, guidance and values.
Key Takeaway: The EO is as sensible as it is expansive. It emphasizes speed, transparency, safety and the practical application of AI, not only in healthcare but also in social media, finance and other industries, and it creates a broad framework to promote transparency in AI development.
Overview: The American Medical Association's (AMA) guidelines and principles are often seen as a standard-bearer in healthcare. While the AMA cannot enforce compliance, its principles provide crucial considerations for AI use and emphasize the liability implications of AI applications in clinical practice.
Key Takeaway: These principles offer healthcare facilities and clinicians crucial guidance for using AI. They also set a precedent for accountability in AI application, highlighting the importance of responsible implementation and emphasizing enterprise-wide adoption to maximize the benefits of healthcare AI. The AMA principles urge transparency from AI vendors regarding regulatory approval status, any limitations or risks of use, and the data used to train the model. The AMA also makes recommendations about which party (physician, health system or vendor) should be liable for clinical decisions related to AI utilization and AI-derived insights.
Overview: Stemming from a 2021 European study, the ECLAIR guidelines are among the first codified recommendations on what to ask AI vendors, with an emphasis on transparency, bias concerns and the intended use of AI software in radiology settings.
Key Takeaway: ECLAIR acknowledges radiology's position as an early adopter of healthcare technology and offers guidance for evaluating AI vendors, including in-depth risk assessment and clarification of algorithm design specifications. Its emphasis on transparency and data-sharing agreements aligns with the evolving landscape of AI governance.
Overview: The HTI-1 Final Rule is closely aligned with EO 14110 and establishes rules for AI use in healthcare. It emphasizes the definition of AI, reclassifies actionable clinical decision support as decision support interventions (DSIs) and provides a framework for certified health IT developers and modules to assess and manage AI-based predictive DSIs.
Key Takeaway: A huge takeaway of HTI-1 is that it clearly defines what AI is within the scope of federal regulation. One critical shift in language from the draft to the final rule clarifies that scope: predictive DSIs must be "supplied by," rather than merely "enabled by or interfaced with," certified health IT. This not only defines AI in regulation, but also actively addresses implementation requirements for certified health IT developers and modules, and hints at AI's integration into broader healthcare standards.