Joint Commission, CHAI Publish AI Use Guidelines
FAQs:
- What is the new AI guidance from the Joint Commission and the Coalition for Health AI?
It’s the first joint framework for the responsible use of AI in healthcare, offering principles to ensure safe, transparent, and effective adoption.
- What are the key elements of responsible AI use in healthcare?
The framework highlights seven areas, including governance, patient privacy, data security, bias detection, training, and continuous monitoring.
- How does this guidance relate to HIPAA and data protection?
It reinforces the need to comply with HIPAA’s Security Rule and recommends safeguards such as encryption, access controls, and strict vendor agreements.
The Joint Commission and the Coalition for Health AI (CHAI) issued their first joint guidance on the responsible use of AI in healthcare in September, outlining principles to help hospitals and health systems adopt the technology safely.
The Responsible Use of AI in Healthcare (RUAIH) document offers guidance on applying artificial intelligence in clinical settings, focusing on patient safety, data privacy, and seamless integration into care delivery.
Responsible AI Use is Key to Reducing Risks
While AI can support diagnostics, personalize treatments, predict outcomes, and reduce administrative workload, it also carries risks: AI tools can expose patient health data to unauthorized access and new security vulnerabilities, making data protection even more critical.
The HIPAA Security Rule plays a vital role in setting guidelines for safeguarding electronic protected health information (ePHI). The RUAIH outlines recommended practices that include structured deployment, validation, and testing of AI tools, along with safeguards for patient data.
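To make the "access controls" safeguard concrete, here is a minimal, hypothetical sketch of a role-based access check with audit logging for ePHI. The roles, record IDs, and in-memory log are illustrative assumptions, not a compliant implementation; a real system would use persistent, tamper-evident audit trails and encryption at rest.

```python
# Hypothetical sketch: role-based access control with audit logging,
# illustrating one Security Rule safeguard. Roles, record IDs, and the
# in-memory log are illustrative only.
from datetime import datetime, timezone

ALLOWED_ROLES = {"physician", "nurse", "compliance_officer"}
audit_log = []  # a real system would use a persistent, tamper-evident store

def access_ephi(user, role, record_id):
    """Grant or deny access to an ePHI record and record the attempt."""
    granted = role in ALLOWED_ROLES
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "granted": granted,
    })
    if not granted:
        raise PermissionError(f"{user} ({role}) denied access to {record_id}")
    return f"record {record_id} released to {user}"

print(access_ephi("dr_lee", "physician", "MRN-1001"))
```

Every attempt, granted or denied, lands in the audit log, which is what makes later security assessments possible.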
7 Key Elements of Responsible Use of AI in Healthcare
- AI Policies and Governance Structures
• Establish formal policies and governance structures to oversee AI tool implementation, ensuring accountability and risk management.
• Include individuals with relevant expertise to lead AI initiatives and regularly update the governing body on AI use and outcomes.
- Patient Privacy and Transparency
• Implement policies to protect patient data and ensure transparency regarding AI tool usage.
• Educate patients about AI’s role in their care and obtain consent when necessary.
- Data Security and Data Use Protections
• Protect data from unauthorized access through encryption, access controls, and regular security assessments.
• Define permissible uses of data in agreements with third parties and prohibit re-identification of de-identified data.
- Ongoing Quality Monitoring
• Monitor AI tools post-deployment to ensure they perform safely and effectively, adapting to changes in data and algorithms.
• Develop policies for regular validation and testing of AI tools, assessing their reliability and outcomes.
- Voluntary, Blinded Reporting of AI Safety-Related Events
• Encourage confidential reporting of AI-related safety events to facilitate learning and quality improvement without compromising patient privacy.
• Utilize existing reporting structures to track AI incidents and share insights across organizations.
- Risk and Bias Assessment
• Implement processes to identify and address risks and biases in AI tools, ensuring they are evaluated for diverse populations.
• Regularly monitor AI systems to detect and mitigate biases that may affect patient care.
- Education and Training
• Provide training for healthcare providers on the proper use of AI tools, emphasizing their benefits and limitations.
• Foster AI literacy among staff to promote informed adoption and safe integration into clinical workflows.
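The risk-and-bias element above can be sketched in code: one simple way to check an AI tool for bias post-deployment is to compare its accuracy across patient subgroups and flag gaps above a threshold. The subgroup labels, sample data, and 10-point threshold below are illustrative assumptions, not part of the RUAIH guidance.

```python
# Hypothetical sketch: per-subgroup accuracy check for an AI tool.
# Subgroup names, data, and the max_gap threshold are illustrative.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_bias(accuracies, max_gap=0.10):
    """True if accuracy between any two subgroups differs by more than max_gap."""
    values = accuracies.values()
    return max(values) - min(values) > max_gap

# Toy predictions: group_a is right 3 of 4 times, group_b only 2 of 4.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
acc = subgroup_accuracy(records)
print(acc)             # per-subgroup accuracy
print(flag_bias(acc))  # the 25-point gap exceeds the 10-point threshold
```

Run on a schedule against fresh production data, a check like this supports the "regularly monitor" bullet; flagged gaps would then feed the governance body's review process.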
From Request to Framework: How the Guidelines Emerged
The Joint Commission began drafting the guidance in 2024 in response to hospitals and health systems requesting direction on AI implementation. CHAI, a nonprofit founded by clinicians, also contributed; its members include academic health systems, research organizations, and AI experts from institutions such as Mayo Clinic and Duke University, working to ensure AI models are thoroughly evaluated before adoption.
The guidelines were also developed with input from hospitals, health systems, technology providers, and patient advocates, and were informed by existing frameworks like the National Institute of Standards and Technology’s AI Risk Management Framework and the National Academy of Medicine’s AI Code of Conduct.
Ensure HIPAA Compliance in Generative AI
Third-Party Generative AI in Health Care: Balancing Innovation with the HIPAA Security Rule is a practical, expert-driven guide designed to help health care organizations navigate the adoption of generative AI. Backed by over 20 years of HIPAA compliance expertise, it provides a clear structure and actionable strategies for implementing AI tools that align with the HIPAA Security Rule and the National Institute of Standards and Technology (NIST) framework.