HHS Launches New AI Strategy to Modernize U.S. Healthcare and Public Health Systems
The U.S. Department of Health and Human Services (HHS) has released an Artificial Intelligence Strategy to expand AI use across internal operations, research programs, health agencies, and public health activities. Announced on December 4, the plan outlines how AI tools may be used across the HHS workforce and federal services that support health and human services programs nationwide.
“AI is a tool to catalyze progress,” said Clark Minor, HHS’ acting Chief Artificial Intelligence Officer, who is leading the initiative. “This Strategy is about harnessing AI to empower our workforce and drive innovation across the Department.”
Five Core Pillars of HHS’ AI Strategy
To guide adoption and oversight, the strategy outlines five key pillars:
- Ensure governance and risk management for public trust.
- Design infrastructure and platforms for user needs.
- Promote workforce development and burden reduction for efficiency.
- Foster health research and reproducibility through gold standard science.
- Enable care and public health delivery modernization for better outcomes.
Collectively, these pillars are intended to make HHS more efficient while improving outcomes for patients, communities, and researchers.
“OneHHS” Collaboration to Build Shared AI Infrastructure Across Federal Agencies
For the first time in the Department’s history, this Strategy’s “OneHHS” approach invites all HHS divisions—including the Centers for Disease Control and Prevention (CDC), the Centers for Medicare & Medicaid Services (CMS), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and others—to collaborate on shared AI infrastructure and processes.
According to the department, the collaborative approach is intended to consolidate efforts across agencies, align workflows, and establish common cybersecurity and technology standards.
Next Steps: Pilots, Partnerships, and AI Development for Federal Health Programs
HHS emphasized that this strategy is an early framework — the first step in building an “AI-fueled enterprise.” Next, the department plans to:
- Launch internal pilots and enterprise-wide use cases
- Partner with private-sector innovators, research institutions, and technology companies to co-create solutions
- Expand the strategy as AI evolves, updating standards and infrastructure to match emerging risks and opportunities
The department’s recent “Caregiver AI Prize Competition” is one example of ongoing initiatives that involve private-sector participation in AI solutions for health and social-care challenges.
Potential Impact on Public Health, Research, and Federal Workforce Efficiency
According to HHS, potential areas of impact for future AI use include:
- Public-health surveillance and disease-trend identification
- Administrative task automation for federal employees
- Research capabilities that rely on data analysis and modeling
- Data governance practices related to safety, transparency, and ethical standards
The strategy outlines how AI may be incorporated into federal health programs over time and highlights the processes HHS intends to use to manage, monitor, and expand its use.
Related Guidance on AI and Healthcare Compliance
For additional insights into how AI is shaping healthcare operations, regulation, and security practices, explore these related articles.
- The AI Security Surge: Why Compliance with the HIPAA Security Rule Can’t Wait examines how accelerating AI adoption is increasing data security risks and underscores the urgency of HIPAA Security Rule compliance.
- Joint Commission Releases AI Guidance for Healthcare outlines new expectations for AI oversight in clinical environments and patient safety.

Together, these resources provide a broader context for AI’s expanding role across the healthcare landscape.
As a Leader in HIPAA Security, PrivaPlan Offers Professional Guidance in AI and Cybersecurity
While the announcement offers insight into HHS’ high-level direction for internal management, it does not yet clarify how AI tools should be aligned with the HIPAA Security and Privacy Rules, and that is precisely where healthcare organizations need to stay alert.
As you explore or expand your use of AI, PrivaPlan’s HIPAA experts strongly encourage you to maintain strict cybersecurity controls for any systems that process or interact with sensitive data, particularly those that manage, receive, or store protected health information (PHI).
Routine, role-appropriate workforce training remains one of the most effective safeguards, especially as AI tools evolve quickly and workforce members may overestimate what these systems can do securely.
Finally, be sure your AI systems — whether in pilot mode or fully operational — are included in your annual security risk analysis. Regular cybersecurity reviews can help prevent much more complex issues in the future.
AI is opening remarkable doors in healthcare, but it thrives best when paired with vigilance, clarity, and smart governance.
Ensure HIPAA Compliance in Generative AI
Third-Party Generative AI in Health Care: Balancing Innovation with the HIPAA Security Rule is a practical, expert-driven guide designed to help health care organizations navigate the adoption of generative AI. Backed by over 20 years of HIPAA compliance expertise, it provides a clear structure and actionable strategies for implementing AI tools that align with the HIPAA Security Rule and the National Institute of Standards and Technology (NIST) framework.


