US Joins Global Effort to Secure AI Systems


International Agreement for Keeping AI Safe is Released

While many Americans polished off their Thanksgiving leftovers on Sunday, the United States and 18 other countries signed off on an international agreement to keep artificial intelligence (AI) safe. Released Nov. 26, the 20-page Guidelines for Secure AI System Development document is a collaboration between the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC).

According to CISA, the document marks a significant step in addressing the intersection of AI, cybersecurity, and critical infrastructure. Though non-binding, one senior U.S. official described it as the first detailed international agreement on how to keep AI safe from rogue actors, pushing companies to create AI systems that are “secure by design.”

4 Key Areas Put Security First

Because AI is developing rapidly, security is often treated as a secondary consideration. The new guidelines make security a core requirement during development and throughout the life cycle of the system, organized around these four key areas:

  1. Secure design. This section contains guidelines covering understanding risks and threat modeling, as well as specific topics and trade-offs to consider in system and model design.
  2. Secure development. This section contains guidelines that include supply chain security, documentation, and asset and technical debt management.
  3. Secure deployment. This section contains guidelines that include protecting infrastructure and models from compromise, threat, or loss, developing incident management processes, and responsible release.
  4. Secure operation and maintenance. This section provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management, and information sharing.

General recommendations, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers, are included in the guidelines, along with prioritizing these actions:

  • taking ownership of security outcomes for customers
  • embracing radical transparency and accountability
  • building organizational structure and leadership so that secure by design is a top business priority

Work Remains on Using and Regulating AI

Still, according to a report by Reuters, the guidelines do not address concerns around appropriate AI use or how the data that feeds these models is gathered.

Furthermore, while European lawmakers are already drafting regulations around AI, a divided U.S. Congress is slowing the passage of effective AI regulation in spite of pressure from the Biden administration to do so. That said, the White House did issue an executive order in October aimed at reducing AI risks to consumers, workers, and minority groups while bolstering national security.

As for PrivaPlan, we continue to work on a set of healthcare-related AI policies and procedures and have deployed our new AI HIPAA/State Privacy and Security Risk Assessments. To learn more, contact us at info@privaplan.com or call 877-218-7707.
