AI Frameworks Provide a Roadmap for Compliance Officers

As governments around the world begin to introduce new frameworks and standards addressing the responsible design, development, deployment and operation of artificial intelligence (AI) systems, chief compliance officers (CCOs) have a critical role to play in collaborating with the business to effectively govern AI-related risks.

This article summarizes some of the latest global developments in new AI frameworks and standards, which CCOs should familiarize themselves with, as they collectively provide a roadmap for developing AI risk management principles.

Global guidelines for AI security

On November 27, the United States and 17 other countries endorsed and co-signed landmark global Guidelines for Secure AI System Development. The guidelines aim to help AI developers “make informed cybersecurity decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others,” according to the guidelines’ executive summary.

The guidelines were developed by the U.K.’s National Cyber Security Center (NCSC) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) in cooperation with industry experts and 21 other international agencies and ministries from around the world. “Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties,” the NCSC said.

In a prepared statement, CISA Director Jen Easterly said the release of the guidelines “marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design.”

Each section of the guidelines recommends actionable risk mitigation measures; together, the sections cover the following four key areas of the AI system development lifecycle (a brief illustrative sketch follows the list):

  1. “Secure design: This section covers guidelines that apply to the design of AI systems, including understanding risks and threat modeling, as well as specific topics and trade-offs to consider on system and model design.
  2. Secure development: This section covers guidelines that apply to the development of AI systems, including supply chain security, documentation, and asset and technical debt management.
  3. Secure deployment: This section contains guidelines that apply to the deployment of AI systems, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
  4. Secure operation and maintenance: This section contains guidelines that apply to the secure operation and maintenance of AI systems providing guidelines on actions particularly relevant following deployment of the AI system, including logging and monitoring, update management and information sharing.”
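For teams that want to turn these four areas into something trackable, the lifecycle maps naturally onto a simple checklist structure. The short Python sketch below is a minimal, hypothetical illustration: the stage names mirror the guidelines, but the example controls and the identifiers (LIFECYCLE_CONTROLS, unmet_controls) are our own assumptions, not anything the guidelines prescribe.

```python
# Hypothetical sketch of the guidelines' four lifecycle areas as a simple
# compliance checklist. The stage names mirror the guidelines; the example
# controls and all identifiers are illustrative assumptions, not prescribed.

LIFECYCLE_CONTROLS = {
    "secure design": ["threat model documented", "design trade-offs reviewed"],
    "secure development": ["supply chain vetted", "assets and technical debt tracked"],
    "secure deployment": ["infrastructure hardened", "incident management process defined"],
    "secure operation and maintenance": ["logging and monitoring enabled", "update process in place"],
}

def unmet_controls(completed: set[str]) -> dict[str, list[str]]:
    """Return, for each lifecycle stage, the controls not yet marked complete."""
    return {
        stage: missing
        for stage, controls in LIFECYCLE_CONTROLS.items()
        if (missing := [c for c in controls if c not in completed])
    }

# Example: only the design-stage controls are complete so far.
print(unmet_controls({"threat model documented", "design trade-offs reviewed"}))
```

In practice, a compliance team would replace the example controls with the specific mitigations the guidelines describe for each stage, and feed the checklist from its existing governance, risk and compliance tooling.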

White House AI executive order

The global guidelines came less than a month after President Biden issued a landmark executive order that, in part, establishes new standards to drive the safe, secure and trustworthy development of AI in the United States.

A key part of the executive order states that companies that develop AI models that pose “a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests,” according to a White House fact sheet.

Standards for red-team testing will be set by the National Institute of Standards and Technology (NIST). The Department of Homeland Security will then apply those standards to critical infrastructure sectors and establish an AI Safety and Security Board. “These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the White House fact sheet states. 

The Departments of Energy and Homeland Security will address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. The Department of Commerce, meanwhile, has been tasked with developing guidance for how to detect and watermark AI-generated content.

European AI Alliance framework

Across the pond, the European AI Alliance – an initiative of the European Commission to foster an open policy dialogue on AI – has established an organizational framework that recommends steps for implementing ethical AI principles across the AI development lifecycle.

“In our pursuit of establishing a comprehensive and practical framework for AI accountability, we approach accountability as a set of responsibilities that result from the need to adhere to the ethical principles of AI,” the European AI Alliance said. “These responsibilities or obligations are addressed through a risk management approach, which lays the foundation for defining stakeholders' roles and responsibilities in risk prevention and resolution processes.”

The framework makes the following recommendations for effective and trustworthy AI risk management (a minimal risk-register sketch follows the list):

  • Risk management processes that address sector-specific risks while aligning with common risk management principles. The framework suggests that activities related to strategic decision-making and guidance on the ethical development of AI systems should be defined at the organizational level – for example, the development of an organizational AI governance strategy or the creation of a code of conduct.

  • AI risk management and mitigation measures designed “to easily adapt to evolving risks, use cases and regulatory environments to allow organizations to proactively address them and remain compliant.” These measures, described in greater detail in the framework, should be defined on a project level, the European AI Alliance said.

  • Recommendations for soliciting feedback from a diverse group of stakeholders, “including experts, practitioners, users, and affected communities” to get “a more holistic understanding of the risks that arise and the development of risk management strategies that serve the interests of all stakeholders,” the European AI Alliance said.

  • Recommendations for designing AI tools that can be “easily understood by both experts and non-experts in order to provide clarity and enable effective intervention when needed. Promoting transparency enables stakeholders to actively participate in risk governance and decision-making processes.”

  • Recommendations for the ongoing measurement and monitoring of AI risks: “Here, it is particularly important to consider how the measures impact the ultimate goal of embedding ethical principles in the AI system,” the European AI Alliance said. The framework proposes methods for quantifying ethical principles.
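To make the framework’s distinction between organizational-level and project-level measures concrete, the sketch below models a bare-bones AI risk register in Python. Everything here – the AIRisk fields, the example entries and the due_for_review helper – is a hypothetical illustration under our own assumptions; the framework itself does not prescribe any particular schema or tooling.

```python
# Hypothetical sketch of a minimal AI risk register. The organizational vs.
# project scope and the idea of named stakeholder owners come from the
# framework's recommendations; every field name, entry and helper here is
# an illustrative assumption, not part of the framework itself.

from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    description: str
    scope: str         # "organizational" or "project", per the framework's two levels
    owner: str         # accountable stakeholder, e.g. CCO or CISO
    mitigation: str
    next_review: date  # supports ongoing measurement and monitoring

register = [
    AIRisk("No organization-wide AI governance strategy", "organizational",
           "CCO", "Draft and ratify an AI code of conduct", date(2024, 3, 1)),
    AIRisk("Model decisions opaque to non-expert users", "project",
           "CTO", "Add plain-language explanations of model outputs", date(2024, 2, 1)),
]

def due_for_review(risks: list[AIRisk], today: date) -> list[AIRisk]:
    """Flag risks whose scheduled review date has arrived or passed."""
    return [r for r in risks if r.next_review <= today]

print([r.description for r in due_for_review(register, date(2024, 4, 1))])
```

A real register would live in the organization’s risk platform and be fed by the stakeholder feedback loops the framework describes; the point of the sketch is simply that scope, ownership and review cadence are captured as first-class fields.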

In addition to AI frameworks and guidance, CCOs should also be aware of regulatory developments taking place. The European Union is leading the way with its AI Act, the world’s first comprehensive set of AI rules, whereas the United States has taken a more piecemeal approach. The National Conference of State Legislatures maintains an up-to-date database of states that have introduced AI bills, adopted resolutions or enacted AI legislation – a helpful resource for CCOs.

AI and compliance

One thing is certain: new AI frameworks, standards and regulations will continue to emerge as ethical and responsible AI initiatives take the global stage. Governments around the world have also made clear that AI models posing a threat to national security are top of mind.

CCOs do not have to be technology or security experts to play a critical role in helping the business mitigate AI-related risks. Some of the most important steps that CCOs can take are to:

  • Stay abreast of AI regulatory developments;
  • Understand how the business uses AI, and check that those practices align with the company’s code of conduct or ethical principles; and
  • Collaborate closely with other business unit leaders – including the chief technology officer, chief information security officer, chief privacy officer, general counsel, and the chief marketing officer – to ensure AI risks are identified and addressed holistically across the business.

Consider establishing a cross-departmental technology committee, if one is not already in place. What’s most important is that all department heads literally and figuratively have a seat at the table when addressing how to design, develop, and use AI in a responsible way, and in a way that mitigates AI-related risks.

For more information on the intersection of compliance and artificial intelligence risk governance, check out some of our other related articles:

  • Compliance Made Easy: Using Automation, AI and Seamless Integrations – This post explores content from the NAVEX Next session of the same name.

  • You Don’t Need New Regulation to Have AI Enforcement Risk – This post discusses the recent FTC enforcement action about using artificial intelligence for facial recognition and how to prepare to be compliant with future regulations governing the use of AI.