Let’s take a moment to address the elephant in the room: AI risk. The hype surrounding generative AI tools like ChatGPT is encouraging more people and organizations to use them. This creates a clear need to address business fundamentals like ethics, intellectual property protection, security, and data privacy. That is, business leaders should actively consider both sides of the AI coin – its ability to drive greater productivity and efficiency, and how to effectively govern the risks it may introduce.
AI is a new risk domain. This creates a prime opportunity for the chief compliance officer (CCO) to take the lead. The governance, risk and compliance (GRC) framework within the CCO’s existing program should be the starting point for proactively addressing AI risks.
Understanding AI Risk
As more companies begin to use AI, it’s important to assess what risks this will pose. Could confidential information be shared on the internet? Is the proliferation of false information a possibility? Is there a risk of plagiarism? Will it negatively impact customer or employee experience?
These are all real, possible outcomes. Yet with proper oversight and governance, AI’s risk is well worth the reward. Just as most of us are not yet willing to hop into a driverless car, AI should not be allowed to run wild within your organization. With a well-structured GRC program and a CCO leading the charge, however, organizations are well equipped to understand and evaluate AI as a risk domain.
After all, your GRC program should be well-versed in and already adhere to risk management frameworks – and in an ideal world, it should already be working with other risk stakeholders such as information technology and cybersecurity. By working closely with those stakeholders, aligning to established frameworks such as the NIST AI Risk Management Framework, and establishing proper controls, organizations can realize the benefits of AI while minimizing risk.
Understanding AI Governance
Chances are, AI is already in use in some unofficial capacity – also known as “shadow AI” – at your organization, either directly or through third parties. While this can be harmless, it is important to understand these uses and ensure they align with company ethics and security standards. AI is also likely already safely embedded in many products and software you use; those uses should nonetheless be included in your assessment of how AI is used across the organization.
While there are many reasons the CCO should work closely with their IT and cybersecurity counterparts, if they aren’t already, AI governance is a critical catalyst to do so. It can help break down internal risk silos as AI risks are evaluated enterprise wide.
Given the breadth of impact AI can have on the business, the first step in governance should be assembling a steering committee to govern how AI is built and applied. This committee should remain flexible, as the technology – and your use of it – will evolve over time. Involving stakeholders such as the CISO, CIO, and General Counsel will help holistically account for bias, privacy and security concerns, legal risks, and customer satisfaction.
AI governance is a long-term strategy that will evolve over time and must include cross-departmental involvement.
Compliance is coming
Part of the case for starting now is that more regulatory oversight is in sight. Some laws governing how AI is used are already in place, with broader regulation coming before we know it. Fortunately, your CCO is already the right person to oversee a deluge of quickly changing and broadly applicable legal requirements, and several frameworks already exist or are in development to help those responsible for AI governance prepare for the future.
For example, the European AI Alliance recently released a framework for AI risk management that includes:
- Guidance in addressing sector-specific (specialized) risks that align with common (generalized) risk management principles
- Recommendations for risk management systems designed to easily adapt to evolving risks, specific use cases, and the changing regulatory environment
- Recommendations to solicit representative feedback from diverse stakeholders – including experts, practitioners, users and affected communities – to include various perspectives
- Recommendations for designing risk management frameworks that are easily understood by experts and non-experts alike
- Suggestions for continuous monitoring and updating to account for unforeseen and evolving risks with a long-term orientation for risk management
While the framework’s recommendations are not binding, they will help organizations orient around a principles-based approach to AI risk management and position themselves well for the future of AI regulation.
Unless you want to play catch-up, now – before regulation and enforcement of AI use arrive – is the time to get your proverbial house in order. Drawing on the organization’s experience managing regulatory change, the steering committee should stay abreast of this quickly evolving landscape to make sure the organization stays on the right side of any new rules or laws. With a technology this important, and with regulatory attention all but promised, starting early is to your benefit.
Is AI going to steal your job?
The very real question of whether AI is going to replace people is another topic of conversation one cannot help but hear. AI can and should be used as a powerful tool to improve efficiency and quality – not to replace people.
But many employees still have this fear. Leading through the AI revolution means getting ahead of the narrative and educating your workforce on what positive benefits will come from using AI – and ensuring they don’t feel their job is threatened.
Like any new technology, AI can be seen as scary, causing some to fear adopting it. Conversely, it can be viewed as exciting, leading others to adopt it without proper care. The CCO can help the business adopt this new technology in a smart, practical way – the way the organization intends.
So, get ahead of the regulations, define the path forward, understand the roles leadership must play, and encourage your organization to accelerate AI rather than being afraid of it.
AI is an opportunity for the CCO to show how they enable the business to go faster, without losing control. By integrating this new risk domain into the GRC program, the CCO can help provide the business with appropriate guardrails to accelerate out of the turns and propel ahead of the competition. If you don’t, you are likely to be left in the dust.