Don’t Outsmart Yourself: AI and Compliance

I’m a big fan of artificial intelligence. The older I get, the more I appreciate that real intelligence needs all the help it can get. Corporate ethics and compliance officers, however, need to pause before betting big on AI as a solution to all our needs.

We can begin by considering where AI (or regtech, or any other name we put on it) claims to offer the most benefit: in financial services. Client onboarding, account creation, due diligence, investor expertise, fiduciary obligations—those are complicated compliance burdens that every financial firm has. At the same time, online brokerage firms are offering “robo-adviser” services where you enter a few demographic and financial criteria, and algorithms then recommend investment options to you. Those services charge a fraction of traditional advisory fees, and profit margins get squeezed.

So if you’re a financial advisory firm, courting wealthy clients around the world who want to act right now, and robo-advisers are sucking away the revenue dollars you need to pay for high-touch services—well, of course you want regtech and AI that can automate as much of your compliance workload as possible. Who wouldn’t?

What financial firms really want to accomplish with AI is a more customer-centric experience: simplify what the customer goes through, so the firm can cut compliance costs and impress customers at the same time.

Beyond financial firms, other businesses want to do the same thing with AI and regtech: cut compliance costs and impress “customers,” even if the customer is an employee or some third party that crosses paths with your business. You want to automate away the chores of robust compliance, and still reap all the benefits of robust compliance.  

Where It All Goes Wrong

The shortcoming of AI is that it cannot appreciate the context of a situation unless someone has already pre-programmed and pre-defined the context the application should consider. That is, AI can be flawless in situations where it knows exactly what to do—and disastrous in situations where it doesn’t. Which is precisely when you need intelligence the most.

AI cannot handle exception requests—and exception requests are a rich vein of headache, uncertainty, and reputation risk for corporations and their compliance programs.

The perfect example of this is—wait for it—airline overbooking. AI programs have become superb arbiters of who on a flight should be bumped. The AI can analyze spending habits, travel patterns, flight connections, and innumerable other variables to find the single passenger, among many hundreds, who should be removed from a flight. Except that the AI can’t account for unique passenger variables that algorithms can’t foresee or quantify: important business appointments, family emergencies, major life events.
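To make that limitation concrete, here is a minimal sketch of what a bump-selection model actually sees. The passenger fields, weights, and scores below are invented for illustration; the point is that “family emergency” never appears among the inputs, so no amount of computation will surface it.

```python
# Minimal sketch of a bump-selection scorer, with invented feature names and
# weights. The model only sees what it was given; "family emergency" or
# "critical business meeting" is never one of its inputs.

from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    fare_paid: float          # lower fare -> more likely to be bumped
    loyalty_tier: int         # 0 = none, 3 = top tier
    has_connection: bool      # a missed connection is costly to the airline

def bump_score(p: Passenger) -> float:
    """Higher score = better candidate to bump, per the model's limited view."""
    score = 100.0 - p.fare_paid * 0.1
    score -= p.loyalty_tier * 20
    score -= 30 if p.has_connection else 0
    return score

passengers = [
    Passenger("A", fare_paid=220, loyalty_tier=0, has_connection=False),
    Passenger("B", fare_paid=650, loyalty_tier=3, has_connection=True),
]

# The algorithm confidently picks passenger "A"; it has no way to know that
# "A" is flying home to a family emergency. That judgment still needs a human.
to_bump = max(passengers, key=bump_score)
print(to_bump.name)
```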

So if your business wants to embrace AI, know the limits that AI has.

Three Points to Consider

1. Appreciate the Difference between Big Data and AI

Big Data combines reams of information and analytics capability to provide insights the human brain might not otherwise find—but it does not make decisions. AI “makes decisions,” although really it selects actions from a predetermined set of choices, based on certain inputs.

Compliance officers (along with CTOs, CIOs, and others who decide how to use AI) need to understand where their company should draw that line: between analysis a computer program can automate, and a decision that humans should make. For example, AI could automate much due diligence. But would you automate the triage of whistleblower retaliation complaints, or the disclosure of FCPA violations?
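One way to picture that line is as an explicit routing rule. The categories, risk scores, and thresholds below are invented for illustration; the design choice they sketch is that some matters are never auto-decided, no matter what the score says.

```python
# Sketch of where one firm might draw the line between automated analysis and
# human decisions. Matter types, scores, and thresholds are hypothetical;
# the routing logic is the point, not the specifics.

AUTOMATABLE = {"routine_vendor_due_diligence", "sanctions_list_screening"}
HUMAN_ONLY = {"whistleblower_retaliation", "potential_fcpa_disclosure"}

def route(matter_type: str, risk_score: float) -> str:
    """Return who handles the matter: the automated pipeline or a person."""
    if matter_type in HUMAN_ONLY:
        return "escalate_to_compliance_officer"   # never auto-decide these
    if matter_type in AUTOMATABLE and risk_score < 0.7:
        return "auto_clear"                       # low-risk, well-defined work
    return "queue_for_analyst_review"             # everything else gets eyes on it

print(route("routine_vendor_due_diligence", 0.2))   # auto_clear
print(route("whistleblower_retaliation", 0.1))      # escalate_to_compliance_officer
```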

2. Understand the Importance of Input Values

The one advantage of AI is speed; once it starts, it can run incredibly fast. The risk is that AI will start from a flawed position and race the company to a conclusion it can’t countenance. You might assess the loans in a portfolio incorrectly and misstate liquidity risk; maybe your social network’s “people you may know” feature quietly excludes people who are minorities. AI works on models and algorithms. If those inputs are based on faulty data, or on assumptions that contradict the firm’s values, you’ll be in trouble sooner than you think.
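Here is a toy example, with invented loan values and haircut assumptions, of how a single flawed input propagates: if the model’s haircut table misjudges how quickly a loan type can be sold, every portfolio it scores inherits the error, at machine speed.

```python
# Toy illustration of "flawed inputs, fast wrong answers." The haircut table is
# a made-up assumption; if it misclassifies a loan type, every downstream
# liquidity figure is wrong, and the model happily reports it at full speed.

loans = [
    {"id": "L1", "type": "commercial_real_estate", "value": 5_000_000},
    {"id": "L2", "type": "consumer_auto", "value": 1_200_000},
]

# Assumed liquidation haircuts (how much value survives a quick sale).
# Treating CRE loans as 95% liquid instead of, say, 60% inflates the
# liquidity estimate for every portfolio the model touches.
HAIRCUTS = {"commercial_real_estate": 0.95, "consumer_auto": 0.90}  # flawed input

liquid_value = sum(loan["value"] * HAIRCUTS[loan["type"]] for loan in loans)
print(f"Estimated liquid value: ${liquid_value:,.0f}")  # overstated, at scale
```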

3. Remember Problems of Scale

I mentioned earlier that financial firms want AI to deliver a more “customer-centric” experience to each client. At a conference I attended, someone said: “Aren’t they just trying to improve customer service? Isn’t that old news?”

Well, yes; but remember the state of affairs decades ago: firms were smaller, so they could offer better client service, even in a paper-based world. So could small airlines, not worried about overbooking; or retailers, who knew specific customers by sight.

The challenge today is to scale up that high-touch environment to large volumes of customers—while each one brings a specific context, and that context might trip up your business. You need to be smart about it. Otherwise a little AI can be a dangerous thing.
