AI with Boundaries – Balancing innovation and compliance

For many compliance leaders, the rise of artificial intelligence feels both exciting and uncertain. You can see the potential – faster insights, streamlined reporting, predictive analytics – but also the risk of losing control over data, ethics and accountability.

That tension was front and center during the AI with Boundaries session of the Fall into Compliance series, where Matt Kelly, Mary Shirley and Tom Fox explored one essential question: How can SMBs innovate responsibly without overcomplicating compliance?

Their shared perspective was clear: AI will not make compliance harder – only different.

Compliance in the age of AI – shifting the challenge, not the mission

“AI isn’t replacing compliance,” said Matt Kelly. “It’s redefining where judgment and oversight matter most.”

That theme shaped the hour-long discussion. While automation can support monitoring and data analysis, the panelists agreed it can never replace the human capacity for ethical reasoning.

Shirley added a dose of reassurance for leaders feeling overwhelmed. “You don’t have to become a technology expert,” she said. “You just have to understand how AI fits into your risk landscape.”

The panel’s tone was pragmatic, not panicked. AI should be treated as an extension of compliance – another system that requires oversight, training and accountability. As Kelly noted, “You don’t need to reinvent governance. You just need to expand it.”

Light-touch frameworks that work for smaller programs

If large enterprises are building AI councils and policy task forces, where should smaller organizations start? Kelly’s advice: build on what already exists. “Most companies have a risk committee, an IT oversight group or a technology steering process,” he said. “Fold AI into those discussions instead of standing up a brand-new framework.”

The panel offered simple starting points for SMB compliance teams:

  • Add AI risk questions to vendor assessments
  • Require human validation for AI-assisted reporting or document generation
  • Train employees to identify when and how AI tools are used
  • Keep documentation short and clear – policies people can actually follow

Tom Fox emphasized that communication is more important than structure. “You don’t need a 30-page AI policy,” he said. “You need awareness. People should know what tools are approved and who to ask when they’re unsure.”

Shirley echoed that idea: “Most compliance issues come from curiosity, not misconduct. If people feel they can ask questions without fear, they’ll come to you before there’s a problem.”

Those small, steady actions – training, transparency, inclusion – create what the panel called cultural guardrails around AI use.

Addressing ‘shadow AI’ and the risk of the unknown

Of all the topics discussed, shadow AI drew the most questions from attendees. The term refers to employees using AI tools, such as public chatbots or language models, without organizational approval.

The risks are obvious – data leakage, inaccurate outputs and compliance blind spots – but the solution isn’t prohibition. Kelly reminded the audience that “people use AI because they’re trying to be efficient, not malicious.” Banning it outright just pushes it further underground.

When asked whether AI governance should be included in the Code of Conduct, the panel agreed that it should – provided the culture already encourages open conversation. “If your people are already experimenting with AI, don’t ignore it,” Shirley said. “Acknowledge it, define boundaries and show you’re paying attention.”

Fox summarized it candidly: “Every company already has AI. The question is whether compliance knows about it.”

This part of the discussion felt particularly relevant for SMBs, where tools are adopted organically, not centrally. The speakers encouraged leaders to keep policies flexible and communication continuous. Transparency, they said, is the best safeguard against risk.

Evolving with purpose – governance as a living process

The panel ended on a reassuring note: AI governance doesn’t need to be perfect; it just needs to begin. Policies should evolve with use and feedback rather than be perfected up front.

“Governance isn’t a one-and-done task,” Kelly said. “It’s iterative. You’ll revise as you learn.”

Shirley agreed, adding that responsible AI use is as much about curiosity as it is about control. “Ask questions,” she said. “Document your decisions. Keep your people in the loop.”

That balance – awareness without overreaction – was the unspoken thread that ran through the session. The leaders who succeed with AI will be those who treat it as a partnership between technology, compliance and culture.

Bringing it all together

Responsible innovation isn’t about mastering algorithms; it’s about mastering awareness. For small and mid-sized businesses, that starts with simple, repeatable practices: know what’s being used, communicate boundaries and update as you grow.

The AI with Boundaries panel left compliance leaders with one key insight – progress doesn’t require perfection. It requires participation.