
The Top Five AI Risks Companies Still Get Wrong
There is little doubt that AI risk is a major corporate concern. Yet despite repeated warnings about how the technology, and the data that underpins it, are used, companies remain worryingly exposed to a range of legal, financial, and governance risks: they do not fully understand what they might be liable for, they undervalue the level of risk associated with AI use, and they fail to implement effective controls.
Below are five of the most common problems companies face with regard to AI use:
Companies unaware of their level of AI responsibility
Perhaps the biggest risk companies face is treating AI as a “tool” rather than as a regulated system. Even basic AI tools process personal data, triggering obligations under the EU’s AI Act, the General Data Protection Regulation (GDPR) and other frameworks. Without clear accountability, companies risk non-compliance, reputational damage and—depending on the jurisdiction—potentially large class actions.
Although third-party developers may have contractual or technical responsibilities regarding the AI tools they provide, deployers remain firmly on the hook for how any AI is used, as well as for the data it feeds on. Authorities do not pursue algorithms or the software itself: they pursue the company placing products on the market or operating regulated systems.
A useful rule of thumb for companies to remember is that providers are accountable for the product, and deployers are accountable for how it is used.
Failure to understand AI decision-making
Adopting AI without knowing where it gets its data or how it arrives at its decisions is another common, and potentially catastrophic, error. If a company can’t trace how an AI system reached a decision, it can’t justify that outcome to a regulator or a customer: saying “the AI did it” is simply not going to cut it.
Companies must understand where AI influences decisions that affect regulatory outcomes. Once identified, those decision points need clear governance, documented logic, and human oversight. AI outputs should inform decisions, not replace accountability. AI systems must also be auditable: outputs should be traceable back to inputs, assumptions, and rules, and changes over time should be recorded. This is essential for responding to regulator questions, complaints, or enforcement actions.
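To make the auditability point concrete, here is a minimal sketch of the kind of decision audit trail a deployer might keep. It is illustrative only: the record_decision helper, its field names, and the JSON-lines log file are assumptions made for this example, not a prescribed or standard format.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"  # hypothetical append-only log file

def record_decision(model_id: str, model_version: str, inputs: dict,
                    output: str, rules_applied: list[str],
                    reviewer: str | None = None) -> dict:
    """Append one traceable AI-assisted decision to the audit log.

    Captures what went in, what came out, which documented rules
    applied, and who (if anyone) provided human oversight, so the
    outcome can later be justified to a regulator or a customer.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # records model changes over time
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                   # traces inputs without storing raw personal data
        "output": output,
        "rules_applied": rules_applied,  # the documented decision logic
        "human_reviewer": reviewer,      # human oversight, if any
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: an AI-informed credit decision, signed off by a named reviewer
record_decision(
    model_id="credit-scorer",
    model_version="2.3.1",
    inputs={"applicant_id": "A-1042", "income_band": "C"},
    output="refer_to_underwriter",
    rules_applied=["policy-7.2: borderline scores go to human review"],
    reviewer="j.smith",
)
```

Even a simple log like this answers the three questions a regulator is most likely to ask: what data the decision rested on, which version of the model produced it, and which human was accountable for it.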
Undervaluing the level of AI risk
Other traps companies fall into include confusing “low risk” with “no risk”, and equating the level of AI harm with the level of regulatory interest: the assumption that, because regulators have not taken enforcement action, a practice must be compliant, or at least “below the radar”.
Companies also often assume that internal or low-risk AI use falls outside privacy law, when, in fact, the GDPR and the EU AI Act apply the moment personal data is used for training or inference.
Execs ignoring AI governance
For some time, companies have been grappling with how to control employees’ use of unauthorised “shadow” AI tools in the workplace, but it seems that executives may be the most likely to flout the rules around AI governance and put the organisation at risk. There are several reasons for this: senior leaders have access to the organisation’s most commercially and legally sensitive data and hold the highest level of decision-making authority, yet often operate with limited safeguards, as managers feel ring-fenced from the rules that bind more junior employees. Executives are also under intense pressure to perform and show leadership, so any technology that can save time, crunch data, and inform decision-making quickly and easily becomes an over-relied-upon tool, irrespective of the associated risks. The issue is so serious that many now regard AI risk as a leadership behaviour issue as much as a technology one.
Not involving compliance from the start
Experts say that many AI implementations create legal risk because compliance is not part of the process from the outset. Many of the mistakes companies make when deploying third-party AI tools begin at the procurement and onboarding stage, and often occur because buyers fail to clarify the supplier’s role under data protection law: whether the supplier acts as a processor or an independent controller; whether data will be retained, reused, or used to train or improve models; where data is stored and accessed; and what security measures apply in practice.
These mistakes are often compounded by the fact that standard supplier terms frequently limit third-party liability and exclude regulatory fines, enforcement action, and indirect loss, while the companies buying in AI tools often have no real plan (and sometimes no clue) about who in the organisation should have access to the technology, how its use should be controlled, and what business processes it should assist with. Worse still, staff are not trained or informed about what must not be input into AI tools, including special category personal data, legally privileged material, internal investigation content, client confidential information, and sensitive commercial data.
There are several steps companies can take to minimise these problems, and compliance is at the heart of that process. For instance, compliance teams can make AI adoption safer and faster by putting clear, practical guardrails in place around acceptable use, and by creating a route for escalation when higher-risk use cases appear. Where personal data is involved, compliance can ensure data protection impact assessments (DPIAs) and security reviews happen early, rather than as a last-minute sign-off. The function can also build AI checks into procurement processes so that suppliers must provide evidence on data handling, security, audit support, and how bias and model changes are managed. Compliance can also champion training and AI literacy so staff understand what should never be entered into external tools, and can ensure there is human oversight when AI is used to inform decisions.
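As one example of what a practical guardrail might look like, the sketch below screens text for obviously prohibited content before it reaches an external AI tool. The screen_prompt function and its patterns are assumptions made for this illustration; a real control would rely on vetted classifiers and the organisation’s own data handling policy, backed by training rather than pattern-matching alone.

```python
import re

# Illustrative patterns only: a real policy list would be far more
# thorough and maintained by compliance, not hard-coded by developers.
PROHIBITED_PATTERNS = {
    "legally privileged material": re.compile(r"\b(privileged|attorney.client)\b", re.I),
    "internal investigation content": re.compile(r"\binvestigation (report|file)\b", re.I),
    "special category personal data": re.compile(r"\b(health record|trade union|ethnicity)\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Return the policy categories the text appears to violate."""
    return [category for category, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

def submit_to_external_ai(text: str) -> None:
    violations = screen_prompt(text)
    if violations:
        # Block and escalate rather than silently sending the data out.
        raise PermissionError(
            f"Blocked by AI acceptable-use policy: {', '.join(violations)}. "
            "Escalate to compliance before proceeding."
        )
    print("Prompt passed screening; forwarding to approved AI tool...")

submit_to_external_ai("Summarise this quarter's sales figures.")          # passes
# submit_to_external_ai("Summarise the investigation report on J. Doe")  # would be blocked
```

The point is not the pattern list itself but the shape of the control: a clearly defined policy, an automatic check before data leaves the organisation, and a defined escalation route when something is blocked.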