Ethical risks of AI in the workplace
Ethical risks of AI are often discussed in terms of bias, privacy, and accountability, but new research shows how AI in the workplace can create unexpected challenges. Compliance officers spend plenty of time pondering how artificial intelligence might change the way their companies approach compliance and risk management. An intriguing new study, however, reminds us that we also need to consider how a corporation's use of AI might challenge employees' ethical behavior – highlighting risks of AI in the workplace that compliance teams can't ignore.
The study was published in September in the esteemed research journal Nature. Through a series of clever experiments, social scientists found that when people use AI to complete various tasks, they’re more likely to engage in unethical behavior – sometimes, a lot more likely.
If that’s true (and there’s ample reason to suspect it is), compliance officers need to pay more attention to how AI systems are adopted throughout your organization – as well as the policies, training, disciplinary enforcement, and executive messaging you might need in place to ensure employee conduct stays on the ethical path.
Study shows dishonest behavior increases when AI is used
This research highlights one of the most overlooked ethical risks of AI in the workplace: delegating tasks to AI can increase dishonest behavior. Let’s first look at the research itself. Scientists had 8,000 people perform a series of 13 tests. In each test, participants used AI to report the results, with varying degrees of control over how the AI reported those results.
For example, in one test, people had to roll dice and report the number that turned up. The higher the number, the more the person was paid. When people had to report the number themselves, roughly 95% were honest about what they rolled. When people used an AI to report the number, however, only 75% had the AI report the correct number. The rest had the AI report a false number that would earn them more money.
In the worst example, people only had to tell the AI what goal to pursue. That is, they could tell the AI, “Report the number that’s most accurate,” or “Report the number that earns me more money.” Eighty-four percent told the AI to report a number that earned them more money, and more than one-third directed the AI to always report a number that earned them the most money.
This points to a serious ethical risk of AI in the workplace: AI systems can unintentionally enable dishonest behavior.
A valuable clue about all this unethical behavior lies in the very title of the research paper: “Delegation to artificial intelligence can increase dishonest behavior.”

Delegation to artificial intelligence can increase ethics risk
The insight is that when humans can delegate a task to AI, that’s when they’re more likely to indulge in unethical behavior. The AI system creates a layer between employee and organization, so it’s easier for the human to disconnect from the ethical dimensions of the task at hand.
When people had to report what they rolled directly to researchers, almost everyone was honest. When they had to report it through AI, 75% were honest. When they could configure the AI any way they wanted, almost everyone abandoned honest reporting. The more distance the AI put between the person and the organization, the less ethical the person was.
As the researchers themselves wrote, “Using AI creates a convenient moral distance between people and their actions – it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans.”
This moral distancing effect is a key risk of AI in the workplace – one that governance teams must address in their AI governance programs.
Fitting ethics risks into AI governance
This emerging issue is also a significant AI governance risk, one that should be integrated into policies, charters, and risk assessments. The good news is that most compliance officers already play at least some role in deciding how AI is used at their organizations as part of broader AI governance, risk, and compliance. According to the NAVEX 2025 State of Risk and Compliance Report, 33% of compliance officers are “very involved” in AI discussions and another 32% are “somewhat involved.”
Now the question is how you fit concerns about unethical behavior – or more precisely, the potential for an AI system to tempt people into unethical behavior – into your governance program and risk assessments.
For example, if your AI governance committee has a charter (it should), that charter should require that every AI use case up for consideration include a discussion of how that use case might increase the risk of unethical conduct. You could then ask questions such as:
- How could employees use this new AI system to cheat on their business goals?
- If they could somehow cheat, what monitoring, audits, or compensating controls could we use to intercept that behavior and mitigate risks of AI in the workplace?
- Should we change employees’ incentives to reduce the temptation to cheat?
- What metrics would help us understand if employees are abusing the AI somehow? Are we able to track those metrics?
These considerations align with broader compliance strategies for mitigating AI risks addressed in NAVEX guidance.
While employees might be the primary group deserving your attention, they aren’t the only ones who pose risks of AI in the workplace. You should also assess whether AI might entice suppliers, business partners, or even customers to behave unethically. This is especially important considering broader systemic AI risks that can ripple across organizational boundaries.
For example, if you allow an AI chatbot to interact with vendors about new contracts or unpaid invoices, what’s the risk that a deviously phrased question might prompt the chatbot to disclose confidential information? If it interacts with customers, could they potentially trick the bot into giving them multiple refunds? And so forth.
The true question here is whether the business team’s grand design for the AI system – “we want it to serve this role in our business process” – will create too much of that moral distance described by the researchers. The more AI becomes a layer between human and organization, the greater the chance the human will rationalize, “I’m not doing the bad thing; the AI is, and the company allows the AI to behave that way.” That’s the start of a slippery slope that leads to nowhere good.
Seize the issue
This is one of the most overlooked AI governance challenges. In a roundabout way, this emerging ethical risk can help ethics and compliance officers, because it’s yet another example of why the compliance function must be part of your organization’s AI governance.
You already have experience with risk assessments and ethical culture. You understand the practical details of internal controls, policy management, and monitoring to keep ethics and compliance risks in check.
Now we need to bring that expertise to the emerging risk of artificial intelligence – the sooner, the better.
FAQ on AI in the workplace and ethical risks
What are the risks of AI in the workplace?
One major concern is bias in AI systems. If the data used to train AI models is incomplete or skewed, the resulting decisions can perpetuate discrimination in hiring, promotions, or performance evaluations. These AI bias issues can lead to reputational damage and even legal consequences.
Another significant risk is data privacy and security. AI systems often rely on large volumes of sensitive information. Without proper safeguards, this data can be exposed to breaches or misused – posing serious dangers of AI in the workplace. For example, an AI-powered chatbot might inadvertently share confidential employee or customer information if not properly configured.
There are also problems with AI in the workplace related to transparency and accountability. Many AI systems operate as “black boxes,” making it difficult to understand how decisions are made. This lack of explainability can erode trust and complicate compliance with regulations.
Finally, AI governance challenges are growing. As AI becomes more embedded in business processes, organizations must ensure their AI ethics policy and governance frameworks are solid. This includes assessing how AI use might tempt users to act unethically, as well as implementing controls to prevent misuse by employees, vendors, or customers.
In short, the AI governance risk landscape is complex and evolving. Addressing both the technical and ethical dimensions of artificial intelligence in the workplace is essential for building a responsible and resilient AI strategy.
How can AI increase unethical behavior at work?
AI can increase unethical behavior by creating what researchers call “moral distance.” When employees use AI in the workplace to perform tasks, they may feel detached from the consequences of their actions. This detachment can lead to decisions they wouldn’t necessarily make if acting directly. For example, studies show that people are more likely to lie or cheat when AI is the intermediary. This is one of the more subtle but dangerous effects of AI on the workplace, and it underscores the importance of addressing ethical AI considerations at every stage of AI deployment.
What governance steps can address ethical risks of AI?
To address the ethical risks of AI in the workplace, organizations should embed AI ethics and governance into their broader compliance and risk management strategies. This includes:
- Requiring ethical risk assessments for all AI use cases
- Updating AI governance charters to include AI governance challenges like moral distancing
- Implementing training programs that reinforce accountability when using AI tools
- Monitoring AI systems for misuse or unintended consequences
These steps help mitigate AI governance risk and ensure that artificial intelligence in the workplace supports – not undermines – ethical behavior.
How can compliance teams mitigate AI governance risks?
Compliance teams play a critical role in managing the dangers of AI in the workplace. They can:
- Collaborate with IT and business leaders to evaluate AI use cases through an ethical lens
- Develop policies that address AI ethical issues, including misuse, bias, and transparency
- Introduce controls and audits to detect unethical behavior enabled by AI
- Adjust incentive structures to reduce the temptation to exploit AI systems
By integrating AI ethics policy and governance into existing compliance frameworks, teams can proactively address the problems with AI in the workplace before they escalate.