
Key data protection considerations for agentic AI in the UK

In January 2026, the UK Information Commissioner’s Office (ICO) published a report reflecting its “early-stage thinking on speculative opportunities and risks” of agentic AI use in the context of organizations’ data protection obligations. The ICO directed organizations not to interpret the report as guidance or formal regulatory expectations on agentic AI.  

Rather than cover the report exhaustively, this article summarizes the data protection and privacy risks the ICO highlights. A key theme throughout the report’s discussion of those risks is that organizations’ responsible use of agentic AI must ensure compliance with the UK General Data Protection Regulation (UK GDPR).

“Many of the data protection issues with agentic AI applications are similar to those raised by other types of AI, and in particular, generative AI,” the ICO stated. “In some cases, however, the characteristics and capabilities of agentic AI could exacerbate existing data protection issues or introduce new ones.” 

“We are starting to see these risks emerge with current agentic AI systems,” the ICO continued. “We anticipate these risks growing with increasing capability and adoption of agentic AI, unless efforts are made to mitigate them.”


Human responsibility and controllership

The ICO reminds organizations that agentic AI systems are not, and should not be considered, legal entities that organizations may blame for errors. “In the context of data protection, AI agency does not mean the removal of human, and therefore organizational, responsibility for data processing,” the ICO stated. “Organizations must be clear on the expectations that still apply under data protection legislation. They remain responsible for using personal information in an appropriate fashion.” 

Governance

Measures to govern agents and agentic systems, the ICO wrote, “must be flexible enough to handle changes in priorities, goals, tasks and the environment in which the agent is operating” and “must also consider how these systems might develop in future, as capabilities and functionality advance.”

As a proposed governance framework, the ICO referenced the Safer Agentic AI Foundations, a comprehensive guide offering an end-to-end overview of how to secure agentic operations, with a heavy focus on continuous documentation, monitoring and review.

The ICO further stressed that placing “sole responsibility for creating, applying, and maintaining these value-derived governance frameworks on the end user would not be universally applicable or suitable.” Rather, the suppliers of those systems have a responsibility both to “employ good governance before the point of sale; and ensure that the systems they provide are suitable for customers and their tasks.” 

Automated decision-making

The UK GDPR requires that individuals be informed about automated decisions made about them (i.e., decisions made without meaningful human involvement). Examples include an online decision to award a loan, or an online recruitment test that uses pre-programmed algorithms and criteria.

The ICO reminded organizations that data protection obligations also apply when organizations build or deploy agentic AI. Organizations using automated decision-making (ADM) in agentic AI systems would need to consider: 

  • How the decisions impact individuals 
  • How to clearly communicate the use of automation to data subjects 
  • How to put in place systems that allow data subjects to contest decisions 
  • How to effectively and meaningfully allow for intervention in agentic AI decision-making (see the sketch below) 
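
To make the last two considerations concrete, here is a minimal, illustrative Python sketch of how an agentic pipeline might record each automated decision, generate a plain-language notice for the data subject, and route contested decisions to a human reviewer. All names, fields and the notice wording are hypothetical assumptions, not ICO-prescribed structures.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a minimal record that an agentic pipeline could
# emit for every automated decision, so the organization can (a) tell
# data subjects a decision was automated, (b) explain the main factors,
# and (c) route contested decisions to a human reviewer.

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                 # e.g. "application_declined"
    main_factors: list[str]      # plain-language factors for the notice
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    contested: bool = False
    human_review_outcome: str | None = None

    def notice_text(self) -> str:
        """Plain-language transparency notice for the data subject."""
        return (
            f"This decision ({self.outcome}) was made by an automated "
            f"system based on: {', '.join(self.main_factors)}. "
            "You have the right to contest it and request human review."
        )

    def contest(self, review_queue: list["AutomatedDecision"]) -> None:
        """Mark the decision as contested and queue it for human review."""
        self.contested = True
        review_queue.append(self)


# Usage: record a decision, show the notice, and contest it.
queue: list[AutomatedDecision] = []
decision = AutomatedDecision(
    "subject-42",
    "application_declined",
    ["income below threshold", "short credit history"],
)
print(decision.notice_text())
decision.contest(queue)
assert queue[0].contested
```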

On March 31, 2026, the ICO published a new report setting out its expectations for organizations that use ADM in hiring decisions, including where and how safeguards must be applied. It also launched a consultation on the draft ADM guidance and is seeking comments until May 29. 

“Use of AI and automation is rapidly transforming recruitment across the UK, from helping sift CVs to scoring online assessments,” stated William Malcolm, executive director for regulatory risk and innovation at the ICO. “We want to support organizations to take advantage of both recent changes to the law and these new tools, but responsible innovation and adoption of this new technology require safeguards to be in place to protect jobseekers, which are foundational to public trust.”

Purpose limitation and data minimization

Article 5(1)(c) of the UK GDPR requires personal information to be “adequate, relevant, and limited to what is necessary” for the purposes for which it is processed. In the context of agentic AI, organizations must “have a clear purpose for collecting and processing information used by an agentic AI system” and “communicate that purpose clearly,” the ICO stated in its report. This includes information collected both during the development phase and during the system’s operation. 

Each stage of agentic AI development involves processing different types of personal information for various purposes. “Having a specified purpose for each stage allows an organization to understand the scope of each processing activity, evaluate its compliance with data protection, and help to evidence that,” the ICO stated. The ICO encourages organizations looking to deploy agentic AI systems to be aware of guidance it is producing on purpose limitation in the GenAI lifecycle. 

Once an organization has defined the purpose of an agentic AI system’s use, it can then establish the type and volume of information needed to fulfill that purpose. The ICO reminds organizations that they “must have a justifiable reason for the agentic AI to access and use the information.” 
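
One way to operationalize this kind of data minimization is a purpose-based allowlist that strips out any fields the declared purpose does not justify before a record ever reaches the agent. The Python sketch below is a hypothetical illustration; the purposes and field names are assumptions, not drawn from the ICO report.

```python
# Hypothetical sketch of data minimization in practice: for each
# declared purpose, an allowlist defines which personal-data fields
# the agentic system may receive; everything else is dropped before
# the data reaches the agent.

FIELDS_BY_PURPOSE = {
    "appointment_scheduling": {"name", "email", "availability"},
    "payroll_query": {"name", "employee_id", "salary_band"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields justified by the declared purpose."""
    allowed = FIELDS_BY_PURPOSE.get(purpose)
    if allowed is None:
        raise ValueError(f"No declared purpose: {purpose!r}")
    return {k: v for k, v in record.items() if k in allowed}

# Usage: the agent sees only what the scheduling purpose justifies.
full_record = {
    "name": "Jane",
    "email": "jane@example.com",
    "salary_band": "B2",
    "home_address": "12 Example Street",
}
print(minimize(full_record, "appointment_scheduling"))
# -> {'name': 'Jane', 'email': 'jane@example.com'}
```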

Organizations should carefully select which tools and databases an agentic system may access. Developers can design systems with controls, such as requiring a human’s permission before an agent accesses personal information. Other approaches the ICO report highlighted as good governance practices include masking personal information, age verification, system permissions, and transparency notices. 
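
As an illustration of two of those controls, the following hypothetical Python sketch gates an agent’s access to personal-data sources behind human approval and masks email addresses before text reaches the agent’s context. The source names, regex and functions are illustrative assumptions; a production system would use a proper policy engine and dedicated PII-detection tooling.

```python
import re

# Hypothetical sketch: a human-permission gate in front of data
# sources that hold personal information, plus simple masking of
# personal data in text passed back to the agent.

SOURCES_WITH_PERSONAL_DATA = {"crm_contacts", "hr_records"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_personal_info(text: str) -> str:
    """Mask email addresses before text reaches the agent's context."""
    return EMAIL_RE.sub("[REDACTED EMAIL]", text)

def agent_read(source: str, fetch, ask_human) -> str:
    """Gate access: personal-data sources need explicit human approval."""
    if source in SOURCES_WITH_PERSONAL_DATA:
        if not ask_human(f"Agent requests access to '{source}'. Allow?"):
            return "[ACCESS DENIED BY HUMAN REVIEWER]"
    return mask_personal_info(fetch(source))

# Usage with stubbed-in fetch/approval functions:
record = agent_read(
    "crm_contacts",
    fetch=lambda s: "Contact Jane via jane@example.com",
    ask_human=lambda prompt: True,  # stand-in for a real approval UI
)
print(record)  # -> "Contact Jane via [REDACTED EMAIL]"
```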

Special category data

Agentic AI systems may draw upon or generate “special category” data in unexpected ways, triggering enhanced requirements under the UK GDPR. Organizations must therefore assess whether an agentic AI system could use information to infer, and then rely on, special category data in pursuit of its purpose. If so, they should ensure they have a valid lawful basis and can satisfy the conditions of Article 9, the ICO highlighted. 

Additionally, individuals should be made aware of the agentic AI system’s potential to infer and use their special category data. Alternatively, organizations could adopt technical measures to restrict the agentic system’s ability to infer and use this type of data. The ICO encourages organizations to review its guidance on special category data and consent as a basis for processing.

Transparency

The GDPR’s transparency principle requires organizations that process personal information to be clear, open and honest about how and why they use it. The use of agentic systems is no exception. “To ensure that personal information is processed transparently in agentic systems, organizations must consider their obligations and how they will meet them before they begin processing,” the ICO stated. Citing its guidance on data protection impact assessments (DPIAs), the ICO noted that organizations must carry out a DPIA if they assess that deploying agentic systems will result in a high risk to personal information. The ICO further encourages organizations to review its guidance on ensuring transparency in AI.

Accuracy

Article 5(1)(d) of the UK GDPR requires personal information to be “accurate” and for any inaccuracies to be corrected promptly. A particular risk with agentic AI is that it can produce inaccurate outputs, known as “hallucinations,” which occur when a large language model (LLM) presents invented or misinterpreted facts. “How these situations can be monitored and responded to appropriately is a key issue for organizations using agentic systems,” the ICO noted, directing organizations to review its response to its call for views on generative AI.
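
As a hypothetical illustration of one monitoring approach, the Python sketch below compares an agent’s factual claims about an individual against an authoritative record and flags mismatches for human review. The record structure and claim format are assumptions for illustration, not ICO guidance.

```python
# Hypothetical sketch: before an agent's statement about an individual
# is acted on, compare its factual claims against the authoritative
# record and flag any mismatches for human review.

AUTHORITATIVE_RECORD = {
    "subject-42": {"employer": "Acme Ltd", "city": "Leeds"},
}

def check_claims(subject_id: str, claims: dict[str, str]) -> list[str]:
    """Return the claims that contradict the source of truth."""
    record = AUTHORITATIVE_RECORD.get(subject_id, {})
    return [
        f"{field}: agent said {value!r}, record says {record[field]!r}"
        for field, value in claims.items()
        if field in record and record[field] != value
    ]

# Usage: an agent asserting an incorrect city gets flagged for review.
mismatches = check_claims(
    "subject-42", {"employer": "Acme Ltd", "city": "York"}
)
print(mismatches)  # -> ["city: agent said 'York', record says 'Leeds'"]
```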

Individual rights and fairness

Organizations developing or deploying agentic systems that process personal information must implement data protection by design and consider how the processing could impact individual rights. Designing agentic systems with data protection by design in mind enables “transparency, accountability and meaningful human intervention,” the ICO stated, and is essential to upholding individual rights. 

The fairness principle establishes that organizations should not process personal information in a way that is “unduly detrimental, unexpected, or misleading.” Because agentic systems react to and learn from their environments, organizations must ensure that the ADM processes those systems use do not drift beyond the scope the organization originally envisioned. The ICO refers organizations to its guidance on lawfulness, fairness and transparency for further insight. 

Agentic AI security

Data protection law requires organizations to protect the information they process by adopting appropriate technical and organizational measures, such as those described in the ICO’s guidance on data protection principles. Given the security risks posed by agentic AI, the ICO recommends that organizations also refer to resources like the Open Web Application Security Project’s (OWASP) threats and mitigations list. 

Next steps: engagement

Organizations that want to contribute to the conversation on agentic AI should reach out to emergingtechnology@ico.org.uk. The ICO said it will continue to hold workshops with industry to gather further information on agentic AI, including on agentic capabilities and adoption, and how industry is mitigating data protection and privacy risks.