AI is already regulated. Organisations just haven’t caught up
3 May 2026

There is still a perception that artificial intelligence sits ahead of regulation.
From a UK GDPR perspective, it doesn’t.
The legal framework is already in place. The issue for most organisations is not a lack of regulation; it is a lack of structured governance around tools that are already in use.
This is a business risk, not just a compliance issue
AI is being embedded quickly into day-to-day operations:
- recruitment and screening
- performance management
- customer service automation
- risk scoring and prioritisation
- document drafting and summarisation
In many cases, this is happening without a clear understanding of:
- what personal data is being processed
- how decisions are being influenced
- who is accountable for the output
This creates exposure across:
- regulatory enforcement
- reputational damage
- poor decision-making based on untested outputs
Lawful basis still applies
The use of AI does not change the requirement to identify a lawful basis under Article 6 UK GDPR.
Organisations should be able to clearly articulate:
✔️ what the processing is
✔️ why it is necessary
✔️ which lawful basis applies
For example:
👉 Public authorities will typically rely on public task, where there is a clear legal function
👉 Employers may rely on legitimate interests, but only where a proper balancing test has been carried out
👉 Contractual processing may apply in service delivery contexts
Where special category data is involved, an additional Article 9 condition must also be identified.
In practice, this is often missing or assumed rather than documented.
Automated decision-making and profiling
Where AI tools influence decisions about individuals, organisations must consider whether Article 22 is engaged.
This applies where decisions are:
- made solely by automated means; and
- produce legal or similarly significant effects
Examples may include:
☁️ automated shortlisting
☁️ eligibility decisions
☁️ risk scoring affecting service access
Where Article 22 applies, organisations must ensure:
✔️ a valid lawful basis
✔️ appropriate safeguards
✔️ meaningful human involvement
✔️ clear transparency
A superficial human check is unlikely to meet this threshold.
The governance gap
A consistent issue across organisations is the gap between deployment and governance.
Tools are being adopted because they are available and useful. Governance is often retrospective, if it happens at all.
Key gaps typically include:
❌️ no DPIA completed prior to deployment
❌️ limited understanding of data flows and processors
❌️ no defined position on acceptable use
❌️ privacy notices that do not reflect reality
❌️ lack of internal ownership
This is where risk accumulates.
Accountability and oversight
The UK GDPR requires organisations to demonstrate accountability.
This means being able to evidence:
💻 decision-making around AI deployment
💻 risk assessments (including DPIAs where required)
💻 clear roles and responsibilities
💻 ongoing review and monitoring
Without this, organisations are exposed – not because AI is inherently problematic, but because its use has not been brought within existing governance structures.
Practical steps for organisations
Organisations do not need to stop using AI. They do need to bring it under control.
Immediate actions should include:
☁️ identifying all AI tools currently in use (including informal or individual use)
☁️ mapping personal data processed through those tools
☁️ confirming the lawful basis for each use case
☁️ assessing whether Article 22 is engaged
☁️ completing DPIAs where there is high risk
☁️ updating privacy information to reflect actual processing
☁️ setting clear internal expectations on use
Final thought
AI is not a future regulatory challenge.
It is a current operational reality that sits squarely within existing data protection law.
The organisations that will manage this well are not those that delay adoption but those that align innovation with governance from the outset.