UK and EU AI regulation: What organisations need to know in 2025

The regulatory landscape for AI in the UK and EU has undergone fundamental transformation. The EU AI Act became the world’s first comprehensive AI legislation in August 2024, while the UK maintains its principles-based approach but signals tighter controls for frontier models. For organisations deploying or procuring AI systems, understanding both frameworks, alongside existing data protection and equality laws, is now essential for compliance and risk management.

This guide covers the key provisions, enforcement trends, and best practice frameworks organisations need to navigate this evolving regulatory environment. Although brief, it is intended to provide a snapshot of some of the key themes in current legislation.

The EU AI Act establishes unprecedented AI-specific requirements

Regulation (EU) 2024/1689, published in the Official Journal on 12 July 2024 and entering into force on 1 August 2024, creates the world’s first horizontal AI regulation. The Act takes a risk-based approach, classifying AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories with corresponding obligations.

Prohibited practices are now enforceable. Since 2 February 2025, Article 5 bans several AI applications outright, including social scoring, untargeted scraping of facial images to build recognition databases, emotion recognition in workplaces and education, and manipulative techniques that exploit vulnerabilities.

Real-time biometric identification in public spaces is prohibited for law enforcement except in strictly limited circumstances, such as targeted searches for missing persons, prevention of imminent terrorist threats, or locating suspects of serious crimes, and even then requires prior judicial authorisation.

High-risk AI systems face substantial compliance requirements. Annex III identifies eight categories including AI used in employment and recruitment, education, creditworthiness assessment, law enforcement, and administration of justice. Providers must implement risk management systems (Article 9), ensure training data governance (Article 10), maintain technical documentation (Article 11), enable human oversight (Article 14), and conduct conformity assessments before market placement.

Fundamental Rights Impact Assessments (FRIAs) under Article 27 require public bodies and private entities providing public services to assess AI systems’ risks to fundamental rights before deployment. The assessment must identify affected persons, specific harm risks, and mitigation measures. For organisations already conducting DPIAs under GDPR, the FRIA complements rather than replaces these requirements.

Key implementation dates to note:

  • 2 February 2025: Prohibited practices enforceable; AI literacy obligations apply
  • 2 August 2025: General-purpose AI model obligations apply; governance structures operational; national authorities designated; penalty frameworks in place
  • 2 August 2026: Remainder of the Act applies, including obligations for most high-risk systems
  • 2 August 2030: Public authorities deploying existing high-risk AI must comply

Penalties for prohibited-practice violations reach up to €35 million or 7% of total worldwide annual turnover, whichever is higher, with lower tiers for other non-compliance.
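
The headline figure therefore works as a cap set by whichever of the two measures is higher. A minimal arithmetic sketch (the turnover figures below are hypothetical and purely illustrative):

    # Illustrative only: the top penalty tier is the higher of a fixed amount and a
    # percentage of total worldwide annual turnover. All figures here are hypothetical.
    def prohibited_practice_cap(turnover_eur: float,
                                fixed_cap_eur: float = 35_000_000,
                                pct_of_turnover: float = 0.07) -> float:
        """Upper bound of the fine for a prohibited-practice violation."""
        return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

    print(prohibited_practice_cap(2_000_000_000))  # EUR 140m: 7% exceeds the fixed EUR 35m cap
    print(prohibited_practice_cap(100_000_000))    # EUR 35m: the fixed cap is the higher figure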

The UK takes a different path with sector-specific principles

The UK explicitly rejected EU-style horizontal legislation, instead establishing a pro-innovation regulatory framework through the March 2023 White Paper “AI Regulation: A Pro-Innovation Approach” (Command Paper CP 815). Rather than risk categories, the framework applies five cross-sectoral principles through existing regulators:

  1. Safety, security, and robustness – continuous risk identification and management
  2. Appropriate transparency and explainability – adequate information to relevant parties
  3. Fairness – no undermining of legal rights or unfair discrimination
  4. Accountability and governance – clear responsibility across the AI lifecycle
  5. Contestability and redress – ability to challenge harmful AI decisions

Multiple regulators hold AI responsibilities. The ICO leads on data protection aspects and has published comprehensive guidance on AI and data protection. The CMA monitors competition concerns in foundation model markets. Its April 2024 update identified an “interconnected web” of over 90 partnerships involving Google, Microsoft, Meta, Amazon, Apple, and Nvidia as a key concern. The FCA applies existing principles (Consumer Duty, SYSC rules) to AI in financial services without additional AI-specific regulation. The EHRC ensures Equality Act compliance, while Ofcom addresses AI harms under the Online Safety Act.

The Digital Regulation Cooperation Forum (ICO, CMA, FCA, Ofcom) provides coordinated guidance through its AI and Digital Hub, which operated as a pilot from April 2024 offering free informal advice on cross-regulatory AI questions.

Recent developments signal evolution. The July 2024 King’s Speech announced the government will “establish appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.” The AI Safety Institute (renamed AI Security Institute in February 2025) conducts pre-deployment evaluations of frontier models, including work with OpenAI and Anthropic. In January 2025 the government accepted all 50 recommendations of the AI Opportunities Action Plan, including AI growth zones, increased public compute capacity, and a National Data Library.

However, a comprehensive UK AI Bill is not expected before the second half of 2026. This creates a dual compliance landscape: organisations operating in both markets must meet EU requirements for EU market access while navigating UK principles-based expectations.

GDPR provisions create binding obligations for ML systems

Both UK GDPR and EU GDPR contain provisions directly applicable to AI systems, with Article 22 on automated decision-making receiving particular attention.

Article 22 establishes a general prohibition on decisions “based solely on automated processing, including profiling, which produces legal effects concerning [a person] or similarly significantly affects” them. The EDPB confirms this operates as a prohibition, not merely an opt-out right. Exceptions exist only for contract necessity, explicit consent, or legal authorisation, each requiring safeguards including the right to human intervention, the right to express a view, and the right to contest decisions.

The interpretation of “solely automated” is crucial. Processing remains caught by Article 22 even if a human inputs data, unless someone meaningfully “weighs up and interprets the result” before application. Human oversight must be genuine; controllers cannot bypass Article 22 through token involvement or “rubber-stamping.” Reviewers must have both authority and competence to change decisions, with access to all relevant data.
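
A minimal sketch of how that expectation might be reflected in a decision pipeline is shown below (the structure, field names, and checks are illustrative assumptions, not requirements taken from the regulation or regulator guidance): the model output is treated only as a recommendation, and a reviewer with authority to change it must record their reasoning before any outcome is applied to the individual.

    # Hypothetical sketch: routing an automated recommendation through a human
    # reviewer so the final decision is not "based solely on automated processing".
    # Field and function names are illustrative, not drawn from the regulation or guidance.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReviewedDecision:
        subject_id: str
        model_recommendation: str      # e.g. "decline application"
        reviewer_id: str
        reviewer_reasons: str          # evidence the reviewer weighed up the result
        final_outcome: str

    def finalise(subject_id: str, recommendation: str, reviewer_id: str,
                 reviewer_has_authority: bool, reviewer_reasons: str,
                 override: Optional[str] = None) -> ReviewedDecision:
        # A reviewer without authority to change the outcome, or who records no
        # reasoning, risks being treated as rubber-stamping rather than meaningful review.
        if not reviewer_has_authority or not reviewer_reasons.strip():
            raise ValueError("review requires authority to change the outcome and documented reasoning")
        return ReviewedDecision(subject_id, recommendation, reviewer_id,
                                reviewer_reasons, override or recommendation)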

Data minimisation (Article 5(1)(c)) creates tension with ML’s data appetite. The principle does not prevent large training datasets, but data should be “selected and cleaned to optimise algorithm training while avoiding unnecessary processing.” Organisations should document justification for data volumes and consider anonymisation, synthetic data, or privacy-preserving techniques.
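
As a hedged illustration of what selecting and cleaning data for training can look like in practice (the column names and salted-hash pseudonymisation below are assumptions for the example, not prescribed techniques), a preparation step might drop fields the model does not need and replace direct identifiers before data reaches the training environment:

    # Illustrative sketch of data minimisation before model training.
    # Column names and the salted-hash pseudonymisation are assumptions for the example.
    import hashlib
    import pandas as pd

    SALT = "rotate-and-store-separately"                             # held outside the training environment
    FEATURES = ["tenure_months", "avg_balance", "missed_payments"]   # justified and documented

    def prepare_training_data(raw: pd.DataFrame) -> pd.DataFrame:
        df = raw.copy()
        # Pseudonymise the direct identifier; the re-identification key stays with the controller.
        df["record_id"] = df["customer_id"].apply(
            lambda v: hashlib.sha256((SALT + str(v)).encode()).hexdigest())
        # Keep only the fields the model actually needs, plus the pseudonymous key.
        return df[["record_id"] + FEATURES]

    raw = pd.DataFrame({
        "customer_id": [101, 102],
        "full_name": ["A. Example", "B. Example"],   # never reaches training
        "tenure_months": [24, 6],
        "avg_balance": [1500.0, 230.0],
        "missed_payments": [0, 2],
    })
    print(prepare_training_data(raw))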

Purpose limitation (Article 5(1)(b)) presents challenges for repurposing existing data for AI training. Most organisational data was not collected for this purpose, requiring compatibility assessments under Article 6(4) considering the link between purposes, context of collection, nature of data, consequences for subjects, and available safeguards.

DPIAs are mandatory under Article 35 for systematic, extensive profiling producing legal or similarly significant effects, capturing most consequential AI applications. The CNIL considers foundation model development requires a DPIA regardless of AI Act classification.

A significant UK-EU divergence is emerging. Under the Data (Use and Access) Act 2025 (effective 19 June 2025), UK GDPR shifts automated decision-making from prima facie prohibited to prima facie permitted where non-special category data is processed with safeguards. This represents a material liberalisation from the EU position.

Equality Act obligations apply to algorithmic systems

The UK Equality Act 2010 protects nine characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. Both direct and indirect discrimination can arise through AI systems.

Indirect discrimination through algorithms occurs when a provision, criterion, or practice (which an algorithm constitutes under Section 19) puts persons sharing a protected characteristic at particular disadvantage. This applies even without discriminatory intent and even when protected characteristics are excluded from the model. Proxy variables correlated with protected characteristics (postcode as proxy for race, for example) can produce unlawful outcomes. The only defence is showing the practice is a proportionate means of achieving a legitimate aim.
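
A minimal sketch of the kind of pre-deployment check this points towards (the data, group labels, and the 0.8 trigger ratio, borrowed from US adverse-impact practice rather than any UK legal test, are illustrative assumptions): compare outcome rates for groups sharing a protected characteristic, even where that characteristic is not a model input.

    # Hypothetical sketch: measuring whether an automated selection rule puts one
    # group at a particular disadvantage. Data, groups, and the 0.8 ratio are illustrative.
    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group_label, selected_bool)."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    records = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
            + [("group_b", True)] * 50 + [("group_b", False)] * 50
    rates = selection_rates(records)
    benchmark = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / benchmark
        flag = "review for objective justification" if ratio < 0.8 else "no flag"
        print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")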

The landmark Bridges case established crucial precedent. In R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, the Court of Appeal found South Wales Police’s automated facial recognition unlawful on multiple grounds including violation of the Public Sector Equality Duty (Section 149). The police failed to enquire whether the software had racial or sex bias, failed to verify the algorithm, and did not know the training dataset composition. The court held public authorities must proactively investigate potential algorithmic bias before deployment, not retrospectively.

Deployers bear primary liability. Employers and service providers are responsible for AI decisions in their operations. Developers may also face liability if algorithms were inherently discriminatory as designed or if they provided inadequate warnings about bias risks. Claimants must establish a prima facie case of disadvantage, at which point the burden shifts to respondents to prove objective justification.

Enforcement actions demonstrate regulatory priorities

Facial recognition has attracted the largest fines. Clearview AI has accumulated approximately €100 million in fines across European jurisdictions: €30.5 million from the Netherlands (May 2024), €20 million each from France, Italy, and Greece, and £7.5 million from the UK ICO. The company scraped billions of facial images to build a database without lawful basis, failing to inform data subjects or appoint an EU representative. The Netherlands DPA is investigating whether company directors can be held personally liable. Enforcement challenges persist as Clearview has been “systematically ignoring” EU regulators.

The first generative AI fine arrived in December 2024. The Italian Garante fined OpenAI €15 million for ChatGPT violations including processing without legal basis, inadequate transparency, failure to verify user ages, and data breach notification delays. OpenAI was also ordered to run a six-month public awareness campaign about data collection practices.

Workplace biometric surveillance faced ICO action. In February 2024, the ICO issued enforcement notices against Serco Leisure for using facial recognition and fingerprint scanning to monitor 2,000+ employees since 2017. The ICO found no lawful basis for biometric processing, noting the “imbalance of power” between employer and employees meant consent could not be freely given. Serco was ordered to stop all biometric processing immediately and destroy the data.

Algorithmic discrimination in public services has caused serious harm. The Dutch childcare benefits scandal saw a self-learning algorithm use “foreign nationality” as a fraud risk factor, falsely accusing 26,000-35,000 parents, disproportionately from ethnic minorities, of fraud. Families were ordered to repay tens of thousands of euros, pushing many into poverty. The scandal brought down the Dutch government in January 2021, with the government later admitting “institutional racism” was the root cause.

UK welfare algorithms face ongoing scrutiny. Freedom of Information requests revealed the DWP’s fraud detection algorithm showed “statistically significant outcome disparity” across protected characteristics including age, disability, and nationality. The Greater Manchester Coalition of Disabled People, supported by Foxglove, is bringing legal action claiming the system is “unfair and discriminatory.” Amnesty International’s July 2025 report called for independent review and scrapping of systems violating human rights.

AI training data practices are under investigation. Meta paused EU AI training in June 2024 following DPC and NOYB intervention, resuming only after implementing additional safeguards. The Irish DPC opened a formal investigation into X/Grok in April 2025, examining whether processing EU users’ posts for Grok training was lawful. The EDPB’s December 2024 Opinion 28/2024 confirmed that unlawful training data can result in fines, processing limitations, or orders to erase datasets or entire AI models.

Authoritative frameworks provide practical implementation guidance

The ICO’s AI guidance remains the primary UK reference, covering DPIAs, lawfulness, transparency, and fairness across the AI lifecycle. The “Explaining Decisions Made with AI” guidance (with the Alan Turing Institute) addresses explainability for different audiences. The ICO’s 2024 generative AI consultation series covered web-scraping, purpose limitation, accuracy, individual rights, and accountability allocation, with updated guidance expected.

European Commission resources include the 2019 Ethics Guidelines for Trustworthy AI establishing seven requirements (human oversight, technical robustness, privacy, transparency, non-discrimination, societal wellbeing, accountability) and the Assessment List for Trustworthy AI (ALTAI), an online self-assessment tool piloted with 350+ stakeholders.

The Alan Turing Institute published “Understanding Artificial Intelligence Ethics and Safety” as official UK public sector guidance, introducing the SUM Values Framework and FAST Track Principles. The 2023-2024 AI Ethics and Governance in Practice programme produced eight practical workbooks covering the SSAFE-D Principles (Sustainability, Safety, Accountability, Fairness, Explainability, Data-Stewardship).

The Ada Lovelace Institute developed an Algorithmic Impact Assessment framework with NHS AI Lab, the first detailed AIA proposal for healthcare globally, now being piloted by NHS England. Their work on algorithmic accountability includes supporting mandatory implementation of the Algorithmic Transparency Recording Standard for UK central government (56 published records by March 2025).

International standards are consolidating. The OECD AI Principles (updated May 2024) are adhered to by 47 countries and form the basis for G20 AI Principles. ISO/IEC 42001:2023 provides the world’s first AI management system standard with 38 specific controls. The NIST AI Risk Management Framework offers a complementary US perspective with four core functions (Govern, Map, Measure, Manage) and a companion Playbook.

International developments extend the human rights framework

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted May 2024 and opened for signature September 2024, represents the first legally binding international AI treaty. The UK signed alongside the EU, US, and other nations. The Convention requires documentation of AI systems, effective complaint mechanisms, iterative risk and impact assessments, and prevention measures, with the possibility of bans on certain applications. Entry into force awaits five ratifications, including three Council of Europe member states.

UK parliamentary scrutiny is intensifying. The Science, Innovation and Technology Committee’s May 2024 report concluded AI-specific legislation is required and recommended extending the Algorithmic Transparency Recording Standard to all public bodies. The January 2025 government response confirmed intention to consult on binding regulations for the most powerful AI models and announced £10 million funding for regulator AI capabilities. The Public Accounts Committee’s January 2025 findings highlighted implementation challenges: 28% of central government systems are legacy systems, 70% of departments report difficulty recruiting AI-skilled staff, and around half of digital/data campaign roles went unfilled in 2024.

The Artificial Intelligence (Regulation) Bill [HL], reintroduced March 2025, proposes creating an AI Authority, establishing regulatory sandboxes, and requiring businesses to designate an AI Officer. It remains at second reading stage.

Conclusion: What organisations should prioritise

The convergence of EU requirements, UK principles, data protection obligations, and equality duties creates a complex compliance landscape. Organisations should conduct gap analyses against EU AI Act requirements if operating in EU markets—prohibited practices are already enforceable. Fundamental Rights Impact Assessments and conformity assessments for high-risk systems must be in place before August 2026 deadlines.
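
One practical starting point for the gap analysis is an inventory of each AI system mapped to the obligations its risk category attracts. The sketch below is a simplified, non-exhaustive assumption about how such a register might be structured, not a compliance tool:

    # Illustrative, non-exhaustive sketch of an AI system register for gap analysis.
    from dataclasses import dataclass, field

    OBLIGATIONS_BY_CATEGORY = {
        "prohibited": ["cease use: Article 5 practices are banned outright"],
        "high-risk": ["risk management system (Art. 9)", "data governance (Art. 10)",
                      "technical documentation (Art. 11)", "human oversight (Art. 14)",
                      "conformity assessment before market placement"],
        "limited-risk": ["transparency notices to users"],
        "minimal-risk": ["voluntary codes of practice"],
    }

    @dataclass
    class AISystemRecord:
        name: str
        risk_category: str                      # one of the keys above
        controls_in_place: list = field(default_factory=list)

        def gaps(self) -> list:
            required = OBLIGATIONS_BY_CATEGORY[self.risk_category]
            return [o for o in required if o not in self.controls_in_place]

    cv_screener = AISystemRecord("CV screening model", "high-risk",
                                 ["risk management system (Art. 9)", "human oversight (Art. 14)"])
    print(cv_screener.gaps())   # remaining obligations to evidence before the 2026 deadline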

For UK operations, the principles-based framework requires documented governance demonstrating safety, transparency, fairness, accountability, and contestability. The ICO’s AI guidance, DPIA requirements, and meaningful human oversight for Article 22 decisions remain immediately applicable.

Equality Act compliance demands proactive bias testing before deployment, following the Bridges precedent. Documentation of training data composition and impact assessments is essential.

The enforcement trend is clear: regulators across jurisdictions are actively investigating and penalising AI systems that violate fundamental rights, particularly in facial recognition, workplace surveillance, public sector decision-making, and training data processing. Organisations deploying AI cannot treat compliance as optional—the accumulated penalties, reputational damage, and operational disruption from enforcement actions make robust governance frameworks a business necessity.
