“If we are to secure the opportunities and control the challenges of artificial intelligence, it is time to legislate and to lead”, declared Lord Holmes of Richmond during the second reading of the Artificial Intelligence (Regulation) Bill in March 2024 (1). “We need something that is principles-based and outcomes focused, with input transparent, permissioned and, wherever applicable, paid for and understood”.
Those words captured the urgency and ambition many hoped would shape the regulatory direction of the UK. Yet more than a year later, the UK remains without a binding framework. While pressure mounts from within Parliament, among creatives, and throughout the tech sector, the government’s hesitant approach stands in stark contrast to the decisive policy paths taken by Brussels and Washington (2).
Delay and Disagreement
The government first laid out its AI ambitions in the 2023 White Paper, promising legislation to follow. That timeline slipped in early 2024, with further delays now pushing expectations into mid-2026 (3) (4). With the Trump administration now back in office, geopolitical uncertainty has deepened, fuelling UK caution around committing to regulatory paths that might soon diverge from US priorities. Labour sources now suggest the AI Bill may not surface until the next King’s Speech, potentially in May 2026 (4).
Frustration with the government has grown as timelines slip, particularly during debates over the Data (Use and Access) Bill. The House of Lords attempted to retrofit it into an interim AI framework, repeatedly proposing amendments, most notably Amendment 49F, which would have required AI developers to disclose their use of copyrighted data and keep their data collection practices transparent (6). Ministers in the Commons rejected these changes outright, arguing that the legislation was never intended to address AI and that such amendments risk stifling innovation (7).
This legislative ‘ping pong’, the repeated exchange of amendments between the Commons and the Lords, reflects the underlying conflict between innovation and regulation: while the Lords frame the changes as essential to safeguarding the creative industries, the government continues to favour a slower, innovation-led approach to regulation.
On 11 June 2025, the dissatisfied Lords decided not to press the issue of AI within the Bill any further, securing a limited concession from the government: a commitment to publish a report on its copyright and AI proposals, with an interim report due within six months (5).
Alternative Proposals
Amid this legislative stalemate, Lord Holmes has introduced an alternative framework through his Private Members’ Bill, which proposes the creation of an AI Authority alongside transparency requirements, intellectual property controls, public engagement efforts, and alignment across sectors. Although the Bill has little chance of becoming law, it has gained cross-party support and praise from experts, acting as a catalyst for ongoing discussions about what government policy on AI could look like. Peers nevertheless remain cautious about its prospects: Lord Clement-Jones described the Bill as having a “fair wind” but admitted he was “pessimistic” that the government would formally adopt it (8).
Diverging Global Models
At the heart of the UK regulatory delay is a strategic hesitation between two global models: the risk-based, binding framework of the EU and the lighter, innovation-first approach of the US.
The EU AI Act, adopted in 2024, stands as the world’s first comprehensive regulation of artificial intelligence. It applies extraterritorially to AI providers and users operating within the EU, categorising AI systems by risk level, from minimal through limited and high to unacceptable. High-risk systems must meet strict requirements around transparency, human oversight, and data governance. Enforcement will be centralised through a newly established EU AI Office, with significant penalties for non-compliance (9).
By contrast, the United States has chosen a principles-based, light-touch regulatory path. Federal efforts focus on voluntary guidance, agency-specific recommendations, and executive orders, with oversight fragmented across states and sectors (10). National standards are still evolving, but the overarching priority is to avoid heavy-handed regulation in order to maintain global leadership in AI innovation.
The UK’s “Middle Way” – or Policy Paralysis?
Publicly, the UK government has adopted a pro-innovation, non-statutory stance. As outlined in its 2023 White Paper, A Pro-Innovation Approach to AI Regulation, no new legislation or dedicated regulator would be introduced in the short term (11). Instead, existing bodies like the ICO, Ofcom, and the CMA are expected to apply five broad principles to AI within their sectors: safety, transparency, fairness, accountability, and contestability.
However, support for this approach has worn thin. Lord Holmes’s Private Members’ Bill signals growing demand for a structured, risk-based framework more closely aligned with the EU model. While some experts argue that the UK’s hybrid strategy offers flexibility and supports innovation, others caution that, without clear statutory rules, it may leave the country “caught in the middle.”
Industry groups and think tanks have voiced concerns that the UK’s hesitation to commit fully to either the EU or the US regulatory model could undermine its global competitiveness and damage its credibility as a leader in trustworthy AI governance (12) (13).
Proposed Policy Solutions
Interim Framework for High-Risk AI Systems
To address pressing risks in critical sectors such as healthcare, law enforcement, and finance, mandatory risk assessments should be introduced. These assessments would evaluate factors including bias and discrimination, explainability or lack thereof, data provenance and potential misuse, emergent behaviours particularly from generative or foundation models, and security vulnerabilities such as adversarial attacks.
Alongside this, a temporary code of practice enforced through existing regulators would require AI developers to clearly document model purposes and limitations, ensure meaningful human oversight, and conduct thorough security and robustness testing. Additionally, an incident reporting mechanism should be established to capture AI failures or near misses (events where AI systems cause harm, behave unpredictably, or nearly do so), modelled after the ICO’s breach reporting system.
Copyright and Transparency Rules for Generative AI
The current opt-out model for copyrighted content in AI training places a disproportionate burden on individual rights holders and lacks enforceability. The UK should instead adopt a mutually beneficial and enforceable copyright framework, under which the use of copyrighted material in training is licensed, remunerated, and auditable.
At the same time, mandatory transparency requirements should be introduced, obliging AI developers to disclose the types and sources of training data used, including any copyrighted content. Without enforceable transparency, neither accountability nor fair compensation can be effectively ensured.
Public Accountability and Safety Investment
To safeguard ethical AI deployment within public bodies, a dedicated AI ethics panel should be established. This panel would review the design, procurement, and deployment of AI systems, particularly in sensitive areas such as healthcare, policing, welfare, and immigration, assessing risks around bias, surveillance, and accountability.
Alongside this, a National AI Safety Research Fund should formalise and expand current ad-hoc funding efforts into a permanent, statutory programme. This fund would support research on AI alignment, control mechanisms, ethical deployment, and governance, ensuring continuity, scale, and direct coordination with regulatory and public sector priorities.
Conclusion
The current UK stance on AI regulation, defined by delay, deference to innovation, and resistance to statutory clarity, is proving increasingly untenable. As global powers like the EU and the United States solidify their regulatory identities, the UK risks drifting into irrelevance: neither agile enough to lead in innovation nor robust enough to protect rights and safety. If the UK is to reclaim credibility as a leader in trustworthy AI governance, it must move beyond white papers and voluntary principles. The time for “wait and see” has passed. If the UK wishes to lead, it must legislate.
References
(1) ‘Lords Chamber – Hansard – UK Parliament’. 2025. UK Parliament. 26 June 2025. Link
(2) ‘Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators’. n.d. GOV.UK. Accessed 17 June 2025. Link
(3) ‘Government Delays New AI Bill for Six Months’. n.d. Computing.co.uk. Accessed 17 June 2025. Link
(4) Courea, Eleni, and Kiran Stacey. n.d. ‘UK Ministers Delay AI Regulation amid Plans for More “Comprehensive” Bill | Artificial Intelligence (AI) | The Guardian’. The Guardian. Accessed 17 June 2025. Link
(5) Clark, Adam, John Woodhouse, and Sally Lipscombe. 2025. ‘Data (Use and Access) Bill [HL]: Progress of the Bill’, June. Link
(6) ‘Data (Use and Access) Bill [Lords] – Hansard – UK Parliament’. 2025. 17 June 2025. Link
(7) ‘Ministers Block Lords Bid to Make AI Firms Declare Use of Copyrighted Content | Artificial Intelligence (AI) | The Guardian’. n.d. Accessed 17 June 2025. Link
(8) ‘Artificial Intelligence (Regulation) Bill [HL] – Parliamentary Bills – UK Parliament’. n.d. Accessed 17 June 2025. Link
(9) ‘AI Act | Shaping Europe’s Digital Future’. 2025. European Commission. 24 June 2025. Link
(10) ‘What Is the Blueprint for an AI Bill of Rights? | OSTP’. n.d. The White House (blog). Accessed 26 June 2025. Link
(11) ‘A Pro-Innovation Approach to AI Regulation’. n.d. GOV.UK. Accessed 26 June 2025. Link
(12) Bhatti, Ayesha. 2025. ‘Why The Pursuit of Sovereign AI Is Not the Right Call for the UK’. Center for Data Innovation (blog). 28 January 2025. Link
(13) CLTR-admin. 2024. ‘The UK Is Heading in the Right Direction on AI Regulation, but Must Move Faster’. CLTR (blog). 7 February 2024. Link