When the UK AI Safety Institute (AISI) published its inaugural evaluation report in November 2023, it made a striking admission: no major AI system currently deployed globally had been subjected to sufficiently rigorous safety testing prior to release. For a body established just weeks earlier, in October 2023, under the auspices of the Department for Science, Innovation and Technology, it was a declaration of intent — but also, critics argued, one that arrived conspicuously late.
The broader context is one of accelerating technological disruption. The number of AI-related patent applications filed in the UK rose by 214 per cent between 2018 and 2023, according to the Intellectual Property Office. Meanwhile, global private investment in AI reached an estimated $91.9 billion in 2022, with the UK attracting the third-largest share of that investment globally, behind only the United States and China. Against this backdrop, the relative tardiness of the UK's regulatory response has drawn sustained criticism from academics, civil society organisations, and the technology sector alike — though for markedly different reasons.
The UK government's stated approach — set out in its March 2023 white paper, "A pro-innovation approach to AI regulation" — is deliberately non-legislative. Rather than enacting a comprehensive AI Act akin to the EU's landmark legislation, which entered into force in August 2024, the UK opted to assign oversight responsibilities to existing sectoral regulators: the Financial Conduct Authority, the Information Commissioner's Office, the Competition and Markets Authority, and others. The rationale, articulated repeatedly by ministers, is that sector-specific expertise produces more nuanced and proportionate regulation than any overarching statutory framework could achieve.
This position has attracted fierce opposition from a number of quarters. Professor Amelia Osei, a computational law scholar at the London School of Economics, argues that the multi-regulator model creates dangerous "jurisdictional lacunae" — gaps between regulatory domains through which high-risk AI applications can pass without meaningful scrutiny. "A facial recognition system used in a shopping centre sits at the intersection of data protection law, consumer rights, equalities legislation, and criminal justice policy," she observed in a lecture at King's College London in January. "No single existing regulator owns that problem."
Proponents of the government's approach counter that premature legislation risks calcifying today's technological assumptions into tomorrow's legal constraints. Dr Marcus Webb, director of the Alan Turing Institute's policy programme, contends that "regulatory agility" — the capacity to adapt frameworks in real time as technology evolves — is the defining virtue of the UK's model. "The EU passed legislation in 2024 based on a technology landscape that had already been substantially transformed by large language models," he argued. "By the time that legislation reaches full implementation, it may already be obsolescent."
The debate is further complicated by the geopolitical dimension. The UK's AI Safety Summit, held at Bletchley Park on 1–2 November 2023 — attended by representatives from 28 countries and the EU, as well as executives from major AI companies including OpenAI, Google DeepMind, and Anthropic — produced the Bletchley Declaration, a non-binding statement of intent to collaborate on frontier AI safety. Signatories notably included the United States, China, and the European Union — a rare convergence described by several commentators as "diplomatically significant but substantively thin."
Domestically, the government has committed to publishing a progress report on AI regulation by June 2025. It has also allocated £100 million to establish nine new AI research hubs and announced that AISI would be renamed the AI Security Institute — a rebranding that critics noted was symbolic rather than structural. Whether these measures constitute the foundation of a coherent governance framework, or merely the appearance of one, remains a question that both policymakers and the public are increasingly compelled to confront.
The UK's approach to AI regulation has been deliberately non-legislative, assigning oversight to existing sectoral regulators rather than enacting a comprehensive statute. Critics argue this creates "jurisdictional lacunae": gaps through which high-risk AI applications may pass without scrutiny. The government's preferred term for its own model is "regulatory agility". Meanwhile, the Bletchley Declaration, signed by 28 nations and the EU, was characterised as "diplomatically significant but substantively thin" by several observers.