News English · Level 4 of 4 · Advanced

News English:
C1 Advanced

Engage with a sophisticated editorial-style news analysis piece on artificial intelligence regulation in the UK. Complete 25 demanding exercises across all six question types — testing precision, inference, vocabulary, and discourse understanding.

💎 Advanced · 📰 Technology & Law · 25 Questions · 6 Exercise Types

What is CEFR C1?

At C1 (Advanced), learners can understand a wide range of demanding, longer texts and recognise implicit meaning, nuance, and authorial stance. News articles at this level include complex syntax, nominalisations, technical vocabulary, hedged language, rhetorical devices, and intertextual references. Learners are expected to interpret attitude and register, identify logical fallacies, evaluate arguments, and understand how language choices shape meaning.

Complex syntax · Nominalisations · Hedged language · Authorial stance · Rhetorical devices · Implicit meaning
📰 News Analysis · C1 · Advanced
Britain's Race to Regulate Artificial Intelligence: Too Little, Too Late?

When the UK Artificial Intelligence Safety Institute (AISI) published its inaugural evaluation report in November 2023, it made a striking admission: no major AI system currently deployed globally had been subjected to sufficiently rigorous safety testing prior to release. For a body established just weeks earlier, in October 2023, under the auspices of the Department for Science, Innovation and Technology, it was a declaration of intent — but also, critics argued, one that arrived conspicuously late.

The broader context is one of accelerating technological disruption. The number of AI-related patent applications filed in the UK rose by 214 percent between 2018 and 2023, according to the Intellectual Property Office. Meanwhile, global investment in AI reached an estimated $91.9 billion in 2022, with the UK attracting the third-largest share of that investment among OECD nations, behind only the United States and China. Against this backdrop, the relative tardiness of the UK's regulatory response has drawn sustained criticism from academics, civil society organisations, and the technology sector alike — though for markedly different reasons.

"The fundamental question is not whether AI should be regulated, but whether democratic institutions are capable of regulating it before it regulates us." — Professor Amelia Osei, LSE

The UK government's stated approach — set out in its March 2023 white paper, "A pro-innovation approach to AI regulation" — is deliberately non-legislative. Rather than enacting a comprehensive AI Act akin to the EU's landmark legislation, which entered into force in August 2024, the UK opted to assign oversight responsibilities to existing sectoral regulators: the Financial Conduct Authority, the Information Commissioner's Office, the Competition and Markets Authority, and others. The rationale, articulated by ministers with some persistence, is that sector-specific expertise produces more nuanced and proportionate regulation than any overarching statutory framework could achieve.

This position has attracted fierce opposition from a number of quarters. Professor Amelia Osei, a computational law scholar at the London School of Economics, argues that the multi-regulator model creates dangerous "jurisdictional lacunae" — gaps between regulatory domains through which high-risk AI applications can pass without meaningful scrutiny. "A facial recognition system used in a shopping centre sits at the intersection of data protection law, consumer rights, equalities legislation, and criminal justice policy," she observed in a lecture at King's College London in January. "No single existing regulator owns that problem."

Proponents of the government's approach counter that premature legislation risks calcifying today's technological assumptions into tomorrow's legal constraints. Dr Marcus Webb, director of the Alan Turing Institute's policy programme, contends that "regulatory agility" — the capacity to adapt frameworks in real time as technology evolves — is the defining virtue of the UK's model. "The EU passed legislation in 2024 based on a technology landscape that had already been substantially transformed by large language models," he argued. "By the time that legislation reaches full implementation, it may already be obsolescent."

The debate is further complicated by the geopolitical dimension. The UK's AI Safety Summit, held at Bletchley Park on 1–2 November 2023 — attended by representatives from 28 countries and the EU, as well as executives from major AI companies including OpenAI, Google DeepMind, and Anthropic — produced the Bletchley Declaration, a non-binding statement of intent to collaborate on frontier AI safety. Signatories notably included the United States, China, and the European Union — a rare convergence described by several commentators as "diplomatically significant but substantively thin."

Domestically, the government has committed to publishing a progress report on AI regulation by June 2025. It has also allocated £100 million to establish nine new AI Research Resource centres and announced that AISI would be renamed the AI Security Institute — a rebranding that critics noted was symbolic rather than structural. Whether these measures constitute the foundation of a coherent governance framework, or merely the appearance of one, remains a question that both policymakers and the public are increasingly compelled to confront.

Key Terms: auspices (support or protection of an organisation) · lacunae (gaps or missing parts in a system) · calcify (to make something fixed and unable to change) · obsolescent (becoming outdated) · non-binding (not legally required to be followed) · jurisdictional (relating to the authority or area of responsibility of a legal body)
A · MCQ
Multiple Choice Questions
Questions 1–5 · Choose the most accurate answer based on the article
Question 1
When was the UK AI Safety Institute established?
A. March 2023
B. October 2023
C. November 2023
D. August 2024
Question 2
By what percentage did AI-related patent applications in the UK increase between 2018 and 2023?
A. 91.9%
B. 114%
C. 214%
D. 300%
Question 3
What does Professor Osei mean by "jurisdictional lacunae"?
A. Areas where AI regulation is too strict.
B. Legal gaps between regulatory domains through which risky AI applications can pass unexamined.
C. Countries that have not signed international AI agreements.
D. Technical flaws within AI systems that cause harm.
Question 4
What was the Bletchley Declaration described as by commentators?
A. A legally binding international AI treaty.
B. Diplomatically significant but substantively thin.
C. A detailed technical framework for AI safety testing.
D. An embarrassing failure of UK diplomacy.
Question 5
What does Dr Marcus Webb argue is the defining virtue of the UK's regulatory model?
A. Its comprehensive statutory framework covering all AI applications.
B. Regulatory agility — the capacity to adapt frameworks as technology evolves.
C. Its alignment with the EU AI Act.
D. The speed with which the legislation was passed.
B · True/False
True or False?
Questions 6–10 · Requires close reading — some statements paraphrase rather than quote directly
Question 6
The UK government chose to create a single comprehensive AI Act modelled on the EU's approach.
Question 7
Global investment in AI reached approximately $91.9 billion in 2022.
Question 8
The AI Safety Summit at Bletchley Park was attended by representatives from 28 countries.
Question 9
The article presents the UK's regulatory approach as clearly superior to the EU's model.
Question 10
The government allocated £100 million to establish nine new AI Research Resource centres.
C · Fill in the Blank
Fill in the Blank
Questions 11–15 · Use the precise word, figure, or term from the article
Question 11
The UK government's white paper on AI regulation was published in __________.
Question 12
The EU's AI legislation entered into force in __________.
Question 13
Professor Amelia Osei is a computational law scholar at the __________.
Question 14
The AI Safety Summit was held at __________ on 1–2 November 2023.
Question 15
The government committed to publishing a progress report on AI regulation by __________.
D · Sentence Completion
Complete the Sentence
Questions 16–18 · Requires understanding of argument and implication
Question 16
Dr Webb argues that the EU's AI Act may already be "obsolescent" when fully implemented because…
A. …it was drafted by politicians rather than technologists.
B. …it was based on a technology landscape substantially transformed by large language models before it was enacted.
C. …the UK has already surpassed the EU in AI regulation.
D. …the EU failed to consult AI companies during its drafting.
Question 17
The article's headline — "Too Little, Too Late?" — suggests the author's stance is…
A. …unreservedly supportive of the government's approach.
B. …sceptical about whether the UK's regulatory efforts are adequate and timely.
C. …indifferent to the question of AI regulation.
D. …calling for immediate legislative action identical to the EU model.
Question 18
The renaming of AISI to the "AI Security Institute" was criticised because…
A. …the new name was considered confusing for the public.
B. …it was considered symbolic rather than structural — a change in name without a change in substance.
C. …it removed the word "Safety" from the organisation's title.
D. …it implied the organisation had failed its original safety mission.
E · Cloze
Cloze Exercise
Questions 19–22 · Choose the word that best fits the register and meaning of the passage

The UK's approach to AI regulation has been __________, assigning oversight to existing sectoral regulators rather than enacting a comprehensive statute. Critics argue this creates __________ — gaps through which high-risk AI applications may pass without scrutiny. The government's preferred term for its own model is __________. Meanwhile, the Bletchley Declaration, signed by 28 nations, was characterised as __________ by several observers.

F · Scrambled Sentences
Unscramble the Sentence
Questions 23–25 · These sentences use complex structures from the article — click words in the correct order
Question 23
Rearrange: no / major / AI / system / had / been / subjected / to / rigorous / safety / testing
Question 24
Rearrange: the / UK / attracted / the / third-largest / share / of / global / AI / investment
Question 25
Rearrange: whether / these / measures / constitute / a / coherent / governance / framework / remains / unclear