The UK’s AI regulation opportunity

With the US taking a hands-off approach to AI regulation and the EU setting tight rules, the UK has an opportunity: to chart a third way. This, writes Frontier Chair Sharon White, could help the country reap the benefits of innovation while staying alive to the risks.

In 1997 Tony Blair described the UK as a bridge between Europe and America. The UK did not have to choose between a close relationship with either – it could be a partner of both. 

There was a sense of an economic ‘third way’, between the unfettered free market of the US and the social protection model of Europe. The UK’s path was pragmatic: a flexible labour market, openness to business and proportionate regulation.  

The UK now has the opportunity to take a similar path on something newly vital to the global economy: AI regulation. 

The growing role of AI regulation 

The UK is already the third largest recipient of AI investment, behind the US and China. And while the full impact of the AI revolution is still to reveal itself, it’s already clear the effects will be profound.  

These effects could be positive, but the risks are becoming clear too, from the use of AI in warfare, to the supercharging of cyber-attacks and the spread of false, abusive and sexualised imagery.  

Mustafa Suleyman, CEO of Microsoft AI, talked in his book The Coming Wave about the need for “containment strategies” to balance “immense prosperity with existential risks”.   

The UK has the chance to walk this tightrope – to take a third way that avoids the lack of regulation in the US on one side and the tight strictures of the EU on the other.  

The US approach: act now, regulate later 

The US has taken a hands-off approach to AI regulation, choosing to drive hard on innovation without imposing rules that might discourage investment. This fits with the country’s general approach of regulating in response to litigation rather than proactively. 

While there has been more of a push on regulation at the state level, the relative absence of federal rules has helped the US become the world leader in AI – though China is catching up fast and the Middle East is investing heavily.  

But in AI, lack of regulation comes with greater risks than in other sectors. The US approach has led to rapid growth, but critical voices suggest that “only regulation and a pervasive culture of safety can prevent serious harm”, as the Berkeley computer scientist Stuart Russell said in testimony to the US Senate. 

The EU approach: set the rules early 

Europe by contrast has gone early on regulation. It has set a number of rules, which vary with the level of perceived risk.  

Cognitive behavioural manipulation is banned outright. The appearance of chatbots and deepfakes has to be made clear to users when they encounter them. AI-powered video games, on the other hand, aren’t regulated.  

Special rules apply to foundation models, like ChatGPT or Gemini. Strictness varies according to computing power, but in effect, all models need to be registered with the EU in order to operate within it. 

Criticism of EU regulation

But if the EU intended to set the global rules for AI and encourage the rest of the world to follow – as it did for privacy, with GDPR – the plan has failed.  

The EU has faced significant criticism and is preparing to water down some of its regulation. Transparency rules will be limited to the highest-power models, there is now a narrower definition of ‘high risk’, and enforcement against high-risk systems that don’t comply has been pushed back by two years.  

A third way? The UK’s opportunity 

The opposing positions of the US and EU leave an opportunity for the UK. It can take a more measured approach: one that promotes innovation and doesn’t rush to legislation, but remains vigilant on safety and security, especially for children.  

Given how long it takes to legislate for new rules, it is fiendishly difficult for regulation to keep up with a technology moving as quickly as AI.  

But there are signs that the UK is successfully traversing a middle path by seeking to ensure AI follows some important principles: security, transparency, fairness and accountability.   

The country has also shown it is prepared to legislate where there’s clear evidence of harm, as in the case of GrokAI and the creation of sexual imagery.  

So as US and EU approaches diverge, the UK would be wise to continue down this middle path. Such an approach can enable the country to reap the benefits of AI innovation while staying alive to the risks.