2024 is going to be a significant year around the world on many fronts, including elections in both the UK and the US. Governments will need to set out their policy priorities and find efficient ways to achieve them.
This will involve weighing the costs and benefits of action across sectors of public interest, from healthcare to innovation, transport to education and more. Those discussions will build on the debates and actions of 2023, with the aim of taking them further in 2024.
One sector that stood out for the level of debate it generated in 2023 is artificial intelligence (AI), particularly around the benefits and risks it can create and the overall role it could (or should) play in society.
2023 was a year of action by the private sector and discussion of principles by governments; in 2024, governments and regulators will need to move on to specific actions that match private sector activity and help build trust.
2023: The year of hype and fear of AI
We started 2023 with excitement about AI already brewing after the public release of ChatGPT in late 2022. Early in the year, sensational articles estimated that ChatGPT had reached over 100m users in its first two months, a record pace.
Aside from becoming firmly planted in the public consciousness, AI also saw further technological advances. Developments in deep learning have brought us closer to human-like robots, and advances in computing technology – the main input needed for AI – have led to new solutions that might address the GPU shortages seen last year. Innovation and excitement seemed to be everywhere.
But with the hype also came fear. The combination of the technology's rapid uptake, its opaque nature, the lack of regulation – and perhaps a few films from the 90s – raised acute concerns among the public.
Statements and actions from leaders in the field did not help. Elon Musk called for a pause in the "out-of-control" development of AI, while Geoffrey Hinton, a pioneer of deep learning, resigned from Google so he could speak freely about the risks of AI.
These two currents polarised attitudes towards AI – on the one hand, excitement about its potential power to help humanity progress; on the other, fear of the risks it can create. The split is captured in the words of the year chosen by two dictionaries: Collins Dictionary chose 'AI', reflecting the main topic of discussion in 2023, while the Cambridge Dictionary went with 'hallucinate', referring to generative AI's tendency to make things up.
2024: Building trust in AI on the foundations laid in 2023
In the latter part of 2023, we saw some steps from policymakers across the globe towards AI regulation.
Last March the UK government published a white paper titled 'A pro-innovation approach to AI regulation', setting out its intended approach to regulating AI. So far, no timelines have been given for further regulation and legislation, creating uncertainty in a fast-evolving market.
A final draft of the European Union's AI Act, under negotiation since April 2021, was agreed in early December and should be passed into law in the next few months.
In the US, the Biden administration issued an executive order on Safe, Secure and Trustworthy Artificial Intelligence in October 2023, among whose goals is to support competition and innovation in AI.
Alongside the regulatory actions, the UK hosted the inaugural international AI Safety Summit last November at Bletchley Park, the fabled home of wartime code-breakers. In the resulting Bletchley Declaration, 28 countries agreed to coordinate efforts to manage AI risks. In addition, the UK launched its AI Safety Institute, a global hub that aims to 'advance the world's knowledge of AI safety by carefully examining, evaluating, and testing new types of AI'.
All these policies were geared towards setting up a framework for AI regulation, putting guardrails around the technology. It would not be surprising if the focus of policymakers in 2024 turned to ensuring that the public trusts AI.
Building trust in AI (while ensuring it is trustworthy) will be critical in 2024 and beyond. If the UK wants to reap the benefits of AI, people must actually use it; and for that to happen, they need to trust it. But, as a Cambridge University report states, this trust 'can only be ensured through meaningful legislation and regulation'. Clear AI regulation, then, might be at the top of the government's priorities in the upcoming year.
The UK's AI regulatory road is long, winding and still quite uncertain. Would a re-elected Tory government proceed with more prescriptive AI regulation than was set out in the pro-innovation white paper? Would a Labour-led government take a completely different approach? Either way, regulation is a slow policy wheel that might take longer than a year to turn in the UK (the EU's AI Act has been under negotiation since April 2021 and is still not law).
While we wait for regulations to be set, standards in AI might be the common focus across most jurisdictions this year. The UK has launched an AI Standards Hub – an institute it hopes will help shape global AI standards – and the EU AI Act is also due to move to the standards harmonisation stage this year.
Creating global standards can provide an interim security blanket. If standards are widely accepted, they can create a common language between jurisdictions about AI security, increasing global trust and interoperability of AI tools. That, in turn, would alleviate some of the safety concerns around the technology and encourage a higher uptake of AI by businesses and consumers.
Trade-offs and balances
Most of the discussion about making AI safe in the past year concerned principles. But as legislative efforts move up a gear towards practical measures to contain AI risk, policymakers need to be clear about the many balances and trade-offs that regulating such a game-changing technology entails: between mitigating risk and hindering innovation; between cost and effectiveness; between users' privacy and the need for data sharing; between different regulatory regimes; and more.
Striking those balances makes the respective weightings attached to the costs and benefits of any given regulation especially important.
Frontier's work two years ago for the Department for Digital, Culture, Media and Sport (DCMS) showed that detailed blanket regulation, like the EU AI Act, might harm innovation, especially among SMEs.
But tech companies, be they giants or start-ups, are not the only ones that might be affected. Generative AI will sweep across the entire economy, and many non-tech companies may soon need to assess their compliance with the new rules.
Breathless hyperbole or a prescient warning? Opinions will differ, but we all need to think hard about the regulatory trade-offs and balances required to end up with policies that maximise the benefits of AI while minimising the risks.