“Yet with all technologies, AI is not only about its capabilities and promise but also about how it is used.” (Age of AI, Kissinger, Schmidt, Huttenlocher)
As we look ahead to 2024, the past year looms large in the rear-view mirror, marked as it was by giant steps forward in the capability and availability of AI. Just a year ago, ChatGPT was a fledgling marvel. Today, the integration of generative AI into apps and services shows no signs of slowing down. We also saw an explosion in hype around the promise of AI. The coming year will show whether that hype has peaked.
There is little doubt about the huge potential for generative AI to transform how businesses work. General purpose AI models such as GPT-4 represent a step change in capability and ease of use. What’s more, we already have strong evidence that individuals are very open to using AI, often without question. Every time we ask Alexa or Siri to do something, such as putting on a playlist generated by Spotify, we are engaging with an AI model. As Kissinger et al. note, “…having entered [AI transformation] gradually, we are undergoing it passively, largely unaware of what it has done and is likely to do in the coming years.”
So the critical question for a business becomes: how do we harness these new technologies responsibly and mitigate the inherent risks? There's no universal elixir, no matter what the snake oil peddlers claim. The answer lies in a nuanced understanding of each use case within your business, of your culture and of your tolerance for risk.
As you would expect of a forward-looking business like Frontier, we are already starting to provide answers for ourselves. Treading carefully, we are trialling the use of generative AI to make our desk research broader and more efficient and to improve how we write computer code. The results look promising, but we are still asking ourselves tough questions about our risk appetite and that of our clients.
During 2024 we expect the EU AI Act to enshrine the responsibility for businesses to understand the risks they are taking when they use artificial intelligence. Following intensive negotiations in 2023 to address the rise of general purpose AI models like GPT-4, the Act is clear that there will be obligations on all businesses that deploy AI systems, even if those systems were developed by others. As these obligations will vary according to the risk profile of each deployment, businesses will need a deep understanding of how each specific AI system works in its context.
For a data scientist like me, perched at the intersection of economics and data science, this is a thrilling time. The marriage of these disciplines holds immense potential for unlocking the answers to the intricate questions that the EU legislation will pose. We can help businesses meet their transparency obligations by using causal modelling to develop explainable AI deployments and to communicate AI decisions effectively. Doing so will foster trust with employees, customers and stakeholders. We can also help evaluate the costs, benefits and risks of proposed AI deployments, ensuring they're grounded in a thorough understanding of the AI models' real-world behaviour.
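To make the idea of explainable AI a little more concrete: one of the simplest forms of explanation is decomposing a model's output into per-feature contributions, so a decision can be communicated in plain terms ("existing debt counted most heavily against this application"). The sketch below shows this for a linear scoring model; the model, feature names and weights are entirely hypothetical and chosen only for illustration, not drawn from any real deployment.

```python
# Minimal sketch of one explainability technique: decomposing a linear
# model's score into per-feature contributions. All weights and feature
# names here are invented for illustration.

def explain_prediction(weights, features, bias=0.0):
    """Return the model score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-approval model: positive contributions push towards
# approval, negative ones push against it.
weights = {"income": 0.4, "existing_debt": -0.7, "years_employed": 0.2}
applicant = {"income": 3.0, "existing_debt": 2.0, "years_employed": 5.0}

score, reasons = explain_prediction(weights, applicant, bias=-0.5)
# Report contributions from most to least influential.
for name, value in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
print(f"score: {score:+.2f}")
```

Real deployments use richer techniques (causal models, Shapley-value attributions) on non-linear systems, but the principle is the same: a decision is only communicable if it can be broken down into reasons a stakeholder can inspect.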
The quote by Kissinger et al serves as a powerful reminder that AI is not a magic wand. It is a potent tool and, like any tool, its value depends on the skills of its user. Businesses and organisations need to take a holistic approach to AI to gain a nuanced understanding of the technology. In that way they can realise AI’s transformative potential while navigating the ethical, societal and looming regulatory considerations that come with it.