In recent years, Artificial Intelligence (AI) has emerged as a transformative technology with vast potential to reshape various sectors, from finance to consumer goods.
However, healthcare may be where we feel AI's impact most profoundly. The potential for AI to drive improvements in diagnosis, treatment, and overall efficiency in healthcare is significant. Yet the path to widespread AI adoption is strewn with challenges, particularly around trust and safety, regulation, and technical barriers. Last Thursday, I took part in the Westminster eForum – AI in Healthcare talk, on a panel that explored the benefits and risks of adopting AI in healthcare and examined the critical role government intervention plays in highlighting the benefits and mitigating the risks.
How can AI transform healthcare?
AI's potential is particularly evident in healthcare, where its capacity to quickly and efficiently process and analyse vast datasets offers unique advantages. By identifying patterns that may not be apparent to human clinicians, AI can support more accurate and timely diagnoses, help customise treatment plans and even speed up new drug discoveries. For patients, these advancements can translate into earlier disease detection, better treatment outcomes, and improved quality of life.
One example of AI's transformative potential in healthcare, which has gained international recognition recently, is AlphaFold. Developed by Demis Hassabis and John Jumper at Google DeepMind, AlphaFold can predict protein structures with astonishing accuracy and speed, earning the pair a share of the 2024 Nobel Prize in Chemistry alongside David Baker, who was recognised for computational protein design. This tool has transformed protein structure prediction, a critical step in developing new drugs and treatments. Such applications demonstrate AI's ability to drive innovation and deliver life-saving advancements.
AI can also significantly ease pressure on increasingly strained national health systems like the NHS. As AI takes on more repetitive or data-intensive tasks, healthcare providers could have more time and resources to direct toward patient care, leading to a healthier population and a more efficient healthcare system.
So why is healthcare lagging in AI adoption?
Despite the potential benefits, healthcare lags behind other sectors in AI adoption. This disparity is notable even in the UK, one of the most advanced OECD countries in terms of AI maturity within the healthcare sector. Even so, a 2023 DCMS report ranks healthcare only fifth in sector AI maturity, behind finance and consumer goods.
For us economists, the discrepancy between the development of AI in healthcare and the rate of adoption signals that market barriers—such as regulatory hurdles, skill shortages, and trust issues—might be slowing progress. Government intervention may help address these obstacles.
What are the barriers to AI adoption?
When it comes to healthcare, several challenges affect the successful adoption of AI:
Skills Gaps
While AI development is advancing rapidly, healthcare providers often lack the in-house expertise and infrastructure to deploy it. To implement AI effectively, healthcare organisations need improved IT infrastructure, in-house data scientists, and AI specialists. Currently, evidence on the specific skill gaps that hinder efficient AI deployment is sparse.
Additionally, as AI technology evolves, so do the required skills, making it difficult for healthcare organisations to keep up. Conducting further research into the exact skills needed in healthcare AI deployment would enable more targeted interventions, helping overcome the barriers in this critical sector.
Trust and Safety Concerns
Trust is another major obstacle to AI adoption in healthcare. While there is growing public and professional interest in AI's potential, healthcare providers and end users are concerned about the risks of deploying it in settings where any mistake can have devastating effects. In healthcare, the stakes are high: a misdiagnosis or incorrect recommendation from an AI tool could cause serious harm, and biases in the datasets used to train AI could exacerbate health disparities between demographic groups.
Increasing trust in AI systems is crucial. A recent survey commissioned by the Health Foundation highlights the complexities surrounding trust in AI. While 54% of the UK public and 76% of NHS staff support using AI in patient care, there are still concerns about AI's ability to deliver safe, unbiased, and effective outcomes. To gain widespread acceptance, new AI systems must demonstrate high reliability, safety and fairness, and the public must feel confident that these technologies will improve their health outcomes.
Regulatory Clarity and Liability Issues
Another significant barrier to AI adoption in healthcare is regulatory uncertainty. It is often unclear who is responsible if an AI system "makes a mistake". Private healthcare providers, in particular, may be hesitant to implement AI due to potential liability issues, fearing reputational damage and financial repercussions if something goes wrong.
Clear regulatory frameworks can help alleviate these concerns by establishing guidelines on AI use in healthcare and clarifying liability in cases of adverse outcomes. These frameworks could give healthcare providers the confidence to adopt AI tools, knowing that regulations clearly define the limits of their responsibilities.
What is the government's role in supporting AI adoption?
The benefits of AI in healthcare are too substantial to ignore, but achieving these benefits requires a proactive and strategic approach. Government intervention is paramount in mitigating the risks and concerns associated with AI adoption.
So, what can policymakers do?
- Establish Standards and Assurances: Setting clear standards for AI in healthcare can help ensure safety, efficacy, and ethical integrity. Progress has already been made in the UK with the publication of BS 30440, a validation framework for the use of AI within healthcare. However, international standards and testing of the standard's effectiveness are still needed. Assurance frameworks that define AI requirements, including bias testing and safety measures, would allow healthcare providers to procure AI tools with greater confidence, knowing that these tools meet rigorous standards.
- Provide Clarity about Legal Responsibility for Adverse Outcomes: Clarifying who carries legal responsibility for adverse AI outcomes helps businesses adopting AI better assess the expected costs of deploying the technology. Guidelines defining the chain of accountability will help healthcare providers estimate the value of the risks they might face and manage those risks effectively. A clear framework for assessing and quantifying those risks will allow providers to weigh the costs of adoption against the benefits. It will also reduce the overall risk, making adoption far more attractive for healthcare providers.
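The cost-to-benefit reasoning above can be made concrete with a simple expected-value calculation. This is a minimal illustrative sketch: all figures are hypothetical placeholders chosen to show the mechanism, not estimates from the article or any real provider.

```python
# Illustrative sketch: how clearer liability rules can flip the expected
# cost-benefit calculation for adopting an AI tool.
# All numbers below are hypothetical placeholders for illustration only.

def expected_net_benefit(annual_benefit, p_adverse_event, liability_cost):
    """Expected annual net benefit = benefit minus expected liability cost."""
    return annual_benefit - p_adverse_event * liability_cost

# Without regulatory clarity, a provider must budget for worst-case liability.
unclear = expected_net_benefit(
    annual_benefit=500_000,     # e.g. staff time saved per year
    p_adverse_event=0.02,       # assumed chance of an adverse AI outcome per year
    liability_cost=30_000_000,  # worst case: provider bears full liability
)

# With a clear framework apportioning or capping responsibility,
# the same tool can look worthwhile.
clear = expected_net_benefit(
    annual_benefit=500_000,
    p_adverse_event=0.02,
    liability_cost=5_000_000,   # capped/apportioned liability
)

print(unclear)  # negative under worst-case liability assumptions
print(clear)    # positive once liability is clearly bounded
```

The point of the sketch is that nothing about the tool itself changes between the two cases; only the provider's ability to bound its downside does, which is exactly what clear regulatory guidelines deliver.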
- Support Skills Development and Public Education: Investments in AI skills training for healthcare professionals would help address skills gaps, and public education campaigns can set realistic expectations for AI in healthcare. Educating patients on AI's role in healthcare would improve public confidence, ensuring they understand the benefits and limitations of these new technologies.
Conclusion
AI has immense potential to improve healthcare, offering advancements in diagnosis, treatment, and operational efficiency. However, achieving these benefits requires a strategic approach to address the sector's unique challenges, from skills shortages to safety concerns. The government can help overcome these barriers by setting the stage for a safer, more efficient, and more equitable healthcare system for us all, empowered by AI.