On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”)* published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”).  The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, each of which raises distinct considerations for healthcare providers.  It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation.  The areas of healthcare identified by the Briefing as having the potential to incorporate AI include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.

The Briefing focuses on key areas of the healthcare system that would be impacted by greater incorporation of AI, including patients and healthcare workers.  According to the Briefing, some stakeholders have raised concerns that the use of AI risks dehumanising the healthcare system by eroding the doctor-patient relationship.  However, others have suggested that using AI for certain tasks, such as administration, would allow healthcare providers to spend more time with patients, and that AI can lead to more personalised healthcare.

The Briefing also considers the legal and ethical challenges posed by AI, including questions of security, privacy, health inequality and legal liability.  Some researchers have found that AI systems may present safety risks in real-world settings by giving dangerous recommendations in new circumstances.  Developing AI requires large amounts of data, raising questions regarding privacy and patients’ willingness to share their data.

The Briefing also highlights the regulatory challenges posed by medical devices that incorporate AI.  In particular, devices that use AI and/or machine learning techniques could create difficulty from a regulatory perspective because such devices continue to learn and optimise even after they are placed on the market.  This raises questions about how such systems could be monitored to ensure they remain safe and effective.

Impact of Brexit

The Briefing also notes the impact of Brexit on regulations in the healthcare space.  Following Brexit, the UK Government has indicated a departure from the EU medical devices rules and has announced that it will not implement the Medical Devices Regulation (EU) 2017/745 into national law when it takes effect on 26 May 2021.  The UK has instead introduced a new national regime that involves the application of a “UKCA mark” to devices placed on the market in Great Britain (England, Scotland and Wales).  The UK also intends to develop future regulations pursuant to the Medicines and Medical Devices Bill 2019-21 and is actively looking at ways to be more responsive to new technologies, including AI.  (See our blog here for more information on the impact of Brexit on medical devices regulation.)

Background on NHSX

The Briefing also notes the progress made by NHSX and the National Institute for Health and Care Excellence (NICE) in publishing assessment criteria for digital technologies, including AI systems.

In 2019, the UK Government established NHSX as a new body responsible for setting policy and best practice around the use of digital technologies in healthcare in England.  NHSX created the NHS AI Lab in October 2019 to bring together government, health and care providers, academics and technology companies to help address healthcare challenges.  The 2019 NHS Long Term Plan identifies the improved use of AI in the healthcare system as a priority, and NHSX has been actively publishing and authoring reports in this space.

NHSX has also recently launched an ‘AI in Health and Care Survey 2021’ to gather AI developers’ perspectives on the progress of AI in the healthcare landscape to date.  NHSX aims to use the results of its survey to better understand the opportunities and risks of AI in the UK healthcare sector.

We anticipate that through the efforts of organisations like NHSX, the UK will position itself as a champion of digital technology in the healthcare sector.  Watch this space for more guidance and policy developments on AI in healthcare.

* POST is a body within the UK Parliament that produces impartial, peer-reviewed briefings, with the aim of making scientific research accessible to the UK Parliament. The briefings, in the form of “POSTnotes” and “POSTbriefs”, focus on topics including health, energy and the environment.

This article was prepared with the help of Alex Paterson, a trainee solicitor in the London office.

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.