UK

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.
Continue Reading AI Update: The Future of AI Policy in the UK

On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”).  The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, each of which raises unique considerations for healthcare providers.  It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation.  The areas of healthcare identified in the Briefing as having potential for AI adoption include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.
Continue Reading AI Update: UK Parliament Research Briefing on AI in the UK Healthcare System

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).
Continue Reading UK ICO publishes guidance on Artificial Intelligence

On 23 March 2020, the Department of Health and Social Care (“DHSC”) issued a plea to technology companies to come up with digital support solutions for COVID-19.  The DHSC is making £500,000 available, with funding of up to £25,000 per company.  The challenge, named TechForce19, aims to increase community …
Continue Reading TechForce19: UK Government promises up to £25,000 for innovator companies that develop digital support solutions for COVID-19

Following on from the Evidence Standards Framework for Digital Health Technologies (DHTs) published in December 2018 (the Original Standards, as reported in our previous blog post, here), the UK’s National Institute for Health and Care Excellence (NICE) recently published a newly updated version of the standards (the Updated Standards).
Continue Reading UK’s NICE releases newly updated Digital Health Technologies (DHT) Evidence Standards Framework

On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as being the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.
Continue Reading IoT Update: The UK Government’s Response to Centre for Data Ethics and Innovation Consultation

The UK Information Commissioner’s Office (“ICO”), which enforces data protection legislation in the UK, has ruled that the Royal Free London NHS Foundation Trust (“Royal Free”), which manages a London hospital, failed to comply with the UK Data Protection Act 1998 when it provided 1.6 million patient records to Google DeepMind (“DeepMind”).  The ICO has required the Royal Free to sign an undertaking committing to changes to ensure it acts in line with the UK Data Protection Act.

On September 30, 2015, the Royal Free entered into an agreement with Google UK Limited (an affiliate of DeepMind) under which DeepMind would process approximately 1.6 million partial patient records, containing identifiable information on persons who had presented for treatment in the previous five years, together with data from the Royal Free’s existing electronic records system.  On November 18, 2015, DeepMind began processing patient records for clinical safety testing of a newly developed platform to monitor and detect acute kidney injury, later formalized into a mobile app called ‘Streams’.
Continue Reading ICO Rules UK Hospital-DeepMind Trial Failed to Comply with UK Data Protection Law

The UK Government has opened a consultation, running until September 7, 2016, regarding how UK National Health Service (NHS) patient data should be safeguarded, and how it could be used for purposes other than direct care (e.g. scientific research).

The consultation comes after two parallel-track reviews of information governance and data security arrangements in the NHS found a number of shortcomings, described below.  The Care Quality Commission (CQC) and the National Data Guardian (NDG, led by Dame Fiona Caldicott) made a range of recommendations, including new security standards, stronger inspection and enforcement around security lapses and re-identification of anonymized patient data, and an eight-point process around assuming and respecting patient consent decisions.

Following the public consultation, the new security standards could eventually be required and audited by government inspectors from the CQC, and imposed under revised standard NHS England contract terms.  CQC inspectors could potentially act on tip-offs from NHS Digital (formerly known as the NHS Health and Social Care Information Centre, ‘HSCIC’).  Those tip-offs could be based on low scores obtained by organizations in their annual NHS Information Governance Toolkit (IGT) self-assessments.  The IGT, which the reviewers said should be redesigned, applies both to NHS bodies and their commercial vendors.

The new consent model, meanwhile, could provide more streamlined, system-wide consents for use of patient data for purposes including quality assurance and research.

The CQC and the NDG’s findings and twenty-four recommendations were jointly presented in a covering letter to the UK government, available here, and fuller reports, available here and here (CQC and NDG, respectively).  This post provides a brief summary of their main findings and recommendations.  For the consultation questions themselves, see here.
Continue Reading UK Government Considering New Patient Data Security and Research Consent Standards, Sanctions

May 2015 saw a number of developments in the EU mHealth sector worthy of a brief mention.  The European Commission announced that it would work on new guidance for mHealth apps, despite the European Data Protection Supervisor and the British Standards Institution publishing guidance of their own just weeks earlier.  In parallel, the French data protection authority announced a possible crackdown on mHealth apps’ non-compliance with European data protection legislation.  This post briefly summarizes these developments.
Continue Reading May 2015 EU mHealth Round-Up