This is the fifth of our video posts on 10 questions that can help lawyers contribute to the digital health ideation process. Today’s video explores the question: who will pay for the offering?
The EU’s regulatory rules for medical devices are due to change on 26 May 2020, when the new Medical Device Regulation (“MDR”) comes into effect. The regime for in vitro diagnostic devices will change two years later, on 26 May 2022, when the In Vitro Diagnostic Devices Regulation (“IVDR”) will apply.
In advance of these changes, the EU Medical Device Coordination Group (“MDCG”) has recently published guidance on the Qualification and Classification of Software in the MDR and IVDR (the “Guidance”).
The aim of the Guidance is to assist manufacturers with interpreting the new Regulations to assess whether their software meets the definition of a medical device or an in vitro diagnostic device (i.e., “qualification”); and if so, what regulatory class the software would fall under (i.e., “classification”).
The MDCG is a coordination group established under Article 103 of the MDR, comprising up to two medical device experts from each EU Member State. Its key functions include contributing to the development of guidance to ensure effective and harmonized implementation of the EU’s new medical device rules. The Guidance is not legally binding nor does it necessarily reflect the official position of the European Commission. However, given the MDCG’s important role in the regulatory landscape, the Guidance is likely to be highly persuasive.
This is the fourth of our video posts on 10 questions that can help lawyers contribute to the digital health ideation process. Today’s video explores the question: what data will be needed to substantiate the offering?
On September 26, 2019, the FDA issued two revised guidance documents addressing its evolving approach to the regulation of digital health technologies. These guidances primarily describe when digital health solutions will or will not be actively regulated by FDA as a medical device. In parallel, FDA also updated four previously final guidance documents to ensure alignment with the new approaches being adopted by the Agency.
As background, FDA issued draft guidance documents in December 2017 that sought to implement section 520(o)(1) of the Federal Food, Drug, and Cosmetic Act (“FDCA”), which was enacted by Congress in the 21st Century Cures Act of 2016 (the “Cures Act”). Those guidance documents raised a number of issues that we discussed in this previous alert.
After receiving comments from stakeholders, the Agency responded by issuing: (i) a revised draft guidance document for clinical decision support (CDS) software (“Clinical and Patient Decision Support Software” or the “CDS Draft Guidance”) and (ii) a final guidance document for other software functions exempted by the Cures Act (“Changes to Existing Medical Software Policies Resulting from Section 3060 of the 21st Century Cures Act” or the “Software Policies Guidance”).
Here are key takeaways from FDA’s newly issued guidance:
This is the third of our video posts on 10 questions that can help lawyers contribute to the digital health ideation process. Today’s video explores the question: who will provide the data used in the offering?
This is the second of our video posts on 10 questions that can help lawyers contribute to the digital health ideation process. Today’s video explores the question: who will provide the various components of the offering?
On 19 September 2019, the European Parliamentary Research Service (“EPRS”), the European Parliament’s in-house research service, released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI. Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue. The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, set common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.
The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.
Our clients increasingly apply agile product and business development methodologies when they are developing digital health solutions. “Ideation” is part of that process, involving the rapid identification and creation of ideas for digital health solutions, which are then prototyped and tested. Covington has created a Top 10 Questions for Ideation of Digital Health Solutions that can help lawyers contribute to the digital health ideation process.
In today’s video post we discuss intended use of the digital health solution and how lawyers can play a key role in discussing this topic. Over the next nine weeks, we will post a video explaining each of our 10 questions.
On 13 August 2019, the European Commission opened a call for expressions of interest to relaunch the eHealth Stakeholder Group with a view to supporting the “digital transformation of healthcare in the EU”. The eHealth Stakeholder Group was first launched in 2012 and, in its first iteration (between 2012 and 2015), contributed to the development of the Digital Agenda for Europe on eHealth and the eHealth Action Plan. In 2016, the Commission relaunched the Stakeholder Group, and between 2016 and 2018, the group assisted with the Digital Single Market Strategy and the eHealth Action Plan 2012-2020.
The Commission is now seeking representatives of European umbrella organisations active in the eHealth sector to relaunch the stakeholder group for a term of three years. Selected stakeholders will be expected to provide advice and expertise contributing to policy development, in particular in relation to the following areas:
- Health data.
- Digital health services.
- Health data protection and privacy issues.
- Cybersecurity for health and care data.
- Digital tools for citizen empowerment and person-centred care.
- Artificial intelligence and health.
- Other cross-cutting aspects linked to the digital transformation of health and care, such as financing and investment proposals and enabling technologies.
The group will also engage with, and seek input from, representatives and organisations across society, including academics, healthcare professionals, patient groups and the tech industry.
The call is open until 27 September 2019, and the selection criteria can be viewed on the Commission’s website here.
On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”). The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions. The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.
The ICO invites organizations with experience of considering these complex issues to provide their views. This recent blog post on trade-offs forms part of the ICO’s ongoing Call for Input on developing a new framework for auditing AI. See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.