Guest Column | March 22, 2021

Medtech Key Target Of Upcoming Regulation Of Ethical AI In The EU

By Brian McElligott, Mason Hayes & Curran


On Oct. 20, 2020, the European Parliament approved an initial draft proposal for the regulation of ethical artificial intelligence (AI) with the first official draft expected in late April 2021. The proposal targets high-risk AI in the healthcare and medtech sectors in particular, but also sets standards for all AI products and applications. The scope of application of the proposal is very broad in that it covers all uses of AI products in the EU regardless of the origin or place of establishment of the developer or owner of the AI. It also regulates not only the developers of AI but also the deployers and users of those AI products and applications.

The proposal is expected to be in the form of a regulation, which means that it should be binding in its entirety and directly applicable in Member States. Because of the anticipated ground-breaking nature of this regulation, it is expected to be accompanied by explanatory rules and guidelines.

What Will Be Regulated?

The regulation will likely apply to “artificial intelligence,” “robotics,” and “related technologies,” including software, algorithms, and data used or produced by such technologies, developed, deployed, or used in the EU.

Artificial intelligence, robotics, and related technologies are each defined terms and broadly cover: AI software/hardware systems (artificial intelligence), physical machines with AI capability (robotics), and other technologies such as those capable of detecting biometric, genetic, or other data (related technologies). The healthcare sector is a key target for these proposed laws, and medical devices deploying this technology will be regulated.

High-Risk AI

The EU clearly has the medtech sector in its sights with this proposal, and high-risk AI applications in particular will feel the full force of the regulation. It is expected that healthcare (including the medtech industry) will be designated a high-risk sector and that medical treatments and procedures, including the use of medtech, will be designated high-risk uses or purposes. The practical effect is that medical device operators deploying AI can expect to work against an assumption that their use of AI is high risk and subject to compliance testing.

Notwithstanding the high-risk sectoral and use assumptions, there will still be a possible safe harbor for the use of AI in medtech if it can be demonstrated that such use is not objectively “high risk.” High risk is to be determined by a Member State supervisory authority following a risk assessment based on objective criteria such as the specific use or purpose of the AI, the sector where it is developed, deployed, or used, and the severity of the possible injury or harm caused. So, those wishing to remain outside the supervisory authority’s remit will need to demonstrate, by reference to these broad criteria, that their use of AI in a medtech device cannot objectively be deemed high risk.

If, as seems likely, the medtech owner’s deployment of AI is deemed high risk, then it will need to proceed through a compliance assessment that will include (as a non-exhaustive list):

  • Guaranteeing full human oversight at any time, including in a manner that allows full human control to be regained when needed, including by altering or halting the use of AI in the medical device. If not already part of the medical device, this could amount to an obligation on medical device owners/operators to build remote access and control into the device so that control can be exercised by a human at any point.
  • An assurance of compliance with minimum cybersecurity baselines that are proportionate to the identified risk and prevent technical vulnerabilities from being exploited for malicious or unlawful purposes. Medical device owners/operators will need to demonstrate how resilient their devices are to cyberattacks.
  • Proof that the device will operate in an unbiased manner and will not discriminate on grounds such as race, gender, sexual orientation, pregnancy, membership of a national minority, ethnic or social origin, civil or economic status, or criminal record. For a medical device deployed in identifying cancers, this could mean demonstrating that the training and test data sets were broad enough so as not to give rise to poor outcomes for patients based on, e.g., gender or race (see the illustrative sketch after this list).
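
To make the last point concrete, below is a minimal sketch, in Python, of the kind of subgroup performance audit an operator might run on a held-out test set to support such a non-discrimination demonstration. Everything here is a hypothetical illustration: the record format, the subgroup_audit function, the 0.05 tolerance, and the sample data are assumptions made for the example, not requirements drawn from the proposal.

    # Hypothetical sketch of a subgroup performance audit for a
    # diagnostic AI model. Names, thresholds, and data are illustrative
    # assumptions only, not requirements from the proposed regulation.

    from collections import defaultdict

    def subgroup_audit(records, group_key, max_gap=0.05):
        """Compare the model's sensitivity across demographic subgroups.

        records: dicts with 'label' (1 = disease present), 'prediction'
                 (1 = model flagged disease), and a demographic
                 attribute stored under group_key.
        max_gap: hypothetical internal tolerance for the sensitivity gap
                 between the best- and worst-served subgroup.
        """
        tp = defaultdict(int)  # true positives per subgroup
        fn = defaultdict(int)  # false negatives per subgroup

        for r in records:
            group = r[group_key]
            if r["label"] == 1:
                if r["prediction"] == 1:
                    tp[group] += 1
                else:
                    fn[group] += 1

        # Sensitivity (true positive rate) per subgroup.
        sensitivity = {
            g: tp[g] / (tp[g] + fn[g])
            for g in set(tp) | set(fn)
            if (tp[g] + fn[g]) > 0
        }
        gap = max(sensitivity.values()) - min(sensitivity.values())
        return sensitivity, gap, gap <= max_gap

    # Hypothetical evaluation records from a held-out test set.
    test_records = [
        {"label": 1, "prediction": 1, "sex": "female"},
        {"label": 1, "prediction": 0, "sex": "female"},
        {"label": 1, "prediction": 1, "sex": "male"},
        {"label": 1, "prediction": 1, "sex": "male"},
        {"label": 0, "prediction": 0, "sex": "female"},
    ]

    per_group, gap, ok = subgroup_audit(test_records, "sex")
    print(per_group, f"gap={gap:.2f}", "within tolerance" if ok else "REVIEW")

In practice, a submission to a supervisory authority would pair figures like these with documentation of how the underlying training and test data sets were assembled.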

Risk Assessment And Supervisory Authorities

The proposal envisages mandatory compliance assessments for all medical devices that are found to be high risk and voluntary certificates of ethical compliance for all other AI. The certification process prior to market launch (as per the section above) is to be carried out locally at Member State level by supervisory authorities, an approach very similar to the current supervision of data protection under the GDPR. It is proposed that an overarching group of supervisory authorities could meet at EU level with the Commission to oversee the operation of the certification and monitoring of AI.

Change In Liability Law

This regulation proposal, together with a companion proposal updating the laws on civil liability for the use of AI, proposes a significant change to the current law on product liability in the EU that will affect everyone operating in the medical device supply chain. Currently, manufacturers shoulder almost all of the risk with regard to product liability under EU law. If this proposal is accepted, manufacturers, as well as those deploying AI in medical devices, will be liable to users of the devices for loss, damage, or injury they suffer. This shift in responsibility for risk is seismic and will affect all those operating in the medical device field, including insurers.

Redress

Under the proposal, any natural or legal person will have the right to seek redress for injury or harm caused by operators of high-risk AI in medical devices where it arises from breaches of EU law or of the obligations set out in the regulation.

This early draft of a very important law will prove challenging for those developing and deploying high-risk AI. The proposed introduction of a new market certification regime for high-risk AI applications in technology like medical devices will add to the already significant EU regulatory burden on manufacturers and owners of medical devices. The expected change to the liability regime for products like medical devices deploying AI will be of even greater interest and concern to the medical device sector as a whole: those previously comfortably beyond the reach of mandatory product liability law may now find themselves at the center of it. Early indications and leaks suggest that these key concepts are likely to be retained in the first official draft of the regulation. Watch this space for updates.

About The Author:

Brian McElligott is a partner at Mason Hayes & Curran. He advises clients across a range of sectors, including pharma, medtech, and eHealth, on the formulation and implementation of effective IP development and protection strategies. With a particular interest in artificial intelligence, he is spearheading Mason Hayes & Curran’s innovation in this space and is also a member of the EU AI Alliance program and Ireland’s NSAI Top Team. Brian is also a registered Irish and European Union trademark and design agent, a member of INTA Leadership, and former chair of the Licensing Executives Society of Ireland and the Irish branch of the Anti-Counterfeiting Group.