The Use of AI in Healthcare: A Liability Issue

ADVANTAGES OF ARTIFICIAL INTELLIGENCE APPLIED TO HEALTHCARE

Attempts to create mechanisms capable of reaching a superior degree of excellence, immune to human error, miscalculation, and modern distractions, have resulted in automated, robotic, and intelligent systems that promise to change the world as we know it.

Artificial Intelligence (AI) has already proven its usefulness to the medical community thanks to its many advantages: improving the quality of life of dependent and elderly people, offering precise and fast diagnoses, simplifying research on certain diseases and the development of new drugs, and improving the control and monitoring of chronic patients through electronic and wearable devices.

For that reason, predictions for the future point to a greater presence of robotic assistance in medical areas such as surgery.

The creation of surgical robots has given rise to numerous scenarios that raise questions and issues never encountered before, requiring a specifically adapted approach.

MEDICAL LIABILITY IN THE USE OF ARTIFICIAL INTELLIGENCE

Liability for the acts of physicians has been imposed since the most remote times, with punishments, including the possibility of amputation of the hands, for surgeons who acted carelessly (Article 218 of the Code of Hammurabi). Since then, medical liability has undergone continuous transformation.

To avoid medical malpractice liability, physicians must provide care considering the available resources. This situation becomes more problematic when AI is involved. Additionally, it is difficult to define roles and responsibilities due to the multiplicity of actors involved in the process of medical AI. This lack of definition can leave physicians and other healthcare professionals in a particularly vulnerable position.

Responsibility is a demanding issue, especially in situations where an AI-based healthcare tool installed in a clinical setting fails or produces unexpected results. Mechanisms are needed to effectively assign responsibility to all actors in the AI workflow, thereby providing incentives to apply every possible measure to minimise errors and harm to the patient. These expectations are already a vital part of the development, evaluation and commercialisation of medicines, vaccines and medical equipment, and need to be extended to future medical AI products.

Moreover, medical professionals are usually under a regulatory duty to account for their actions, while AI developers usually work under their companies’ ethical codes. Consequently, for medical professionals, failing to account for their actions could mean losing their licence to practise medicine. Under current practice, even if an AI manufacturer is found responsible for an error, it is difficult to place blame on one specific person, since numerous developers and researchers work together on any given AI system. In addition, the ethical codes and standards of accountability that many private entities rely on have often been criticised for being vague and difficult to translate into enforceable practice.

Challenges in applying current law to AI applications in medicine include the multi-actor problem in medical AI, which makes it difficult to identify responsibilities among the players involved (e.g. AI developers, physicians, patients) or even to determine the exact cause of an AI-related medical error, which may lie in the AI algorithm, the data used to train it, or its incorrect use and interpretation in clinical practice. Another issue is the variety of governance frameworks and the lack of unified ethical and legal standards across AI industries.

EUROPEAN PERSPECTIVE

Many instruments of EU law are drafted in a technology-neutral manner and will generally also apply to AI, such as the General Data Protection Regulation. In recent years, we have increasingly seen proposals directed specifically at AI systems. Most prominently, the AI Act, proposed in April 2021, is currently being negotiated in the EU Council and Parliament and will probably be finalised in 2024. Furthermore, instruments addressed to online platforms, such as the Digital Markets Act or the Digital Services Act, also contain crucial constraints on and provisions for AI models. Nonetheless, what has been missing so far is an adaptation of the civil liability framework to AI.

With that in mind, in September 2022, the European Commission (EC) advanced a proposal outlining the European approach to AI liability – the AI Liability Directive. The Directive guarantees that victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general. Additionally, it reduces the legal uncertainty of businesses developing or using AI in terms of their possible liability exposure and prevents the emergence of fragmented AI-specific adaptations of national civil liability rules.

The European Commission takes a comprehensive approach by proposing adaptations to the producer’s liability for defective products under the Product Liability Directive (PLD), which covers the producer’s no-fault liability for defective products, leading to compensation for certain types of damage suffered by individuals. The AI Liability Directive covers national liability claims based on the fault of any person, with a view to compensating any type of damage and any type of victim. It is important to note that the existing rules of the PLD need to be updated to fit the digital age. For that reason, the European Commission published a proposal for revising the PLD. Together, these Directives will promote trust in AI by ensuring that victims are effectively compensated if damage occurs.

The AI Liability Directive aims to provide an adequate basis for claiming compensation in connection with any fault consisting in non-compliance with a duty of care under EU or national law. Therefore, Article 4(1) lays down a targeted rebuttable presumption of causality regarding the causal link between the non-compliance and the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the relevant damage, a link that can otherwise be challenging to prove. Paragraphs (2) and (3) distinguish between claims brought against the provider of a high-risk AI system, or a person subject to the provider’s obligations under the AI Act, and claims brought against the user of such systems.

In the same Article, paragraph (4) establishes an exception to the presumption of causality where the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link. This can incentivise defendants to comply with their obligations, with the measures laid down by the AI Act to guarantee a high level of transparency of AI, or with documentation and record-keeping requirements.

Paragraph (5) sets a condition for the applicability of the presumption of causality to non-high-risk AI systems: the court must determine that it is excessively difficult for the claimant to prove the causal link. Such difficulty must be assessed in light of the characteristics of certain AI systems that make it hard in practice to explain their inner functioning, which negatively affects the claimant’s ability to prove the causal link between the defendant’s fault and the AI output.

Paragraph (6), which applies only to defendants that use AI in a non-professional activity, provides that the presumption of causality should apply only if the defendant has materially interfered with the conditions of operation of the AI system, or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so. This is justified by the need to balance the interests of victims and non-professional users by exempting from the presumption of causality those cases where such users do not add risk through their behaviour. Lastly, paragraph (7) states that the defendant has the right to rebut the presumption of causality laid down in paragraph (1) of Article 4.

WHY SHOULD THE EU ACT?

There are signs that some Member States are considering unilateral legislative measures to address the specific challenges posed by AI with regard to liability (for example, Portugal and Italy). Given the large discrepancies between Member States’ existing liability rules, it is likely that any AI-specific national measure on liability would follow existing, divergent national approaches and therefore increase fragmentation. Adaptations of liability rules on a purely national basis would also raise barriers to the rollout of AI-enabled products and services across the internal market and contribute to even greater fragmentation. It is therefore important for regulators and healthcare organisations to work together to address these issues and ensure that the use of AI in healthcare is safe and ethical.
