“Unmasking Deepfakes in the Courtroom”: A Legal Impact Assessment

Deepfakes can easily infiltrate courtrooms, challenging the very core of justice. Can our legal systems defend truth when AI fabricates reality with chilling precision?

INTRODUCTION

Deepfake technology has emerged as both a blessing and a menace. Built on machine learning techniques such as Generative Adversarial Networks (GANs), it makes it easy to produce convincingly realistic audio, video, and image material.
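For context, the toy sketch below (in Python, using PyTorch) illustrates the adversarial training loop at the heart of a GAN: a generator learns to produce samples that a discriminator can no longer distinguish from real data. Every detail here, from the network sizes to the stand-in "real data", is a simplified assumption for illustration, not a production deepfake pipeline.

# Minimal GAN training loop (illustrative only): a generator learns to
# produce samples that a discriminator cannot tell apart from real data.
# Network sizes and the "real data" distribution are toy placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),  # one logit per sample: "real" vs "fake"
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0   # stand-in for real media features
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output "real" for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The same adversarial dynamic, scaled up to images, video, and audio, is what makes fabricated material so realistic and detection such a moving target.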

Although initially met with enthusiasm for their innovative potential, deepfakes have quickly become a cause for concern (see, for example, the highly controversial Netflix show Deep Fake Love), and the judicial system is not immune to the risks posed by this technology. Their ability to manipulate digital evidence threatens the integrity of judicial proceedings and undermines trust in the courts.

In this article, we will examine the risks posed by deepfakes to digital evidence presented in courtrooms. We will explore their scope and the legal challenges they present, as well as the adequacy of existing legal frameworks, and the role of expert analysis in ensuring justice.

DEEPFAKES

To fully understand how this technology impacts the admissibility of evidence in courts, we must assess its specific application and consider who uses and is affected by it.

In this context, users of deepfake technology can be divided into two groups: on the one hand, malicious actors, that is, individuals or organisations seeking to deceive the court by introducing falsified evidence; on the other, legal professionals tasked with detecting and exposing such manipulations. The courts' ability to identify and exclude manipulated evidence must keep pace with the growing sophistication of deepfake tools.

The challenges posed by deepfakes evolve throughout their lifecycle, from development to deployment. While our focus is on the use of deepfakes as evidence in the courtroom, the overview below briefly points out other civil liability challenges in the two main phases of that lifecycle:

Development
The central question is whether developers of deepfake technology should bear responsibility for its misuse. Some authors argue that accountability is necessary to prevent harm, while others argue that innovation should not be stifled. The first view can be supported by regulatory frameworks such as the EU AI Act (Article 53) and the DSA (Articles 15, 18 and 23).

Deployment
Responsibility shifts to the users of deepfake tools, who may face legal consequences for creating or distributing harmful content, such as defamation charges[1] and civil liability[2]. If the user is a company, consequences may include fines for using another's data under the GDPR (Articles 82-84) and platform bans and removal orders under the DSA (Article 22).

During litigation, courts must consider how to evaluate and accept digital evidence that may have been affected by deepfakes. Our focus will be on the manipulation of digital evidence in particular. At this stage, procedural law becomes essential, as it dictates the standards for evidence admissibility.

This evidence, which often includes video footage, audio recordings, or images, plays a critical role in trials by influencing judges' decisions. Rapid technological development has produced deepfakes that are nearly indistinguishable from authentic material.


LEGAL CHALLENGES AND RISK MITIGATION

The most pressing issue is the authenticity of this evidence, since some deepfakes are becoming indistinguishable from reality. The malicious actors who create them are most likely using victims' personal data without consent or any other legal basis. Furthermore, this use can damage the reputation and good name of the person affected and, ultimately, compromise their right to a fair trial. This technology therefore interferes with Fundamental Rights, which lie at the centre of any jurisdiction and legal system.

As previously mentioned, deepfakes can exacerbate harm by misrepresenting individuals in ways that damage their reputation or emotional well-being. This can result in defamation and psychological distress, infringing the protection of reputation recognised in Article 10(2) of the European Convention on Human Rights (ECHR).

Once deepfakes are admitted in court, it becomes clear that the very existence of this technology jeopardises the right to a fair trial—enshrined in Article 6 of the ECHR and Article 14 of the International Covenant on Civil and Political Rights (ICCPR). This right was created to ensure that individuals are treated justly and equitably within the judicial system.

A fair trial includes several essential principles. Firstly, it must guarantee the right to an independent and impartial court, ensuring that no external influence compromises judicial decision-making. Secondly, it must provide the accused with adequate time and resources to prepare their defence, including access to legal representation. Additionally, the right to a fair trial ensures equality of arms, meaning that both parties in a case must have equal opportunities to present their arguments and evidence.

In our case, it is essential that courts ensure that all evidence presented is genuine in order to guarantee a fair trial. However, the introduction of deepfakes complicates this task, as it can be very difficult for a court, with its limited resources, to differentiate between real and deepfake content.

With these rights at stake, it is clear that the risks must be addressed. To this end, we will discuss some possibilities for mitigating them.

The procedural legal frameworks currently in force around the world were created without anticipating the exponential growth of new technologies. They therefore cannot provide a clear and decisive answer to this problem. Even though some laws can be interpreted in ways that mitigate it, none are sufficient to address its practical consequences.

For example, in Portugal, even if a party challenges the veracity of evidence (Article 444 of the Portuguese Civil Procedure Code), the court is free to assess that challenge itself. If deepfakes are becoming increasingly realistic, as noted above, how can we leave this assessment to the court rather than to forensic specialists?

In this regard, legislative amendments are a necessary step. For example, laws or guidelines specifically addressing the use of deepfakes in judicial contexts would serve as a deterrent.

In our view, judicial guidelines must evolve to include clear, uniform protocols for handling digital evidence. For example, establishing a chain-of-custody framework would help to maintain the integrity of evidence from its collection to its presentation in court.
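As a purely illustrative sketch of what such a framework could record, the Python example below hash-chains custody events for an evidence file using only the standard library. The field names, event format, and file names are our own assumptions for illustration, not a prescribed legal or forensic standard.

# Illustrative chain-of-custody log: each custody event is hashed together
# with the previous entry's hash, so any later alteration of the evidence
# file or of an earlier log entry breaks the chain.
import hashlib
import json
from datetime import datetime, timezone

def file_digest(path: str) -> str:
    """SHA-256 of the evidence file as collected."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_event(log: list, actor: str, action: str, evidence_hash: str) -> None:
    """Record a custody event linked to the previous entry's hash."""
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "evidence_hash": evidence_hash,
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every entry hash and check the links between entries."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

# Hypothetical usage with a made-up evidence file "clip.mp4":
#   custody_log = []
#   append_event(custody_log, "Officer A", "collected", file_digest("clip.mp4"))
#   append_event(custody_log, "Lab B", "analysed", file_digest("clip.mp4"))
#   assert verify_chain(custody_log)  # tampering with file or log breaks this

The design point is simply that integrity checks become mechanical: a court can verify that the file presented at trial is bit-for-bit the one collected, and that the custody record itself has not been rewritten.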

Nonetheless, one of the most pressing questions in this debate is whether digital evidence should require expert assessment. Given the sophistication of deepfake technology, the answer is unequivocally yes. Courts often lack the technical expertise to independently verify the authenticity of digital evidence, particularly when it involves complex AI-generated manipulations. Forensic experts, equipped with specialised tools, are essential in identifying signs of tampering and ensuring that only reliable evidence is admitted. However, this reliance on expertise raises practical concerns, such as the cost of forensic evaluations and the potential for delays in legal proceedings. Addressing these issues requires investment in expert training and the development of efficient, cost-effective detection tools.

Therefore, to address these practical issues, courts should adopt AI-based tools for detecting deepfakes, provided these tools are certified for accuracy and transparency. The introduction of these technologies must be complemented by training programmes for judges, lawyers, and law enforcement to ensure that all stakeholders understand both the capabilities and limitations of deepfake detection methods, especially considering that the development of detection tools is a constant ‘cat-and-mouse’ scenario, as deepfakes continuously evolve to evade them.
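Purely as a sketch of the shape such a screening step might take, and assuming a hypothetical certified model file and an agreed threshold (neither of which exists as named here), a court-side tool could look like the following; note that it reports a probability for forensic review, never a verdict.

# Hypothetical deepfake screening step. "certified_deepfake_detector.pt"
# is a placeholder for a certified, pre-trained binary classifier; the
# threshold policy would be set by the relevant judicial guidelines.
import torch

model = torch.jit.load("certified_deepfake_detector.pt")  # hypothetical artifact
model.eval()

def screen_frames(frames: torch.Tensor) -> float:
    """frames: (N, 3, H, W) tensor of video frames, values in [0, 1]."""
    with torch.no_grad():
        logits = model(frames)                 # one logit per frame
        probs = torch.sigmoid(logits).flatten()
    return probs.mean().item()                 # average "fake" probability

score = screen_frames(torch.rand(8, 3, 224, 224))  # toy input
print(f"Estimated probability of manipulation: {score:.2f} "
      "(refer for forensic review above the agreed threshold)")

Precisely because of the cat-and-mouse dynamic described above, such tooling can only triage: material flagged by it still requires human forensic examination before any conclusion is drawn.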

CONCLUSION

Deepfake technology represents a serious challenge to the integrity of legal proceedings, particularly with regard to the admissibility of digital evidence. The current legal framework must be adapted to the complexity of new technologies so that we can protect the Fundamental Rights of citizens, which lie at the heart of the judicial system. Essential steps towards safeguarding judicial proceedings and citizens include strengthening procedural law, mandating expert evaluation of evidence, and adopting advanced detection technologies.


[1] Ibrahim Mammadzada, "Deepfakes and Freedom of Expression: European Perspective" (2021), Tallinn University of Technology, Department of Law, p. 37.

[2] Ibid., p. 46.

