AI Through the Looking Glass: Addressing Evidence Authenticity in the Deepfake Era

This Insight addresses the emergence of deepfakes as legal evidence, discussing the challenges they pose, existing regulatory gaps, and whether current evidence authentication standards are adequate to uphold justice.

In light of recent advances in artificial intelligence (“AI”), the emergence of deepfakes poses a critical challenge. These sophisticated manipulations are now infiltrating courtrooms, raising concerns regarding the admissibility of audio-visual evidence and threatening the fairness of legal proceedings.

As AI is integrated into society, and particularly the justice system, it is important to deliberate on the kind of justice we envision for the cities of the future.

Deepfakes as Legal Evidence

A deepfake is a type of audio-visual content, generated through AI technology, which creates a fake reality that appears authentic to the reasonable observer. Deepfakes are produced by two competing algorithms: a generator and a discriminator. The generator creates altered content by studying photographs and videos of a target person and then mimicking their behaviour and speech patterns. The discriminator compares that output with the real content the technology is trying to imitate. When the discriminator can no longer tell the generated images apart from the real dataset, the output is considered convincing.
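For readers curious about the mechanics, the adversarial interplay described above can be sketched in a few lines of code. The snippet below is a minimal, illustrative sketch of a generative adversarial network using PyTorch; the framework, network sizes and placeholder data are assumptions made for illustration and do not correspond to any specific deepfake tool.

```python
# Minimal sketch of the generator/discriminator loop described above.
# PyTorch is an assumption; layer sizes and data are placeholders.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # flattened 64x64 frames (illustrative)

generator = nn.Sequential(            # produces synthetic frames from random noise
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores how likely a frame is to be real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_frames = torch.rand(32, IMG_DIM)  # placeholder for frames of the target person

for step in range(1000):
    # 1) Train the discriminator to separate real frames from generated ones.
    fake_frames = generator(torch.randn(32, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_frames), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_frames), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator: once the discriminator
    #    can no longer tell the fakes apart, the output is "convincing".
    fake_frames = generator(torch.randn(32, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_frames), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```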

In the European Union, there is no law specifically regulating deepfakes. There is, however, a pending AI regulation proposal which outlines transparency obligations for certain AI systems. Under its provisions, users of AI systems which generate deceptive digital content must label the output accordingly and disclose its artificial origin. Despite their deceptive character, deepfakes also fall under the General Data Protection Regulation whenever they involve data relating to an identifiable individual. Additionally, deepfakes should be considered under the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union (the “Charter”), in terms of the fundamental rights to privacy and freedom of expression, as well as procedural fundamental rights.

The first reported case in which a deepfake was used as legal evidence occurred in 2020, in the UK, during a custody dispute in which the mother produced a threatening audio recording to portray the father as violent. The father’s lawyer was able to uncover that the mother had used widely available software to doctor the recording to sound like his client. However, this was only possible because the audio was a ‘cheap fake’, a less sophisticated form of manipulated content.

Unlike technologies employed in the past, AI does not rely exclusively on rules programmed by humans, as it can autonomously improve its performance. This ability introduces an ‘opacity issue’ or ‘black box problem’: the internal workings of AI systems become difficult to trace, even for their developers, allowing their output to deceive human perception. Moreover, the easy accessibility of deepfake technology, which requires only a smartphone and an app, contributes to its rapid spread. Advances in AI and machine learning are steadily improving deepfake quality, making them increasingly convincing. Yet despite these significant advances, detection techniques have not kept pace.

Deepfakes’ growing indistinguishability from authentic content presents a significant challenge for judges in determining evidence authenticity. Some advocate scepticism towards digital evidence, emphasising the need for legal systems to ‘catch up’ in order to address manipulation effectively. The introduction of deepfakes as legal evidence will deeply affect the rule of law: trials may be prolonged as parties assert that the evidence presented is fabricated; courts may erroneously accept deepfake evidence as genuine; and convicted persons may exploit the situation by alleging that they were unfairly accused on the basis of fabricated evidence.

Thus, in the litigation realm, deepfakes pose a dual challenge: their remarkable deceptiveness can easily mislead viewers into believing they are authentic, while the mere awareness that deepfakes exist can cast doubt on the authenticity of genuine content.

Criminal Procedural Law

The impact of deepfakes on the judiciary in criminal cases raises concerns about the violation of fundamental procedural rights, notably the right to a fair trial (Article 6 of the European Convention on Human Rights), the right to an effective remedy (Article 47 of the Charter of Fundamental Rights of the European Union, “CFREU”), the presumption of innocence, and the rights of the defence (Article 48 CFREU).

The European Court of Human Rights has declared that it is not competent to rule on the admissibility of specific types of evidence, leaving that question to national criminal procedural law. However, the Court still assesses whether the rights to a fair trial, privacy and an effective remedy, as well as the principle of proportionality, have been violated throughout the proceedings.

For criminal proceedings to be fair and to respect the adversarial principle, defendants must have the opportunity to challenge the admissibility of evidence. However, the technical complexity of AI systems may prevent defendants from challenging decisions based on deepfake evidence without expert assistance, which generates additional costs. The opacity of AI systems further limits defendants’ and their lawyers’ ability to question the lawfulness and accuracy of evidence generated through methods that even the programmers themselves are unable to explain.

Since there is no specific procedure for verifying the authenticity of tampered evidence in courtrooms and the existing legal standards are outdated, it is necessary to examine whether the current norms are sufficient for detecting deepfakes.

Differing Views

Scholars are currently debating whether to enhance evidence authentication standards. While some contend that existing standards are adequate, others advocate for new, stricter, more specialised requirements.

Those opposing the need for new standards rely on two arguments: they believe that existing standards are adequate to address the challenges posed by evidence falsification and that raising them will have socioeconomic impacts.

Several methods can be used to challenge the authenticity of deepfakes under the current rules. The simplest involves having the person portrayed in the video testify to its authenticity. However, this strategy has limitations, as it relies on the witness’ credibility. Alternatively, digital forensics experts can verify the video’s origin and preservation, providing testimony. With the continuous advancement of deepfake technology, it may be necessary to use deepfake detection tools to prevent manipulated content from being presented as evidence.
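To illustrate how such a detection tool might operate, the sketch below scores individual video frames with a classifier and flags the footage when the average ‘fake’ probability exceeds a threshold. The function names, the dummy classifier and the threshold are hypothetical stand-ins for whatever forensic tool a court might actually rely on.

```python
# Illustrative sketch of a frame-level deepfake detection check.
# The classifier, threshold and frame source are hypothetical placeholders.
from typing import Callable, Sequence

def flag_as_suspect(frames: Sequence, classifier: Callable[[object], float],
                    threshold: float = 0.7) -> bool:
    """Return True if the average per-frame 'fake' probability exceeds the threshold."""
    scores = [classifier(frame) for frame in frames]  # each score assumed to lie in [0, 1]
    avg_score = sum(scores) / len(scores)
    return avg_score > threshold

if __name__ == "__main__":
    dummy_frames = list(range(10))                    # stand-in for extracted video frames
    dummy_classifier = lambda frame: 0.8              # pretends every frame looks fake
    print(flag_as_suspect(dummy_frames, dummy_classifier))  # prints: True
```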

Advocates for new authentication standards argue that the existing ones are inadequate, time-consuming, and costly. According to them, technological progress reduces the likelihood of witnesses being able to confirm the authenticity of evidence. Human memory is susceptible to manipulation: entire events can be falsely implanted or altered, and external information provided afterwards can distort or even suppress a witness’ memory. Moreover, proponents argue that the rapid evolution of deepfake technology outpaces the development of detection methods, requiring more advanced forensic techniques. Only when such techniques are developed and accepted by the courts can they serve as the basis for expert testimony. Furthermore, AI tools for deepfake detection pose risks relating to accuracy, validity, and accessibility. They may produce false positives and false negatives, with dangerous consequences for evidence and convictions. Some detection methods also suffer from algorithmic opacity, which may render a tool inadmissible for evidence authentication if the underlying method is not understood by judges, the parties involved, or even the developers. Additionally, the cost of deepfake detectors results in unequal access, especially for defendants with limited resources.

This approach highlights the shortcomings of current authentication standards, asserting that they not only fall short in protecting parties from potential violations of their fundamental procedural rights, but also contribute to such violations.

Attempt to Regulate

Ensuring the justice system’s resilience against deepfake evidence requires the combination of three factors: the enhancement of professional knowledge, the implementation of new legislation, and the evolution of evidential processes and standards.

Education for legal professionals on the growing presence of deepfakes in courtrooms is pivotal and should be provided by experts in audio-visual technology and AI. Forensic technicians should also be trained in methods for identifying deepfakes in audio-visual evidence. In the context of evidence manipulation, the chain of custody is crucial, as it documents how the content has been handled. To avoid additional costs and ensure compliance with procedural fundamental rights, the admission of audio-visual evidence at trial should be subject to an authenticity assessment by a state agency. Although this entails significant state expense, it safeguards against harm to individuals and society, preserving the integrity of the justice system.

Addressing the legal gap in the European framework calls for a combination of Danielle Breen’s Pictorial Evidence Theory and Taurus Myhand’s approach. Breen advocates for regulations enforcing specific video authentication methods, while Myhand recommends requiring the expert certification of forensic analysis for audio-visual evidence, including the expert’s opinion on authenticity, the methods employed, and the chain of custody of the evidence. Myhand presents a specific method of audio-visual evidence authentication that is not absolute, allowing parties to challenge its admissibility. Its enforcement, however, is contingent on inclusion in the legally mandated authentication standards proposed by Breen’s theory.

Conclusion

The recent emergence of deepfakes in courtrooms poses a threat to the rule of law and to the procedural fundamental rights established in the Charter.

Existing authentication standards struggle to match the evolving sophistication of deepfakes, underscoring the need for enhanced measures. While some argue that current standards are sufficient, others stress their inadequacy and propose the integration of specialised methods and certification, as advocated by Breen and Myhand. The combination of their approaches strikes a balance between ensuring authenticity and allowing parties to challenge the admissibility of audio-visual evidence. As technology advances, the law must remain vigilant and adaptable, constantly reassessing and refining its provisions to stay ahead of deepfake advancements and uphold the foundation of a just legal system.

