Criminal Justice in the Age of Artificial Intelligence: AI’s Potential to Predict and Influence Criminal Sentencing

Imagine a world where AI can influence criminal sentencing: a leap from imagination to reality that could dramatically alter judicial decision-making and legal strategy.

The field of artificial intelligence (AI) has witnessed significant progress across a multitude of sectors and promises to raise collective living standards, notably within the legal sphere.

AI holds the capability to transform legal analysis by expediting the research of doctrine and case law, scrutinising documents, and assisting in the drafting and refinement of contracts. These applications are already being used by legal professionals daily. However, a more advanced and ambitious application is on the horizon: the prediction of the outcome of court decisions based on the analysis of historical data.

When diving into historical data, the scope of analysis can vary significantly. It may encompass the spectrum of decisions rendered within a specific jurisdiction or be confined to the rulings of a particular court, panel of judges or single judge. The same analysis may also consider the defence’s arguments, unique case characteristics, or the defendant’s personal circumstances, including socio-economic background and prior offences. The variables involved in such an analysis can be endless.

As such, the integration of artificial intelligence into the judicial system prompts an urgent inquiry into its prospective impact on the practice of criminal law. The decision-making paradigms of judges may undergo significant transformation due to the incorporation of AI tools, which could extend to the formulation of defence strategies, while enhancing the efficiency and consistency of legal proceedings. However, it is of paramount importance to employ these predictive instruments in accordance with the core principles of criminal law, thereby preventing any distortion of the criminal judicial system.

It is generally acknowledged that the current capabilities of AI fall significantly short of the capacity to understand and apply the demanding and complex principles of criminal law. Criminal law serves as the ultima ratio of the legal framework, and its application by the courts extends well beyond mere statutory interpretation or the discernment of legislative intent. It encompasses a meticulous evaluation and assessment of evidence and its credibility. Moreover, it involves the assessment of highly subjective elements, such as determining the guilt of the agent and considering the individual and collective prevention needs of each case. After all, a judge must never lose sight of the fact that the application of a penalty must be aimed at affirming the deterrent effect of the law and the social reintegration of the agent. Furthermore, the penalty is limited by the agent’s guilt, understood as the specific censurability of the relevant action or omission.

However, in the face of predictive artificial intelligence tools, there will certainly be a tendency among judges and criminal lawyers to review available data in order to determine the direction of judicial trends or the likely outcomes. As so often happens in the field of AI, the main risks and advantages of using this type of tool must therefore be foreseen and addressed.

This issue has not gone unnoticed at the European Union level. As early as 2018, the European Commission for the Efficiency of Justice (CEPEJ) published the European Ethical Charter on the use of Artificial Intelligence in Judicial Systems and their Environment (https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c).

At the time, CEPEJ completely ruled out the possibility that machines “devoid of any emotion” would one day be capable of making the act of judging more reliable. It also argued that the term “predictive justice” should be rejected as ambiguous and misleading, as such tools are grounded in case law analysis methods, using statistical approaches that in no way reproduce legal reasoning, but may attempt to describe it. It therefore stressed that “the instrument must be integrated into a clear ethical framework”.

Legal considerations aside, AI lacks emotional intelligence, which is crucial in the courtroom, where empathy and moral judgement play an important role in the decision-making process. Six years later, it still appears unlikely that a criminal judge’s role will ever be supplanted or profoundly influenced by artificial intelligence.

As also highlighted by CEPEJ, the inclusion in the criminal justice system of algorithmic variables such as criminal history and socio-economic context suggests that the past behaviour of a group could influence the fate of an individual. Each individual is a unique human being with a specific social background, education, skills, degree of guilt, and distinctive motivations for committing a crime. Human decisions – even those aimed at committing a crime – are often based on personal circumstances, values and social factors, which a machine would not be able to account for. Predictive tools can, therefore, be discriminatory.

Recent progress in the legislative framework addressing AI has acknowledged some of these concerns within the provisions of the AI Act, which entered into force on 1 August 2024. This groundbreaking Regulation explicitly forbids “an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics”. Furthermore, the AI Act classifies as “High-risk AI systems” those “intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts”.

The integration of predictive tools into the legal system must be carefully balanced not only against criminal law principles, such as the principles of guilt and necessity of penalties, but also against the existing rules on the application of penalties. Such principles and rules are the cornerstones of a fair and impartial criminal legal system and must remain at the forefront when considering the implementation of any legal technology tools. To that extent, regardless of the jurisprudential trend and the results of the predictive tool, the penalty must be assessed on a case-by-case basis. It is imperative that a penalty is only applied when it is deemed essential to safeguard a legal interest and that it is assessed considering the agent’s guilt.

Enforcing these principles becomes even more crucial when taking into consideration CEPEJ’s concerns about the impartiality of judges who, rather than simply tending to follow previous judicial practice, may feel obliged to do so. As mentioned by CEPEJ, judges may be reluctant to take on the burden of ruling against the output of predictive legal tools, especially in systems where their positions are not permanent or where they could face personal (disciplinary, civil, or criminal) liability. In addition, one can anticipate that the results of predictive tools will be used in appeals against decisions that depart from the trend, which could also affect the performance evaluation of judges, with direct consequences for their careers and personal lives.

On the other hand, these tools will allow criminal lawyers to explore and test the merits of a given argument or its strength in a specific legal context, as well as identify additional arguments, increasing the chances of success. The existence of a technology that can gauge the chances of a case succeeding or failing can also provide answers to the questions most commonly posed by clients, but which are also the most challenging for lawyers to answer: What is the likelihood of the case succeeding? What is the most likely conviction?

And the answers to those questions provided by AI tools will certainly influence clients’ decisions regarding whether to pursue litigation, thereby holding the potential to reshape the litigation system.

Bearing that in mind, what only human perception makes possible should not be overlooked. Drawing on previous professional experience, knowing how to read the room, and judging which strategy might be most effective from an emotional and empathetic standpoint are tasks that a machine will never be able to perform, and they can decisively alter even a very strong trend in case law. In the forthcoming years, it will be essential to observe the measures implemented within the judicial framework to fully comprehend the impact of predictive tools on court verdicts. This scrutiny is crucial to determining how such tools will influence the roles and decision-making processes of judges, lawyers and their clients. The anticipation of these developments is paramount, as they hold the potential to significantly reshape the judicial landscape.

The Insights published herein reproduce the work carried out for this purpose by the author and therefore maintain the original language in which they were written. The opinions expressed within the article are solely the author’s and do not reflect in any way the opinions and beliefs of WhatNext.Law or of its affiliates. See our Terms of Use for more information.
