AI Implementation in the Financial Sector: Legal Challenges

AI in finance offers significant opportunities for improvement, but its implementation raises legal issues such as explainability standards, protection against bias, and data quality. This article briefly addresses these challenges.

Introduction

The financial sector is one of the most data-rich domains, and the implementation of data-driven artificial intelligence (AI) there can yield significant results. Competition between large financial institutions to adopt the technology is accordingly intensifying: by the end of 2023, more than 70% of financial institutions were at the proof-of-concept or pilot stage of developing their own AI tools or adopting market solutions. While the trend towards developing and implementing AI is well established, the legal and ethical aspects of the technology raise substantial concerns among government authorities, market players and consumers.

Explainability

The ability to explain the results of AI algorithms is a critical consideration, especially in the financial sector, where transparency directly affects compliance, risk management, accountability, and customer trust and satisfaction. Explainability is a major component of the transparency concept. However, most of today’s most capable AI systems operate as “black boxes” whose inner mechanisms are hidden from users.

The General Data Protection Regulation (GDPR) establishes a “right to explanation”: individuals are entitled to request “meaningful information about the logic involved” in automated decision-making, which is difficult to provide for black box AI systems. It is therefore believed that legislators will have to introduce a definition of, and specific standards for, “explainability” in relation to AI systems.

One suggested solution is to introduce tiers with specific degrees of explainability for black box AI systems. The AI Act mentions only two cases in which finance-related AI systems are considered high risk (with corresponding requirements for transparency and explainability), namely:

  • Evaluation of creditworthiness and establishment of individuals’ credit score; and
  • Risk assessment and pricing for personal life and health insurance.

However, given the financial sector’s significant impact on the Community, this list is expected to be substantially extended in order to raise standards of transparency and explainability across the entire financial market.

Regarding the technical aspect of explainability, it is crucial to distinguish between two types of explanation: general and specific. The first is a theoretical explanation of the entire AI system and covers procedural and technical elements such as setup information, training metadata, performance metrics, estimated global logic, and process information. This general approach is mostly relevant to regulators and oversight bodies.
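
To make this general, system-level kind of explanation more concrete, the following is a minimal sketch of what such documentation might look like in practice. The structure, field names and values are purely illustrative assumptions, not a regulatory or statutory schema.

```python
# Illustrative sketch of system-level ("general") explanation metadata
# for a hypothetical credit-scoring model. Field names and values are
# invented for illustration, not drawn from any regulatory schema.
model_documentation = {
    "setup": {
        "model_type": "gradient-boosted decision trees",
        "intended_use": "retail creditworthiness evaluation",
        "deployment_date": "2024-01-15",
    },
    "training_metadata": {
        "dataset_version": "loans_2018_2023_v4",
        "n_records": 1_250_000,
        "features": ["income", "debt_ratio", "payment_history_months"],
    },
    "performance_metrics": {
        "auc_holdout": 0.87,
        "false_positive_rate": 0.06,
    },
    "estimated_global_logic": (
        "higher debt ratio and shorter payment history lower the score"
    ),
}
```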

The second, specific type of explanation is built around the input data and explains the practical outcome in an individual case. It is closer to the user because it helps answer the following questions (illustrated in the sketch after this list):

  • What changes in the input data would have led to a different decision by the AI model in the user’s case?
  • What training data sets are most comparable to this particular outcome?
  • How confident is the provider of the AI about the correctness of the outcome?
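
The sketch below shows, under purely synthetic assumptions, how these three questions might be answered for a toy credit model in Python: a brute-force counterfactual search, the nearest training example, and the model’s own confidence score. The data, model and thresholds are invented for illustration and do not represent any real scoring system.

```python
# Minimal sketch of an individual-specific explanation for a toy credit
# model: a counterfactual ("what change would have flipped the
# decision?"), the most similar training case, and a confidence score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [monthly_income_thousands, debt_ratio].
X = rng.uniform([1.0, 0.0], [10.0, 1.0], size=(500, 2))
y = (0.8 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0, 0.5, 500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[3.0, 0.7]])  # an applicant the model rejects
decision = model.predict(applicant)[0]
confidence = model.predict_proba(applicant)[0].max()

# Counterfactual: smallest reduction in debt_ratio that flips the
# decision to "approve" (brute-force search over candidate values).
counterfactual_ratio = None
for new_ratio in np.arange(0.7, -0.01, -0.05):
    if model.predict(np.array([[3.0, new_ratio]]))[0] == 1:
        counterfactual_ratio = new_ratio
        break

# Most comparable training case to this applicant.
nearest = X[np.linalg.norm(X - applicant, axis=1).argmin()]

print(f"decision={decision}, confidence={confidence:.2f}")
if counterfactual_ratio is not None:
    print(f"would be approved if debt_ratio were {counterfactual_ratio:.2f}")
print(f"most similar training case: {nearest}")
```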

The European Union (EU) legal framework is generally considered pro-consumer, which makes the individual-specific approach a feasible solution. It is worth noting, however, that the financial industry is a heavily regulated and supervised sector, so the general approach, which serves the interests of government authorities, can also be a reasonable choice.

Embedded Bias

Although AI is considered a potential remedy for human bias, at its present level of development it cannot be completely free of embedded bias. In the financial sector, bias may arise from:

  • Incomplete or unrepresentative training data: machine learning techniques (e.g., those used for loan approval) prioritise groups with greater representation in the training data, as predictions for those groups will be more accurate (see the sketch after this list);
  • Incorrect training data: the data may reinforce existing stereotypes (e.g., the Amazon case, where a recruiting algorithm favoured men over women based on historical hiring decisions); or
  • Human bias during the training process: psychological, social, emotional, and cultural factors may influence a researcher’s decision on which attributes to include in or exclude from the machine learning model.
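
As a minimal illustration of the first mechanism – unrepresentative training data – the following synthetic Python sketch trains a single loan-approval model on two groups of very different sizes and compares per-group accuracy. The data and group patterns are invented assumptions, not real lending data.

```python
# Minimal sketch of how unrepresentative training data skews accuracy:
# a toy loan-approval model trained on 900 group-A applicants and only
# 60 group-B applicants. All data and patterns are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, ratio_weight, threshold):
    """Synthetic applicants [income, debt_ratio]; each group follows
    its own (hypothetical) relationship between features and repayment."""
    X = rng.normal([4.0, 0.5], [1.5, 0.2], size=(n, 2))
    y = (X[:, 0] - ratio_weight * X[:, 1]
         + rng.normal(0, 0.3, n) > threshold).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented
# and follows a different pattern.
Xa, ya = make_group(900, ratio_weight=2.0, threshold=2.0)
Xb, yb = make_group(60, ratio_weight=8.0, threshold=0.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Fresh samples from each group: accuracy is typically noticeably
# lower for the underrepresented group.
Xa_test, ya_test = make_group(500, ratio_weight=2.0, threshold=2.0)
Xb_test, yb_test = make_group(500, ratio_weight=8.0, threshold=0.0)
print(f"accuracy, majority group A: {model.score(Xa_test, ya_test):.2f}")
print(f"accuracy, minority group B: {model.score(Xb_test, yb_test):.2f}")
```
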
In addition to the problem of inherent bias, AI systems are vulnerable to the risk of hallucination – the production of factually incorrect outputs presented with a high level of confidence – which is quite similar to bias in its potential negative impact. For example, in anti-money laundering (AML) practice, false red flags based on incorrect assumptions about fraudulent behaviour can harm customers and undermine trust in financial institutions.

One legal solution to this issue at the European level is the adoption of common frameworks for the ethical use of AI systems, specifically in the financial sector, and there are notable similarities between the ethical principles of the financial and AI sectors. In 2019, for example, the EU adopted the “Ethics Guidelines for Trustworthy AI”. While that document addresses common issues, a detailed framework covering the implementation of AI across different areas of the financial sector – from private banking and price personalisation to stock exchanges and AML practice – would dramatically increase trust among stakeholders and set higher standards for financial institutions to ensure and promote human rights.

Data “Nutrition”

The development of fintech AI requires data, algorithms, training, and review, just like any other type of AI. In this regard, the issue of sourcing appropriate data in sufficient quantity and quality is becoming increasingly sensitive.

On the one hand, the situation is relatively straightforward for AI tools used in market forecasting, trading, portfolio planning and business profiling: as a source of “nutrition”, developers may draw on publicly available anonymised data on GDP and other macroeconomic indicators, stock exchange activity, historical patterns and tendencies, corporate activities and financial reports. On the other hand, AI tools used for private banking solutions – such as modelling customer behaviour and intentions, personalising products and pricing, compliance enhancement and AML/CFT – run directly into strict data protection and banking secrecy regulations due to the personal nature of the information involved.

We observe that financial institutions building their own AI systems generally face the same problem: a lack of “nutrition” for training. The banking sector possesses significant datasets on clients and their activities, which should be more than enough to develop AI systems. However, due to stringent regulations on banking secrecy, privacy and data protection, each bank’s datasets are limited to its own clients’ information, preventing it from obtaining a general market overview.

This decentralisation of data actually hinders AI development in the financial sector, even though decentralisation has been one of the most celebrated digital innovations of recent years. Market players may therefore consider creating a common pool of information – e.g., an interbank database.

In order to comply with privacy and data protection regulations, information should be fully anonymised (or pseudonymised) before being sent to the interbank database. This could be done by a special-purpose AI gatekeeper, ensuring that common standards and mechanisms are applied across all stakeholders.

This converted data would be available to stakeholders in the common database, but only the sending bank would know the encrypted parameters (the underlying personal information). As a result, database users would be able to obtain a comprehensive view of the market and train their own AI (or a common AI) without violating privacy and data protection standards.
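
One minimal sketch of how such a scheme might work is keyed pseudonymisation: each bank replaces client identifiers with an HMAC token before contributing records, keeps the token-to-client mapping locally, and is the only party able to reverse it. This is an illustrative design assumption written in Python, not a description of any existing interbank system; a real deployment would also need key management, governance and a lawful basis for reversal.

```python
# Minimal sketch of keyed pseudonymisation before sending records to a
# shared interbank pool. Illustrative design only: a real system would
# need key management, governance and a lawful basis for reversal.
import hashlib
import hmac

BANK_SECRET_KEY = b"held-only-by-the-sending-bank"  # never shared

# Local mapping kept by the sending bank for lawful deanonymisation.
_pseudonym_registry: dict[str, str] = {}

def pseudonymise(client_id: str) -> str:
    """Replace a client identifier with a stable keyed pseudonym."""
    token = hmac.new(BANK_SECRET_KEY, client_id.encode(),
                     hashlib.sha256).hexdigest()
    _pseudonym_registry[token] = client_id
    return token

def deanonymise(token: str) -> str:
    """Recover the client id; only the key-holding bank can do this."""
    return _pseudonym_registry[token]

# A record as it would leave the bank for the interbank database
# (identifiers here are invented placeholders):
record = {
    "client": pseudonymise("client-12345"),
    "amount_eur": 12_500,
    "counterparty": pseudonymise("counterparty-67890"),
}
print(record)
assert deanonymise(record["client"]) == "client-12345"
```

Because the pseudonym is deterministic for a given key, records contributed by the same bank remain linkable in the pool – preserving the chain of interconnected links described below – while the local registry permits deanonymisation at the request of an authorised body.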

At the same time, the fragmentation of information – where each bank knows only the details of its own clients and counterparties – creates a chain of interconnected links that makes it possible, for example, to track transactions and funds for security or compliance purposes, with subsequent deanonymisation at the request of the authorised body. This may resolve the issue of the quality and quantity of training datasets while also supporting regulatory obligations, including AML practices and sanctions screening.

Conclusion

The implementation of AI in the financial sector has enormous prospects. Stakeholders are actively seeking balanced and secure solutions that comply with strict EU regulations. Although technological solutions are theoretically capable of mitigating most of the issues discussed above, the sector needs a firm legal basis for its further development.

