AI and enforcement of disciplinary powers

Will Artificial Intelligence technologies be able to enforce disciplinary powers in the workplace without human control or oversight?

Artificial Intelligence (AI) technologies have been slowly but steadily finding their way into the workplace. Will these technologies, within the existing legal framework, be able to take on managerial powers and enforce disciplinary powers autonomously?

Employment relationship

One of the key aspects of an employment relationship is the reciprocal rights and obligations established between employer and employee. These rights are statutory in nature and may differ from country to country. Nonetheless, the employer’s rights are normally described as a triangle of powers: managerial powers, rule-making powers, and disciplinary powers.

The employment relationship encompasses a degree of subordination, which may require the use of disciplinary actions in order to guide the employee to correct performance or behaviour by identifying the problems, causes, and solutions. Disciplinary actions can create social and economic consequences, which may ultimately result in the termination of the employment contract. As such, it is essential that the disciplinary policy is clear, fair, and proportional.

With this in mind, and in light of the evolving technological context surrounding the workplace, it is not only worthwhile but necessary to pose the question: can employers delegate part of their powers to Artificial Intelligence tools or services, allowing them to initiate disciplinary procedures against employees without human intervention?

AI in charge of disciplinary powers

The idea of deploying AI systems to monitor, control, and discipline workers may be justified by productivity and operational advantages. Despite the benefits that may arise from such systems, they are not exempt from controversy. Until recently, there were no rules in the USA concerning the use of AI or other autonomous systems in disciplinary procedures. However, reports of AI being used in abusive ways to discipline and fire employees led to discussions aimed at limiting the role of such systems in disciplinary procedures and in the monitoring of employees. In the State of California, the legislature approved Assembly Bill 701 on warehouse distribution centres. This Bill, despite not naming any company, has a clear target: Amazon.com, Inc. Amazon has given machines unparalleled control over workers and is accused of using the technology to impose unreasonable demands on them. In defence of warehouse distribution centre workers, the Bill forbids taking adverse employment action against employees for failure to meet unlawful or unrealistic productivity quotas measured by AI.

Despite this first insightful attempt to regulate AI systems in workplace discipline, one must bear in mind that labour laws are generally more liberal in the USA than in Europe. In Europe, fundamental and social rights play an important role in protecting employees against abusive sanctions by their employers. Generally speaking, we can identify three main constraints on the use of and reliance on AI systems for disciplinary powers in the workplace across the European Union: just cause, data protection, and legal personality.

Just cause and the principle of proportionality

Just cause is the gold standard employers must adhere to. Every action arising from a disciplinary procedure must be solidly grounded in principles of justice and fairness, avoiding abusive sanctions that may lead to fines and compensation for damages.

The EU Member States have some degree of autonomy regarding the regulation of disciplinary procedures and termination of employment. However, EU Member States have a common ground when it comes to termination of an employment contract by the employer – in order to be lawful, the decision needs to be reasonable, proportionate, and fair.

A disciplinary procedure brought against an employee over an apparently insignificant or dubious conflict and ending in dismissal may or may not be proportionate, depending on how the events unfolded. This means that interpretation is, as in any human decision, decisive when bringing a disciplinary action against an employee. This is a major obstacle to implementing autonomous AI systems capable of enforcing disciplinary powers: computers are great at dealing with objective, analytical realities, but they fall short when asked to analyse subjectivity or intrinsic ideas. Can we program an AI system that interprets reality and applies a fair and adequate sanction? As long as there are no certainties or more concrete answers, there will hardly be an opening to allow the use of computer systems for this purpose.

Data Protection

“In the EU, human dignity is recognised as an absolute fundamental right. In this notion of dignity, privacy or the right to a private life, to be autonomous, in control of information about yourself, to be let alone, plays a pivotal role. Privacy is not only an individual right but also a social value”. It was in this context that the General Data Protection Regulation (GDPR) was approved by the European Parliament, becoming applicable across the EU on 25 May 2018.

Data protection is a fundamental right of data subjects that needs to be safeguarded. Data is a necessary part of any disciplinary action and may be present at every stage of the procedure: employers need to acquire information to start the procedure, they may encounter the employee’s personal and potentially sensitive data, and they may require additional information to establish whether the disciplinary action is fair. This means that, like the just cause standard, the fundamental right to data protection must be taken into account when initiating a disciplinary procedure against an employee.

Article 22 paragraph 1 of the GDPR prohibits AI or similar technology from being the sole decision-maker in actions that produce legal effects concerning individuals. Article 22 paragraph 2 sets out some exceptions to this general rule:

“Paragraph 1 shall not apply if the decision:

a. is necessary for entering into, or performance of, a contract between the data subject and a data controller;

b. is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

c. is based on the data subject’s explicit consent.”

As we have seen in the previous section, current AI uses do not seem to guarantee that the legitimate interests of data subjects are sufficiently safeguarded. The absence of human judgement calls for stricter control of these technologies, as they tend to be extremely analytical and objective, which is not always desirable when a proportionate decision with legal implications needs to be made.

Legal personality

In October 2020, the European Parliament issued three Resolutions (2020/2012, 2020/2014 and 2020/2015) on the ethical and legal aspects of AI software systems. These Resolutions acknowledge that AI will bring significant benefits for humans in many sectors, including the labour market. However, none of these Resolutions considered the possibility of granting legal personality to AI systems.

From a theoretical point of view, one could think of adjusting statutory laws to accommodate such a development and grant legal personality to AI, thus providing the capacity to hold legal rights and duties within a given legal system. A legal regime granting AI rights and obligations akin to legal personality would be a huge step forward for the legal autonomy of AI systems. However, the lack of legal personality means that liability for damages caused by AI must be borne by natural persons, legal persons, or organisational units with legal capacity, which hold the right to bring disciplinary actions and the duty to do so properly. For a more in-depth look at AI and civil liability, see Inês Brandão’s insight on this topic.

As AI cannot be held legally responsible for disciplinary procedures and the termination of employment contracts, it cannot be considered a legally autonomous entity. Furthermore, labour laws across Europe (for example, in Portugal, France, and Germany) specifically stipulate that disciplinary power must be exercised by the employer or the employee’s hierarchical superior.

As shown above, AI is not yet developed enough to ensure that it can analyse events in the workplace and enforce employers’ disciplinary powers in ethical, fair, and proportionate ways. While this technology continues to evolve, we cannot risk letting arbitrary decisions made by autonomous systems have profound legal impacts on the lives of a company’s employees. As such, AI should not be given the ability, de facto or de jure, to exercise disciplinary power autonomously in the EU. What can at most be discussed is the use of AI by employers as a tool to support their decisions in disciplinary procedures.
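By way of illustration, this "decision-support, not decision-maker" role can be sketched in code. The following is a minimal, hypothetical sketch in Python: the `Incident` fields, the severity threshold, and the triage outcomes are assumptions for illustration only, not drawn from any real system or legal requirement. The key design choice is that the system may only flag an incident for human review; it never imposes a sanction itself, consistent with the Article 22 concerns discussed above.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A workplace event flagged by an upstream monitoring model (hypothetical)."""
    employee_id: str
    description: str
    ai_severity_score: float  # assumed to be produced by some AI model, in [0, 1]

def triage(incident: Incident, threshold: float = 0.7) -> str:
    """Decision-support only: the system can queue an incident for human
    review, but any adverse employment action requires a human decision."""
    if incident.ai_severity_score >= threshold:
        # A manager reviews the full context and decides; the AI output is advisory.
        return "queue_for_human_review"
    return "no_action"
```

In this sketch the AI never produces a "sanction" outcome at all: the only possible automated results are "no action" or "a human must look at this", keeping the legally effective decision with a natural person.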

The Insights published here reproduce the work developed for this purpose by their respective authors and are therefore kept in the original language in which they were written. Responsibility for the opinions expressed in the article lies exclusively with the author, so its publication does not constitute an endorsement by WhatNext.Law or its affiliated entities. See our Terms of Use for more information.
