As Artificial Intelligence (AI) agents gain increasing autonomy, their actions become proportionally more likely to cause damage, which raises a question of liability: who should be liable for the damage caused by AI?
Current liability framework
In the European Union (EU), the current liability framework consists of the national liability rules of each Member State and the Product Liability Directive 85/374/EEC (the Directive). At both levels of legislation, liability claims are brought against a particular person, usually the operator, owner, or user, on the basis of fault or strict liability. However, most of these rules were written before autonomous AI-agents came into play, which calls into question their adequacy to deal with the challenges this technology poses.
Various approaches have been suggested to address this imminent problem, such as the reform and/or revision of the Directive and the adoption of ad-hoc legislation at the European level. Nevertheless, in its 2017 Resolution on Civil Law Rules on Robotics, the European Parliament (EP) suggested that the European Commission should assess the impact of creating “a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause.” More recently, in its 2020 Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence, the EP abandoned its initial idea of electronic personality. Nonetheless, its original statement had already sparked the debate over whether AI-agents should be considered legal persons.
In fact, since the autonomy of AI is reaching a point of opacity at which it becomes very difficult or costly to identify the human source of the damage (e.g., the code, the piece of data, or the industrial defect), it is reasonable to ask whether the very system that caused the harm should be liable.
The discussion
Most proponents of granting AI-agents legal personality base this claim on the legal personhood of corporations. Indisputably, corporations are considered legal subjects in all modern legal systems. Such legal personality allows them to bear civil liability for damage caused by their own activity. Yet, the acknowledgment of a corporation’s legal personality does not require the legal recognition of a moral, ethical, or human-like status of that entity. One can therefore conclude that the legal personality granted to corporations is a legal fiction created by law on functional grounds (e.g., to separate assets and limit liability), as corporations are not natural persons, to whom legal personality is naturally conferred. Likewise, an AI-agent is not a natural person: its capabilities, despite its increasing autonomy, were designed by humans. The argument posed here is that, similarly, the legal personality of AI-agents could be established by provisions of law on functional grounds, such as liability allocation.
Based on this claim, one of the main questions to be answered is whether AI itself would be able to hold adequate financial resources to remedy the damage it caused. Unlike corporations, whose activity can generate revenue after the shareholders’ initial contribution, the current technology does not allow AI-agents to acquire capital on their own. Even if it did, such a phenomenon would raise several legislative challenges related to the legal capacity of AI. One solution to this problem is for all the parties involved in the production and deployment of an AI-agent, such as the product designers, software developers, manufacturers, and even its owners and users, to endow it with capital, as a form of ‘compulsory insurance’. The victim could then direct their claim against a single person, even if a fictional one.
As a counterargument, the Expert Group on Liability and New Technologies appointed by the European Commission (EG), which in 2019 rejected the need to adopt the notion of electronic personhood, stated that such a solution “would amount to putting a cap on liability and – as experience with corporations has shown – subsequent attempts to circumvent such restrictions by pursuing claims against natural or legal persons to whom electronic persons can be attributed, effectively ‘piercing the electronic veil’.”
Moreover, as illustrated by the EG, the legal personhood of corporations is not absolute, as a corporation’s separate personality may indeed be disregarded for liability purposes by lifting the corporate veil. As Arthur W. Machen, Jr. put it: “individuals, not corporations, are the real subject of the rights conferred on corporations”. There is always a natural person behind a corporation, and the rights and obligations of the latter actually belong to the former. The actions of an AI-agent with legal personality, on the contrary, cannot always be traced back to the actions of a natural person. For that reason, an AI-agent would have to be deemed an independent actor, not a mere legal fiction.
Final remarks
Although it would still be technically possible to grant AI-agents legal personality, as the law is flexible enough to allow for such a solution, the current state of the technology does not make it an advantageous one.
Since an AI-agent is currently unable to gather assets, someone (or a collective) would have to take out insurance on it or provide it with the means to compensate damage. Hence, the liability burden would still fall on human persons, even where the actions of the AI-agent are not traceable back to human acts. Moreover, as noted in the EG 2019 report, “if such assets did not suffice to fully compensate the victims, they would have a strong incentive to seek compensation from the person benefiting from the operation of the system instead”. On the other hand, “if the AI’s assets were sufficient to pay the same level of compensation as under existing liability and insurance regimes, there would not be any cause for discussion – but, in that case, giving AI legal personality would be a mere formality and not really change the situation”.
Thus, the conclusion reached is that granting AI-agents legal personality would be ineffective, as there are not enough functional grounds to justify it. In the future, the development of this technology may warrant such a radical solution. For now, however, civil liability should still be borne by natural persons or corporations, preferably on a strict liability basis, so as to avoid the difficulties of attributing fault. Nevertheless, changes to the current liability framework ought to be promptly adopted to accommodate these emerging technologies.