Autonomous Algorithmic Collusion: are we prepared?

Algorithmic collusion will raise significant challenges for the Cities of the Future. Is the EU's current Competition Policy suitable to tackle those challenges?

Introduction

The increasing number of pricing decisions delegated to algorithms, coupled with the market characteristics of the digital economy and the capabilities of reinforcement learning algorithms, creates a high likelihood of autonomous algorithmic collusion – algorithms achieving a collusive outcome without being instructed or programmed to do so.

Studies have shown[1] that reinforcement learning algorithms, and in particular Q-learning algorithms (a type of reinforcement learning), are able to learn collusive strategies when programmed merely to pursue the optimal strategy, namely to maximise profits. These algorithms first go through a learning phase: they are rewarded when they choose a successful action in line with their goal, making them more likely to choose a similar action in the future, and punished when they choose a suboptimal action (a reward-punishment scheme). Significantly, studies testing algorithms' behaviour under different market conditions show that algorithms choose collusion as the strategy that is optimal for all, achieving and maintaining the collusive price through price signalling and quickly punishing deviations. In general, the higher transparency of digital markets, algorithms' monitoring abilities and fast retaliation, more frequent interactions and the characteristics of these algorithms, particularly their reward-punishment scheme, all foster algorithmic collusion, making it more stable and easier to achieve. For instance, algorithms reduce the incentive to cheat, which usually destabilises collusive agreements, by punishing deviations and adapting their prices so quickly that deviating ceases to be advantageous for competitors. Moreover, algorithmic collusion is expected to become a widespread phenomenon, occurring even in markets seen as competitive, as algorithms become able to overcome some of these “obstacles”.
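
To make this concrete, below is a minimal sketch of a Q-learning pricing agent of the general kind tested in these studies: a stylised duopoly in which each agent observes last period's prices, picks a price from a small grid, and treats its profit as the reward. The price grid, demand rule and parameter values are our own illustrative assumptions, not the setup of the cited papers.

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]            # discrete grid of possible prices (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate

def profit(own, rival):
    """Toy demand rule: the cheaper seller serves the market; ties split it."""
    if own < rival:
        return float(own)
    if own == rival:
        return own / 2
    return 0.0

def choose(q, state):
    # Explore occasionally; otherwise pick the action with the highest Q-value.
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q[(state, p)])

def simulate(rounds=100_000):
    q1, q2 = defaultdict(float), defaultdict(float)
    state = (random.choice(PRICES), random.choice(PRICES))  # last observed prices
    for _ in range(rounds):
        p1, p2 = choose(q1, state), choose(q2, state)
        nxt = (p1, p2)
        for q, own, r in ((q1, p1, profit(p1, p2)), (q2, p2, profit(p2, p1))):
            best_next = max(q[(nxt, a)] for a in PRICES)
            # Reward-punishment update: profitable actions are reinforced,
            # unprofitable ones gradually fade from the policy.
            q[(state, own)] += ALPHA * (r + GAMMA * best_next - q[(state, own)])
        state = nxt
    return state

if __name__ == "__main__":
    print("prices after learning:", simulate())
```

In the cited experiments, agents of this general kind converge on supra-competitive prices and punish deviations; whether this toy version does depends on the parameters and the number of rounds.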

Therefore, we believe that algorithmic collusion will be a real and challenging problem for the competitiveness of the markets of the Cities of the Future, which raises the question: Are current antitrust rules, in particular Article 101 of the Treaty on the Functioning of the European Union (TFEU), capable of addressing algorithmic collusion?

Algorithmic collusion as an infringement of Article 101 TFEU

Article 101 TFEU and Regulation 1/2003 were designed to deal with human facilitation of coordination. More specifically, Article 101 TFEU prohibits agreements between undertakings, decisions by associations of undertakings and concerted practices. Furthermore, the prohibition established in that rule focuses on the means used to achieve coordination (some form of communication) rather than on the end result – the collusive outcome – because finding an infringement based on the end result is difficult: courts and competition authorities would have to infer the underlying strategies behind, for example, higher collusive prices.

In contrast, when algorithms learn to collude autonomously, there is no agreement and no form of communication as we are used to observing between humans. At first sight, this resembles tacit collusion, which is generally considered by scholars, and treated by the Court of Justice of the European Union (CJEU), as legal, unless collusion is the only plausible explanation for the undertakings' behaviour (a condition which is hardly ever met).

Nevertheless, in our view, algorithmic collusion could amount to an infringement of Article 101 TFEU through the concept of concerted practices. When algorithms collude, they engage in price signalling, which is essentially a public communication of their strategies to other algorithms. Through repeated interactions, the raising and lowering of prices sends a message to other algorithms about the pricing strategy they should follow and what will happen if they do not: if they do not follow the higher collusive price, prices will be lowered as a punishment. Hence, high and similar prices are not due to mere rational adaptation; the algorithms are actively trying to influence each other's prices to achieve the price they see as optimal for all. On this view, if such behaviour is observed, it could be possible to establish an infringement of Article 101 TFEU for public disclosure of information, namely information relating to strategy, as the European Commission has recognised in its Guidelines on Horizontal Cooperation Agreements. Will this approach be feasible in practice, considering both the CJEU's perspective and future developments in Artificial Intelligence (AI)?
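
Before turning to that question, the signalling mechanism just described can be made concrete with a stylised "signal and punish" pricing rule: stay at the high collusive price while the rival cooperates, and answer any undercut with a spell of competitive pricing. The strategy, prices and punishment length below are illustrative assumptions, not a description of any real pricing software.

```python
# Stylised "signal and punish" rule: follow the high collusive price and answer
# any deviation with a spell of low competitive pricing. All values are
# illustrative assumptions.

COLLUSIVE, COMPETITIVE = 5.0, 1.0
PUNISH_ROUNDS = 3  # length of the punishment spell after a deviation

def next_price(rival_last: float, punish_left: int) -> tuple[float, int]:
    """Return (our next price, punishment rounds still to serve)."""
    if punish_left > 0:
        return COMPETITIVE, punish_left - 1      # keep punishing
    if rival_last < COLLUSIVE:
        return COMPETITIVE, PUNISH_ROUNDS - 1    # deviation detected: punish
    return COLLUSIVE, 0                          # rival cooperated: stay high

# The rival undercuts once (round 3), triggering a punishment spell, after
# which both return to the collusive price.
rival_prices = [5.0, 5.0, 4.0, 5.0, 5.0, 5.0, 5.0]
punish = 0
for rival in rival_prices:
    ours, punish = next_price(rival, punish)
    print(f"rival={rival} -> ours={ours}")
```

The message to a rival is exactly the one described above: follow the high price and it persists; undercut and prices fall, so that deviating stops being profitable.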

Who should be held liable for algorithmic collusion?

Finding a liable party for autonomous algorithmic collusion could be problematic, given that the undertakings using the algorithms did not instruct them to collude and the programmer did not program them to collude. Even so, using the criteria from CJEU case-law, it is possible to hold both the undertakings using the algorithm and the algorithm's developer or provider liable for the algorithm's actions.

-Undertakings using an algorithm can be held liable for its anticompetitive behaviour if the algorithm is considered to be in a situation comparable to that of an employee[2] or to that of an independent service provider. Although this depends on a case-by-case analysis, the relevant criteria for one of these categories will very often be met.

-The software developer/provider can be held liable as a facilitator, if the criteria established in AC-Treuhand are met.

Nevertheless, it is important to keep in mind that, with future developments in AI, algorithms will enjoy more autonomy, weakening the link between algorithms, undertakings and developers. As such, establishing liability in these terms may no longer be possible in the Cities of the Future.

Are the remedies available under Regulation 1/2003 effective to tackle algorithmic collusion?

Regulation 1/2003 allows for the imposition of fines and of behavioural and/or structural remedies. Although these remedies were not designed to deal with autonomous algorithmic collusion, they may nevertheless be effective in tackling it:

-Fines: The possibility of imposing fines can act as an effective deterrent against algorithmic collusion. If the algorithm factors in that the imposition of a fine is likely, the estimated benefits of choosing a collusive action are no longer as appealing (see the simple illustration after this list).

-Behavioural remedies: Several behavioural remedies have been suggested for algorithmic collusion.

Injunctions – The algorithm's behaviour can be changed through injunctions, coding it to play “competitive instead of cooperative games”[3]. This would be a good solution where it is possible to separate the algorithm's characteristics that lead to collusion from those that generate efficiencies; however, this could be a hard task for authorities because of the black-box problem – the difficulty of discerning how the algorithm actually reaches its decisions.

Price freezes – If the authorities impose slower and less frequent price changes, price signalling and punishments become less effective and more disadvantageous, decreasing the likelihood of collusion.

-Structural remedies:

Divestitures – Through divestitures, authorities could rearrange market conditions that are prone to algorithmic collusion. While algorithms would likely still be able to collude in competitive markets, creating asymmetric competitors would make collusion harder to achieve by making it more difficult for algorithms to find a focal price.[4] However, divestitures are very costly and hard to design as remedies.
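
Returning to the fines point above, a back-of-the-envelope calculation illustrates how an expected fine can tip an algorithm's calculus. All figures are invented for illustration.

```python
# Back-of-the-envelope deterrence calculus for the fines point above.
# All figures are invented for illustration.

collusive_profit   = 100.0  # per-period profit at the collusive price
competitive_profit = 60.0   # per-period profit at the competitive price
detection_prob     = 0.3    # assumed probability of detection and fining
fine               = 150.0  # assumed fine if detected

expected_collusive = collusive_profit - detection_prob * fine  # 100 - 45 = 55
print(expected_collusive < competitive_profit)  # True: collusion no longer pays
```

On these numbers, an algorithm that internalises the expected fine prefers the competitive action; with a lower detection probability or a smaller fine, the collusive action would still win, which is why deterrence depends on both.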

A possible resurrection of the New Competition Tool?

In 2020, the European Commission (EC) presented an initiative, the New Competition Tool (NCT), in an Inception Impact Assessment, to ensure that competition rules were able to address the challenges brought by the modern economy, and particularly by digital markets.

This initiative aimed to address structural competition problems and structural lack of competition, including algorithmic collusion; however, it was later abandoned. Although the initiative was not developed any further, and the Inception Impact Assessment is the only guidance available on how the EC envisioned the idea, the various Expert Reports on the NCT provide useful hints on how this toolkit would have worked.

The EC presented four policy options in the context of the NCT. Two of these (options 3 and 4) addressed algorithmic collusion through market structure-based competition tools, the only difference being that option 3 had a horizontal scope (like other competition rules) while option 4 had a limited scope, allowing intervention only in specific sectors. None of the options involved the finding of an infringement: behavioural or structural remedies could be imposed without fines or damages claims. Moreover, it would be possible to impose market-wide remedies, which are more effective in tackling algorithmic collusion (for instance, price freezes would be more effective in preventing future collusion if all market players were subject to them, and not only those found guilty of an infringement).

A toolkit like the NCT would be an excellent complement to Article 101 TFEU, allowing the authorities to remedy harmful situations where it would not be possible (or desirable) to find an infringement under that provision, or where market-wide remedies would be preferable.

The question is: will the EC resurrect the New Competition Tool to deal with algorithmic collusion?

Final Remarks

Overall, the current and expected increase in the delegation of pricing decisions to reinforcement learning algorithms, coupled with certain market conditions such as the transparency of digital markets, increases the likelihood of algorithmic collusion. If reinforcement learning algorithms behave in real life as they have in studies, we believe that current competition rules can tackle some cases of algorithmic collusion, even if they would benefit from a complement. However, there are no studies testing algorithmic collusion under real-life conditions, and we cannot anticipate how AI will have evolved by the time the use of pricing algorithms becomes widespread in the markets. Will current competition rules still be suitable to tackle future, real-life algorithmic collusion?

Furthermore, considering that a toolkit like the NCT would be a good complement to current competition rules, will there be developments soon, or will the EC propose a different framework? Competition authorities have adopted a “wait and see” and rather sceptical view of this topic, but is that wise? Perhaps it would be more prudent to anticipate future developments and have a suitable answer to algorithmic collusion ready for when it becomes a reality in the Cities of the Future.


[1] Timo Klein, ‘Autonomous Algorithmic Collusion: Q-learning under Sequential Pricing’ (2021) 52 RAND Journal of Economics <https://onlinelibrary.wiley.com/doi/10.1111/1756-2171.12383> accessed 19 October 2022; Emilio Calvano and others, ‘Algorithmic Pricing: What Implications for Competition Policy’ (2019) 55 Review of Industrial Organization <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3209781> accessed 19 October 2022; Emilio Calvano and others, ‘Artificial Intelligence, Algorithmic Pricing and Collusion’ (2020) 110 American Economic Review <https://www.aeaweb.org/articles?id=10.1257/aer.20190623> accessed 19 October 2022; Emilio Calvano and others, ‘Algorithmic Collusion with Imperfect Monitoring’ (2021) 79 International Journal of Industrial Organization; Karsten T Hansen and others, ‘Algorithmic Collusion: Supra-Competitive Prices via Independent Algorithms’ (2021) 40 Marketing Science <https://doi.org/10.1287/mksc.2020.1276> accessed 19 October 2022.

[2] Case C-22/98 Becu and others [1999] ECR I-05665; Joined Cases 100-103/80 Musique Diffusion Française [1983] ECR 01825

[3] Francisco Beneke and Mark-Oliver Mackenrodt, ‘Remedies for Algorithmic Tacit Collusion’ (2021) Journal of Antitrust Enforcement 152, 170.

[4] Francisco Beneke and Mark-Oliver Mackenrodt (n 3) 169.
