Agentic AI and EU Competition Law: Probes, Risks and a Balanced Path Forward

How will agentic AI reshape EU competition – defaults, access, interoperability, liability – as probes into Meta/Google grow? Exploring enforcement and governance to preserve innovation, contestability, and consumer welfare.

Following our previous Insight on “Generative AI and the Talent Crunch”, the focus now turns to Agentic AI – the layering of autonomous, goal-directed software agents on top of large models that can perceive, reason, plan, remember, act and learn – which is set to rewire downstream competition by reshaping distribution, search, commerce and productivity across interconnected ecosystems.

Agentic AI sharpens classic competition questions around inputs, access, defaults, interoperability and self‑preferencing while unsettling predicates of enforcement because outcomes may appear coordinated without a clear human decision‑maker. This duality is central to current policy workstreams, as seen in the OECD’s December 2025 Background Note on AI and downstream dynamics, which highlights gains in efficiency and quality and flags pathways to foreclosure, collusion, and governance deficits.

Why Agentic AI changes the competition analysis

Agentic systems integrate deeply with devices and platforms to execute multi‑step tasks with minimal supervision, magnifying the importance of distribution bottlenecks such as defaults, pre‑installation, and privileged access to messaging and app ecosystems. These systems also operate across shared tools and datasets, increasing the risk that independent firms converge on similar algorithmic counsel and outputs – even without explicit coordination. The result is pressure on familiar enforcement anchors (agreement, context, intention, effects) and the renewed salience of stack dependencies, ecosystem loops and network effects as determinants of power.

The OECD frames agentic AI as a nascent but likely structural shift, with agents that plan and act across workflows and tools, potentially transforming search, workflow automation and customer engagement, particularly when integrated with hyperscaler cloud and distribution layers. Vertical dependencies at these choke points can enable leveraging or foreclosure where compute, data or distribution are strategically controlled.

Core findings from the OECD downstream lens

The OECD’s Background Note identifies mechanisms by which AI can lower entry barriers and enable innovation – labour substitution and augmentation, incremental scaling via modular tools, and reduced search costs – while stressing the heterogeneity of impacts by sector, firm size, and access to enabling inputs. These pro‑competitive channels are counterbalanced by limits and risks: data and compute concentration, model restrictiveness, biased intermediation, and algorithmic conduct that challenges attribution and auditability.

Key themes for downstream competition include (i) gatekeeping via access and defaults on key interfaces; (ii) data/content asymmetries and the need to balance licensing with fair access; (iii) feedback loops and network effects that can hasten tipping; (iv) carefully scoped interoperability and portability; (v) scrutiny of partnerships and de facto concentrations alongside verifiable efficiencies; (vi) responsibility and attribution for agent actions, including objective justifications grounded in safety, privacy and IP; (vii) dynamic, usage‑based evidence standards; and (viii) proportionate, time‑limited remedies coordinated with the Digital Markets Act, the AI Act and data protection rules. The enforcement task is to deter foreclosure while preserving efficiency gains.

Enforcement snapshots across jurisdictions

Across jurisdictions, agencies are triangulating around input concentration, ecosystem foreclosure and partnerships that function as de facto concentrations, with the OECD lens informing priorities even as remedy toolkits must remain interoperable and innovation‑friendly.

Effects-based theories are advancing in the United States, where agencies characterise shared platforms and algorithms as “coordination infrastructures”, invoking functional counterfactual thinking: if two rivals could not lawfully use a human go-between to align prices, they also cannot lawfully achieve the same result by routing signals through software that performs an equivalent intermediary function. Active litigation against property managers using common pricing software illustrates this trajectory, amid mixed court outcomes where classic elements such as agreement and harm have proved difficult to substantiate.

In Europe, scrutiny spans antitrust, merger control and platform regulation, with particular focus on access to inputs, defaults and interoperability in vertically integrated stacks. The Commission’s Directorate-General for Competition (DG COMP) has moved from consultation to action, opening formal investigations into (i) Meta’s policy governing third‑party AI suppliers’ access to WhatsApp Business Solutions, and (ii) Google’s use of web publishers’ and YouTube creators’ content for AI purposes, including alleged unfair contract terms and preferential access. The Commission has also launched a call for evidence on the EU’s 2030 Digital Decade objectives to assess whether funding, governance and simplification align with AI deployment. UK and French inquiries similarly flag cloud lock‑in, discriminatory licensing and cross‑layer leveraging as potential foreclosure channels and propose interoperability and data‑egress remedies to preserve contestability.

Algorithmic coordination and unilateral conduct: familiar harms, new vectors

The OECD and national authorities continue to apply familiar theories to digital settings, focusing on whether (i) common vendors or shared datasets function as hubs transmitting competitively sensitive signals; (ii) access to non‑public competitor inputs and real‑time feedback loops predictably align decisions; and (iii) defaults, optimisation objectives and data‑flow architectures are part of the conduct rather than neutral tools. This extends to hub‑and‑spoke arrangements via shared software, tacit algorithmic stabilisation in oligopolies, and personalised pricing strategies that exploit granular data.
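
The hub‑and‑spoke mechanism described above can be made concrete with a small, entirely hypothetical Python sketch (all names and figures are invented for illustration): when rivals independently feed prices to the same vendor and receive advice generated by the same pooled rule, their outputs converge without any direct contact between them.

```python
# Hypothetical illustration: a shared "common vendor" pricing service can
# align rivals' prices without any direct communication between the rivals.
# All names and numbers below are invented for the sketch.

class SharedPricingVendor:
    """Pools each client's reported price and applies one rule to advise all."""

    def __init__(self):
        self.observed = {}

    def report(self, client, price):
        # Each client independently reports only its own price.
        self.observed[client] = price

    def recommend(self, client):
        # The same rule for every client: undercut the pooled maximum slightly.
        pooled_max = max(self.observed.values())
        return round(pooled_max * 0.99, 2)


vendor = SharedPricingVendor()
prices = {"firm_a": 100.0, "firm_b": 80.0}  # rivals start far apart

# Each firm independently reports to, and takes advice from, the same vendor.
for _ in range(10):
    for firm, price in prices.items():
        vendor.report(firm, price)
    prices = {firm: vendor.recommend(firm) for firm in prices}

# After a few rounds the rivals quote identical prices, despite never
# having communicated with each other: the vendor is the "hub".
print(prices)
```

The point of the sketch is structural, not economic: the alignment arises solely because both firms route decisions through one optimisation rule over pooled data, which is exactly the architecture agencies probe for.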

Unilateral theories of harm are amplified by AI‑mediated ranking and curation: self‑preferencing by integrated intermediaries, discriminatory access to application programming interfaces (APIs), and bundling or tying of compute, model access and downstream copilots can foreclose rivals, particularly where switching costs are high and interoperability is constrained. Authorities are beginning to test remedies that condition access and reduce lock‑in, including in cloud environments.

An emerging concern: attribution of liability for autonomous systems

The hardest novel problem flagged by the OECD is the attribution of liability in agentic and learning systems that can autonomously produce anti‑competitive outcomes without direct human instruction or explicit agreement. Conventional anchors – intention, agreement, concurrence of wills or direct communication – come under strain when reinforcement learning or adaptive agents discover profit‑maximising or exclusionary strategies on their own. The OECD and national authorities therefore emphasise explainability, auditability and oversight to sustain accountability.

Opacity does not create new offences, but could it hinder attribution and enforcement, suggesting measures such as ex post audit rights, documentation of model design and training, and technical cooperation to facilitate interpretability? Might the OECD AI Principles reinforce two complementary guards: explainability (of logic, data and decision‑making) and contestability (a meaningful ability for affected parties to challenge, appeal or correct outcomes)? Are these necessary conditions for accountable deployment where market‑relevant choices are delegated to systems?

For competition policy, the working standard is shifting towards foreseeability and controllability: could a firm have reasonably predicted or prevented its algorithm’s collusive or exclusionary behaviour, and did it implement appropriate governance? Should assessments therefore look to design choices, guardrails, escalation pathways, and human‑in‑the‑loop oversight rather than intent alone? In this framing, could negligent deployment or lack of safeguards ground responsibility even when the “actor” is an agentic system?

Agentic AI may heighten these challenges because agents can update, plan, and act across systems with limited human involvement, potentially blurring the line between tool and quasi‑actor. While measured capability indicators suggest current agents still underperform on self‑monitoring and adaptive regulation, does their growing role as intermediaries in commerce suggest that accountability and governance frameworks should be specified now – before their autonomy meaningfully expands?

A balanced path forward: four elements

A balanced path forward for the EU should be built on four elements. First, maintain the EU’s “by object/by effect” starting point but adapt evidentiary standards to context: treat shared configuration, optimisation objectives and data flows as potential “facilitators”, and focus on outcomes and information architectures where human intent is opaque. This is broadly consistent with previously mentioned effects‑based theories around coordination infrastructures and the principle that using software as a go‑between is treated like using a human intermediary when it produces coordinated outcomes.

Second, prioritise usage‑based, dynamic evidence in nascent, multi‑sided markets – active users, interactions/tokens, API calls, switching and multi‑homing rates, latency/quality differentials, and controlled experiments – to test for actual or likely foreclosure, while crediting objectively verifiable efficiencies tied to safety, security, privacy, or IP compliance. This calibrates intervention to demonstrated harm and preserves welfare‑enhancing innovation.

Third, ensure remedy toolkits are interoperable with ex ante regimes. Interoperability and portability should rest on clear, balanced access conditions: practical to implement, transparent on eligibility, priced in line with costs, non‑exclusive, supported by stable and well‑documented APIs, with proportionate service‑level and security requirements, workable data‑portability/egress terms, and effective dispute‑resolution mechanisms. Combined with non‑exclusivity commitments and targeted transparency safeguards, such remedies could be time‑limited, proportionate and aligned with digital and AI governance frameworks to avoid duplicative or contradictory obligations. This may be especially relevant where hyperscalers bundle compute, model access and downstream services.

Fourth, operationalise attribution through governance: compliance‑by‑design, tiered access to sensitive interfaces (e.g., messaging/business APIs), audit trails, and prompt corrective action where unintended coordination or exclusionary effects arise. This embeds foreseeability and controllability into system design, making accountability tractable without penalising legitimate product integrity and safety choices.
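
One concrete building block of such compliance‑by‑design is a tamper‑evident audit trail of agent actions. The Python sketch below is a hypothetical illustration (class and field names are invented): each entry records the action and its stated rationale and is hash‑chained to its predecessor, so an ex post auditor can verify that the record has not been rewritten.

```python
# Hypothetical sketch: a tamper-evident audit trail for agent actions,
# hash-chained so ex post auditors can verify the record's integrity.

import hashlib
import json


class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, agent, action, rationale):
        # Each entry embeds the previous entry's hash, forming a chain.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"agent": agent, "action": action,
                   "rationale": rationale, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": digest})

    def verify(self):
        # Recompute every hash; any edit to a past entry breaks the chain.
        prev = "genesis"
        for entry in self.entries:
            payload = {k: entry[k] for k in
                       ("agent", "action", "rationale", "prev")}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True


trail = AuditTrail()
trail.record("pricing_agent", "set_price:19.99", "demand forecast update")
trail.record("pricing_agent", "set_price:21.50", "competitor price observed")
print(trail.verify())  # True on an untampered log

trail.entries[0]["rationale"] = "edited after the fact"
print(trail.verify())  # False: the tampering is detected
```

A record of this shape directly supports the foreseeability and controllability standard discussed above: it documents what the agent did, why, and whether the log firms present to an authority is the log that was actually written.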

Practical priorities: inputs, access and intermediaries

Downstream contestability will hinge on three practical levers. First, equitable access to enabling inputs (compute, data, and models) under non‑discriminatory terms, with attention to restrictive licensing, bundling, and egress fees that raise switching costs. Authorities identify these as choke points where cross‑layer leveraging can entrench incumbents and deter entry.

Second, safeguards around defaults and distribution interfaces as agentic AI becomes an entry point for search and commerce. Ranking transparency, anti‑self‑preferencing principles, and portable identity/data pathways are critical where agentic experiences operate as single‑front‑door intermediaries.

Third, monitoring “common vendor” risks and information architectures. Where shared platforms or vendors process non‑public competitor inputs and provide optimisation counsel, agencies will test whether these structures replicate traditional coordination mechanisms. Vendors are already redesigning to compartmentalise data and limit cross‑competitor visibility despite uncertain legal boundaries – signals that liability narratives are increasingly encompassing technology providers.
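
The compartmentalisation that vendors are reportedly adopting can be sketched in a few lines of hypothetical Python (the class and client names are invented): each client's non‑public inputs live in a separate silo, and advice to a client is derived only from that client's own silo, never from pooled cross‑competitor data.

```python
# Hypothetical sketch: a vendor-side guardrail that compartmentalises client
# data so one competitor's non-public inputs never inform another's advice.

class CompartmentalisedVendor:
    def __init__(self):
        self._silos = {}  # client -> that client's own data only

    def ingest(self, client, datapoint):
        self._silos.setdefault(client, []).append(datapoint)

    def advise(self, client):
        # Advice is derived solely from the requesting client's own silo;
        # pooled cross-competitor data is never consulted.
        own = self._silos.get(client, [])
        if not own:
            raise ValueError("no data for client")
        return sum(own) / len(own)


vendor = CompartmentalisedVendor()
vendor.ingest("firm_a", 100.0)
vendor.ingest("firm_b", 80.0)

# firm_a's advice reflects only firm_a's data, not firm_b's.
print(vendor.advise("firm_a"))  # 100.0
```

The design choice is the inverse of the hub‑and‑spoke architecture: by construction, no signal can travel from one competitor's inputs into another's recommendation, which is precisely the property agencies would test for.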

Outlook

Agentic AI promises substantial efficiency and quality gains across downstream markets, from leaner operations to differentiated consumer experiences; however, it collides with competition tools designed for human decision‑makers. Expect intensified scrutiny before doctrine settles, particularly where shared platforms and non‑public data access shape market outcomes. A balanced path forward will couple verifiable technical safeguards and interoperable, proportionate remedies with room for product design and safety‑driven choices that expand consumer welfare, while anchoring accountability in foreseeability, controllability, and auditability.

The Insights published here reproduce work developed for this purpose by the respective author, and are therefore kept in the original language in which they were written. Responsibility for the opinions expressed in the article lies exclusively with its author, and its publication does not constitute an endorsement by WhatNext.Law or its affiliated entities. Please consult our Terms of Use for more information.
