Emotional Data in the Metaverse – A legal look at Virtual Reality

As the Metaverse gains momentum, including from a legal research perspective, this Insight outlines the core data protection challenges raised by Virtual Reality and Affective Computing.

Several layers of social interaction are soon expected to occur within the context of Virtual Reality (VR), a technological development currently epitomised by the Metaverse. The Metaverse is a virtual space projected to go far beyond mere meeting rooms and videocalls, possibly changing our cities and landscapes. However, the creation of realistic avatars to replicate and enhance social interactions within the Metaverse requires extensive processing of specific physical and physiological information, from which users’ emotional data can be inferred.

This processing becomes a problem if users are not made aware of it, as it prevents them from exercising their rights against abusive and manipulative uses of their data. Given that there is no consensus on whether emotional data qualifies as personal data, the risk is that Metaverse companies can, as matters stand, bypass several transparency obligations when processing this specific type of data.

This Insight provides a brief overview of the extent to which emotional data could be effectively protected under the European Union (EU) legal framework on data protection and privacy. The overview assesses the prospective vulnerability of EU citizens in a VR environment and questions the adequacy of the current EU approach to data protection in the context of the Metaverse.

1 – The importance of Virtual Reality for the Metaverse

The Metaverse is positioning itself as a key asset, including within the context of smart cities. Following Zuckerberg’s announcement in late 2021, companies have begun to embrace the initiative at several levels. The most prevalent examples in the literature of possible uses and applications of the Metaverse are meetings, medical training programmes, and job interviews.

One of the Metaverse’s objectives is to achieve the highest possible level of immersion through the combination of technologies such as Big Data, 5G, Augmented Reality and VR. Given that immersion depends on the interplay of interactivity, audio-visual fidelity, persistence, and emotion, the importance of VR equipment to the Metaverse’s functionality becomes undeniable. Metaverse companies like Meta are expected to double down on their VR equipment production, particularly with respect to how such devices process users’ emotions. Meta’s latest VR headset, the Meta Quest Pro, launched on 25 October 2022, is equipped with inward-facing sensors designed to capture natural facial expressions and perform eye tracking. Interestingly, when unveiling this headset, Meta also highlighted the importance VR plays in the Metaverse’s evolution.

2 – Virtual Reality and Affective Computing

Meta’s interest in VR technology, and that technology’s ability to capture specific facial and body expressions, makes it pivotal to examine the type of computing designed to be paired with it. Affective Computing (AC) specifically processes users’ emotions to produce more efficient interactions between humans and VR technologies. Emotions are inferred through physical and physiological signals, such as facial expressions, pupillometry, gaze tracking, speech patterns, vocal inflections, and gestures. Certain consumer devices have already been reported to be able to deduce emotions through such inferential data, namely vocal inflections, as is the case of Amazon’s voice assistant “Alexa”. Given the Metaverse’s goal of full reality immersion, there is an incentive to use AC to fuel VR’s potential, with companies other than Meta announcing AI-based tools and methods that measure emotions to improve interaction within the Metaverse. It is of great interest from a legal (and specifically an EU) perspective to elaborate on how AC’s processing of emotional data should inform the current discourse on privacy and data protection in the context of the Metaverse.
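To make the kind of processing at stake concrete, the following minimal Python sketch shows how raw physiological readings of the sort listed above might be mapped to an inferred emotional state. Every signal name, threshold, and rule here is a hypothetical simplification for illustration only; real AC systems rely on trained statistical models rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class SignalSample:
    """One snapshot of signals a VR headset (or paired wearable) might capture."""
    pupil_dilation_mm: float  # pupillometry (hypothetical unit and range)
    gaze_fixation_s: float    # gaze tracking: dwell time on the current object
    heart_rate_bpm: float     # physiological response, e.g. from a paired wearable
    smile_intensity: float    # 0..1 score from inward-facing facial sensors

def infer_emotional_state(sample: SignalSample) -> str:
    """Toy rule-based inference; thresholds are invented for illustration."""
    if sample.heart_rate_bpm > 100 and sample.pupil_dilation_mm > 5.0:
        return "stressed/aroused"
    if sample.smile_intensity > 0.6:
        return "positive"
    if sample.gaze_fixation_s > 3.0:
        return "engaged"
    return "neutral"

sample = SignalSample(pupil_dilation_mm=5.4, gaze_fixation_s=1.2,
                      heart_rate_bpm=112.0, smile_intensity=0.1)
print(infer_emotional_state(sample))  # -> "stressed/aroused"
```

Note that nothing in this pipeline identifies the user by name; the legal question explored below is precisely whether such output nonetheless deserves protection as personal data.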

3 – EU Data Protection and Privacy Concerns

Considering the right to informational self-determination, the backbone of the EU’s data protection framework, it is interesting to note how societal interactions with technology turn the exercise of that self-determination into an object of consumption. In this context, data subjects willingly and repeatedly share their data in order to engage with, or detach from, technology and all the users and individuals it connects. This exchange of personal information for technological benefits takes place while data subjects remain mostly unaware of some of the purposes of the processing activities carried out on their data.

However, while it is not bold to state that the general population accepts a trade-off between the protection of their personal data and the ability to enjoy technology’s benefits, the same cannot be said about inferential data that users do not even know is being processed. When considering AC, it is fair to assume that information about one’s emotions reveals a data subject’s preferences and engagement with a service more accurately than any feature design or other conventional category of personal data. Moreover, emotions cannot be masked as effectively as conventional categories of data: it is, for instance, impossible to suppress certain physical responses to emotional distress, such as changes in one’s heart rate or blood pressure. Therefore, by collecting, cataloguing and allowing for the retrospective assessment of emotive states, it is not farfetched to claim that AdTech based on AC may exploit such information through infringing, abusive or manipulative patterns of online commercial behaviour. Such AdTech can, for example, easily sort user preferences into categories, which are later used to advertise on larger platforms with greater success, as sketched below. These concerns are but the tip of the iceberg.
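As a purely illustrative sketch of that sorting step, the snippet below groups users into advertising segments according to which content elicited a positive inferred emotion. The log format, identifiers, and segmentation logic are hypothetical and deliberately naïve; they merely show how easily inferred emotional reactions translate into targetable categories.

```python
from collections import defaultdict

# Hypothetical log of (user_id, ad_context, inferred_emotion) records,
# e.g. produced by an inference step like the earlier sketch.
emotion_log = [
    ("u1", "sports_ad", "positive"),
    ("u1", "travel_ad", "neutral"),
    ("u2", "sports_ad", "stressed/aroused"),
    ("u2", "travel_ad", "positive"),
]

def build_segments(log):
    """Group users by the ad contexts that elicited a positive response."""
    segments = defaultdict(set)
    for user_id, context, emotion in log:
        if emotion == "positive":
            segments[context].add(user_id)
    return dict(segments)

print(build_segments(emotion_log))
# -> {'sports_ad': {'u1'}, 'travel_ad': {'u2'}}
```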

4 – Specific challenges the EU legal framework faces regarding Affective Computing

The EU General Data Protection Regulation’s (GDPR) definition of personal data solely encompasses “information relating to an identified or identifiable natural person”. As regards sensitive categories of personal data, the analysis of facial expressions and eye movements hypothetically constitutes a processing of biometric data only if said analysis aims to identify a subject. By the same token, classifying emotional data as sensitive is disputed at best. Moreover, processing emotional data does not necessarily aim to single individuals out, which casts doubt on whether such data falls within the GDPR’s scope at all. As a result, it becomes harder for data subjects to exercise their rights over how their emotional data is processed.

Extensive interpretations of the GDPR point out that emotional data can imply the processing of personal information regardless of the processing activity’s intended purpose. This understanding stems from data such as a voice or facial expressions making subjects identifiable when combined with other elements of a processing activity. Regardless of its merits, such an interpretation neglects situations in which AC processes only data that cannot be used to identify a subject.

In turn, the upcoming EU ePrivacy Regulation, in its currently proposed version, only sparsely mentions emotional data beyond its Recitals, even though said Recitals do deem emotional data to be highly sensitive. Given its status as lex specialis to the GDPR, the ePrivacy Regulation is expected to provide greater safeguards than the GDPR in protecting emotional data whenever its material scope is triggered. These safeguards do not preclude concerns of legal uncertainty, nor do they mitigate the implications of the ePrivacy Regulation’s scope not covering corporate networks open only to their own members (see Art. 2(2)(c) of the proposal). By and large, at the core of these concerns lies the underlying problem of the EU data protection framework’s sole prioritisation of the risk of individual identification.

5 – Why should we be concerned?

Looking back at Meta’s interest in VR equipment, and VR’s inevitable and intricate connection with AC, the problem at hand assumes two different dimensions. The first dimension pertains to the physical and physiological signals that are captured by VR equipment and possibly then stored in databases for further processing. From an EU legal perspective (and beyond), it is essential to remember that users need to be made aware of how their data is processed. It is difficult to determine whether the scope of protection granted to emotional data gives Metaverse companies leeway to decide what to do with such data once users have been informed. An example of this is the implementation of personalised ads within the Metaverse in the form of virtual product placement and virtual spokespeople shaped entirely by a user’s emotional data.

The second problematic dimension concerns the so-called “photorealistic avatars” that Meta aims to create, whose realism can be enhanced through the information collected via VR technology. This particular issue fuels core data protection and cybersecurity concerns about how biometric and emotional data are stored and transmitted when generated by avatars within the Metaverse itself.

6 – How should EU law look at this matter?

The Metaverse is yet to attain its full potential. This gives the European legislator time to consider the merits of shaping ad hoc regulation, in this case the EU ePrivacy Regulation, to cover some of the outlined emotional data concerns in a way that does not create murky interpretations or confusion. Current and future discussions on the proposed ePrivacy Regulation ought to at least consider including binding provisions on emotional data, rather than confining its treatment to the non-binding Recitals, to avoid legal uncertainty.

Considering other regulatory documents, the recent Digital Services Act (DSA) and Digital Markets Act (DMA) mostly focus on regulating the practices of very large online platforms (VLOPs) and of gatekeepers, respectively. Looking at the specific case of Meta, by qualifying both as a VLOP and as a gatekeeper, the company is now under heavy scrutiny by both regulations. Does this imply that the appropriate collection and use of emotional data by the company is also safeguarded? In the absence of a straightforward answer in either legislative text, it is worth noting that the DSA outright forbids “dark patterns”, while the DMA prohibits gatekeepers from combining personal data from their core platform service with personal data from other services provided by them or by third parties.

Considering that both of the abovementioned regulations were developed with the intent of being articulated with the EU data protection framework, their proper enforcement in the face of the Metaverse’s use of VR technology requires that EU data protection legislation reconsider the basis of its risk framework. To respond to the novel risks identified above, it has been rightly advised to rethink individual identifiability as the sole basis for the current EU data protection risk framework and to focus on broader, group-based conceptions of privacy and more holistic notions of personal data. On that note, the central weight the GDPR’s principles carry within its own provisions can open a debate on the merits of extensively interpreting the scope of EU data protection legislation under broader perspectives.
