DATA IN PERSON-BASED PREDICTIVE POLICING

Predictive policing is becoming increasingly personal, and one of the main concerns is the (precarious) protection of personal data.

“We’re not just looking for crime. We’re looking for people” could be a famous line uttered by Tom Cruise in the movie Minority Report. Instead, it is a real-life quote from Rodney Monroe, a (retired) chief of a US police department. In a context where cities are becoming increasingly smart in many respects, policing is no exception. Monroe was referring to the latest and most problematic approach in predictive policing (PP): law enforcement authorities (LEAs) are increasingly relying on algorithmic decisions not only to determine when and where the next crimes will occur but also to predict who will commit them or who will fall victim to them. This practice is called “person-based predictive policing”.

Person-based predictive policing relies on computerized techniques that analyze data and operate on the basis of similarity and analogy. The software digitally maps social networks to identify who is most at risk of engaging in criminal activity and who is most likely to become a suspect or a victim. Since this targeting is based solely on a person’s connections with others, rather than on their actual behavior and choices, there is considerable skepticism about the reliability of these predictions, as well as concern about their impact on fundamental rights. One of these concerns is the protection of the data and privacy of those targeted by this form of policing.
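To make the logic of such targeting concrete, the following Python sketch shows a minimal, purely hypothetical “guilt-by-association” score: an individual’s risk is computed only from the flags attached to their direct contacts, never from anything the individual has done. The graph, the flags and the scoring rule are invented for illustration and do not describe any actual police software.

# Hypothetical "guilt-by-association" scoring. All names, connections and
# flags below are invented for illustration; this is not any real system.

# Social graph: person -> set of direct contacts.
contacts = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A"},
    "D": {"B"},
}

# Seed signals (e.g. a prior arrest record), also hypothetical.
flagged = {"B": 1.0, "D": 1.0}

def association_risk(person: str) -> float:
    """Share of a person's direct contacts that carry a flag."""
    neighbours = contacts.get(person, set())
    if not neighbours:
        return 0.0
    return sum(flagged.get(n, 0.0) for n in neighbours) / len(neighbours)

for person in sorted(contacts):
    print(person, round(association_risk(person), 2))

# Output: A scores 0.5 and C scores 0.0, even though the data says nothing
# about what either of them has actually done; the score reflects only
# who they know.

Even in this toy example, a person receives a non-zero score purely by association, which is precisely the feature that fuels the reliability and fundamental-rights concerns described above.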

PRECARIOUS DATA PROTECTION?

It is nothing new that the volume of data required to operate algorithms poses a real risk to individual privacy. Yet this risk becomes even more pronounced when a huge variety of data is used for law enforcement purposes. In the EU, the collection and processing of personal data are regulated by the General Data Protection Regulation (GDPR) and by the Law Enforcement Directive (LED), which acts as lex specialis for law enforcement.

While from the perspective of LEAs the LED meets their current needs and challenges in the Digital Age, from the perspective of those whose fate is determined by PP technologies it may not be very helpful. First, some of the key data protection principles are much more flexible in the LED, notably data minimization and purpose limitation. While under the GDPR data must be collected only to the extent necessary (Article 5(1)(c) GDPR), under the LED data must merely not be excessive (Article 4(1)(c) LED), a less rigorous standard that gives LEAs more leeway in performing their tasks. Concerning the principle of purpose limitation, on top of the typical exception for the public interest, the LED allows further processing of data for other purposes if the controller is authorized by law to process such personal data for those purposes and the processing is necessary and proportionate (Article 4(2) LED). It follows from this provision that processing data for purposes other than the initial one is much easier. In addition, since Article 4(1)(a) LED only states that data shall be “processed lawfully and fairly”, one could point out, as a limitation, the absence of an explicit reference to transparency. However, as this requirement is mentioned in Recital 26, the distinct wording of the article is not significant enough to exclude this principle from data processing.

These differences seem to consider only the interests of LEAs and, unfortunately, the same can be observed with regard to automated decision-making (ADM). According to Article 11 LED, the individual should be provided with appropriate safeguards for their rights and freedoms. However, while the GDPR provides a wide range of safeguards, the LED requires only the right to human intervention; anything else is left to the Member States’ discretion. As a result, considerable inconsistencies have emerged between the solutions implemented at the domestic level in the EU. On the one hand, the scope and reliability of human intervention are problematic: it is unclear what kind of human intervention is required, and research indicates that people tend to ignore algorithmic results when these do not conform to their stereotypes. On the other hand, whether a person-based PP system falls under Article 11 is not always certain. An automated decision with a “trivial effect” is not considered sufficient, so if the algorithmic decision is treated only as a recommendation and the final decision (affecting the individual) is human, Article 11 does not apply. This means that a close examination of how the decision-making procedure works in practice is always necessary, which may lead to uncertainty and controversy about the impact of the algorithmic decision.

Furthermore, under the LED, an individual subject to ADM has no right to be informed about the use of that technology or to obtain an explanation of the decision. Arguably, such a person should not only be aware that the decision will be made by a machine but also be informed about the rationale, criteria and consequences of that decision. This is the view expressed by the Article 29 Working Party. Concerning the right to obtain an explanation, although Recital 38 seems to suggest that such a right exists, it is not included in the right of access under Article 14 LED, and there is no other relevant legal basis to sustain it.

The establishment of public-private partnerships (PPP) by the police in this context is also concerning. The fluidity of data flows between the actors involved in a PPP not only makes it difficult to identify the appropriate legal regime but also raises the fear that data or other insights may end up in the hands of private companies. In this regard, it must be ensured that the data remains with the state.

FINAL REMARKS

The idea of a mathematical formula that identifies who is likely to commit crimes is tempting. But the truth is that deploying algorithms for that purpose may compromise individuals’ rights, in particular concerning data and privacy. In the absence of formal evaluations and evidence on whether person-based PP actually reduces crime rates, it should be implemented cautiously. Furthermore, it would be less problematic if, instead of being used to surveil and arrest criminals, algorithmic predictions were used to help individuals. Based on the targeting made by the algorithm, local (or national) authorities could provide the social services that the individual likely to engage in crime (or their community) needs. After all, changing the underlying social conditions that lead to crime is also a way of combating crime.
