Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

The argument is that such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use.

The EDPB is responsible for ensuring a harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he has gone further, with the EDPB joining him in the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral microtargeting of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoCs risk creating legal exposure to discrimination claims, based on how individual web users are grouped together for ad targeting purposes. Certainly, concerns have been raised over the potential for FLoCs to scale bias and predatory advertising. And it’s also notable that Google avoided running early tests in Europe, likely owing to the EU’s data protection regime.)

In another recommendation today, the EDPB and the EDPS also express the view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” — except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However, by failing to include a prohibition on biometric surveillance in public in the proposed regulation, the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence, the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continue as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.


EU bodies’ use of US cloud services from AWS, Microsoft being probed by bloc’s privacy chief

Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants Amazon and Microsoft, under so-called Cloud II contracts inked earlier between European bodies, institutions and agencies and AWS and Microsoft.

A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.

Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.

In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).

That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.

Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.

Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”

Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.

The EDPS said it wants EU institutions to lead by example. And that looks important given that, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.

The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).

Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.

The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.

To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.

Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.

How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.

The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).

Fixing U.S. surveillance law, meanwhile — so that it provides independent oversight and accessible redress mechanisms for non-citizens, and is no longer considered a threat to EU people’s data, as the CJEU judges have repeatedly found it to be — is certainly likely to take a lot longer than ‘months’, if indeed the US authorities can ever be convinced of the need to reform their approach.

Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.


EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high-profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 
