Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts where it could not verify the users were over the age of 13.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, including an age threshold below which children cannot themselves consent to their data being used. The exact limit varies per EU Member State: the regulation’s default is 16, and Member States may lower it — but to no less than 13.
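For illustration, the check itself is trivial to express in code. Here is a minimal TypeScript sketch, with illustrative per-country values and a hypothetical `requiresParentalConsent` helper — not a verified, current list of national thresholds:

```typescript
// Minimal sketch of a GDPR Article 8 age-of-consent check.
// The default age of digital consent is 16; Member States may lower it,
// but no lower than 13. Country values below are illustrative only --
// always verify against current national law.
const DIGITAL_CONSENT_AGE: Record<string, number> = {
  IE: 16, // Ireland
  DE: 16, // Germany
  FR: 15, // France
  GB: 13, // United Kingdom (UK GDPR)
};

const GDPR_DEFAULT_AGE = 16;
const GDPR_FLOOR_AGE = 13;

// True if the user is too young to consent on their own, i.e. the platform
// needs verifiable parental consent (or another lawful basis) to process data.
function requiresParentalConsent(userAge: number, countryCode: string): boolean {
  const configured = DIGITAL_CONSENT_AGE[countryCode] ?? GDPR_DEFAULT_AGE;
  // Clamp misconfigured values into the range the GDPR actually allows.
  const threshold = Math.min(Math.max(configured, GDPR_FLOOR_AGE), GDPR_DEFAULT_AGE);
  return userAge < threshold;
}

console.log(requiresParentalConsent(14, "FR")); // true  -- French threshold is 15
console.log(requiresParentalConsent(14, "GB")); // false -- UK threshold is 13
```

The hard part, of course, is not the comparison but reliably knowing a user’s age in the first place — which is what the age-gating dispute is really about.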

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it has made around children’s accounts and data — such as switching younger teens’ accounts to private by default and limiting the exposure of users under 16 to certain features that encourage interaction with other TikTok users.

On international data transfers, TikTok’s statement claims it relies on “approved methods”. However, the picture is rather more complicated than that implies: transfers of Europeans’ data to China are complicated by the fact that there is no EU adequacy decision in place for China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the CJEU last year which invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So, while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis — and, where a DPA has concerns, it must step in and suspend non-secure data flows.

The CJEU ruling means just the fact of using a mechanism like SCCs doesn’t mean anything on its own re: the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be pro-active about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year provides details on the so-called ‘supplementary measures’ that a data controller may be able to apply in order to increase the level of protection around a specific transfer so the information can legally be taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok could apply such a fix, given how its platform and algorithms continuously mine users’ data to customize the content they see and keep them engaged with TikTok’s ad platform.
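To make that tension concrete, here is a minimal Node.js/TypeScript sketch — a hypothetical export pipeline, not anything TikTok has described — of the kind of strong-encryption supplementary measure the EDPB guidance contemplates: the data is encrypted with a key that never leaves the EU, so the third-country recipient only ever holds ciphertext.

```typescript
import { randomBytes, createCipheriv } from "crypto";

// Hypothetical sketch: encrypt a user record with AES-256-GCM before it
// leaves the EU. The key stays with the EU-based controller (in practice,
// an EU-resident KMS/HSM); the overseas processor receives only ciphertext.
const euHeldKey = randomBytes(32);

function encryptForExport(record: object): { iv: string; tag: string; data: string } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", euHeldKey, iv);
  const ciphertext = Buffer.concat([
    cipher.update(JSON.stringify(record), "utf8"),
    cipher.final(),
  ]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    data: ciphertext.toString("base64"),
  };
}

// The exported blob is useless for content ranking or ad targeting abroad --
// which is exactly why this measure is an awkward fit for a platform whose
// business depends on continuously mining that data.
console.log(encryptForExport({ userId: "123", watchHistory: ["…"] }));
```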

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules (and only doing so years after the first complaints had been lodged).

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year the GDPR technically began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means that the new TikTok probes join the back of a much criticized bottleneck. And a decision on these probes isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, from this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.


After years of inaction against adtech, UK’s ICO calls for browser-level controls to fix ‘cookie fatigue’

In the latest quasi-throwback toward ‘do not track‘, the UK’s data protection chief has come out in favor of a browser- and/or device-level setting to allow Internet users to set “lasting” cookie preferences — suggesting this as a fix for the barrage of consent pop-ups that continues to infest websites in the region.

European web users digesting this development in an otherwise monotonously unchanging regulatory saga should be forgiven — not only for any sense of déjà vu they may experience, but also for wondering whether they haven’t been mocked/gaslit quite enough already where cookie consent is concerned.

Last month, UK digital minister Oliver Dowden took aim at what he dubbed an “endless” parade of cookie pop-ups — suggesting the government is eyeing watering down consent requirements around web tracking as ministers consider how to diverge from European Union data protection standards, post-Brexit. (He’s slated to present the full sweep of the government’s data ‘reform’ plans later this month so watch this space.)

Today the UK’s outgoing information commissioner, Elizabeth Denham, stepped into the fray to urge her counterparts in G7 countries to knock heads together and coalesce around the idea of letting web users express generic privacy preferences at the browser/app/device level, rather than having to do it through pop-ups every time they visit a website.

In a statement announcing “an idea” she will present this week during a virtual meeting of fellow G7 data protection and privacy authorities — less pithily described in the press release as being “on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data” — Denham said: “I often hear people say they are tired of having to engage with so many cookie pop-ups. That fatigue is leading to people giving more personal data than they would like.

“The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience. While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

“There are nearly two billion websites out there taking account of the world’s privacy preferences. No single country can tackle this issue alone. That is why I am calling on my G7 colleagues to use our convening power. Together we can engage with technology firms and standards organisations to develop a coordinated approach to this challenge,” she added.

Contacted for more on this “idea”, an ICO spokeswoman reshuffled the words thusly: “Instead of trying to effect change through nearly 2 billion websites, the idea is that legislators and regulators could shift their attention to the browsers, applications and devices through which users access the web.

“In place of click-through consent at a website level, users could express lasting, generic privacy preferences through browsers, software applications and device settings – enabling them to set and update preferences at a frequency of their choosing rather than on each website they visit.”

Of course a browser-baked ‘Do not track’ (DNT) signal is not a new idea. It’s around a decade old at this point. Indeed, it could be called the idea that can’t die because it’s never truly lived — as earlier attempts at embedding user privacy preferences into browser settings were scuppered by lack of industry support.

However the approach Denham is advocating, vis-a-vis “lasting” preferences, may in fact be rather different to DNT — given her call for fellow regulators to engage with the tech industry, and its “standards organizations”, and come up with “practical” and “business friendly” solutions to the regional Internet’s cookie pop-up problem.

It’s not clear what consensus — practical or, er, simply pro-industry — might result from this call. If anything.

Indeed, today’s press release may be nothing more than Denham trying to raise her own profile since she’s on the cusp of stepping out of the information commissioner’s chair. (Never waste a good international networking opportunity and all that — her counterparts in the US, Canada, Japan, France, Germany and Italy are scheduled for a virtual natter today and tomorrow where she implies she’ll try to engage them with her big idea).

Her UK replacement, meanwhile, is already lined up. So anything Denham personally champions right now, at the end of her ICO chapter, may have a very brief shelf life — unless she’s set to parachute into a comparable role at another G7 caliber data protection authority.

Nor is Denham the first person to make a revived pitch for a rethink on cookie consent mechanisms — even in recent years.

Last October, for example, a US-centric tech-publisher coalition came out with what they called Global Privacy Control (GPC) — aiming to build momentum for a browser-level pro-privacy signal to stop the sale of personal data, geared toward California’s Consumer Privacy Act (CCPA), though pitched as something that could have wider utility for Internet users.

By January this year they announced 40M+ users were making use of a browser or extension that supports GPC — along with a clutch of big name publishers signed up to honor it. But it’s fair to say its global impact so far remains limited. 
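Technically, GPC is very lightweight: participating browsers and extensions expose a `navigator.globalPrivacyControl` flag and attach a `Sec-GPC: 1` header to requests, which a publisher can treat as a standing opt-out of data sale/sharing. A minimal TypeScript sketch (the opt-out handling itself is a hypothetical placeholder):

```typescript
// Browser side: the signal is exposed as navigator.globalPrivacyControl.
function gpcEnabledInBrowser(): boolean {
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

// Server side: the same signal arrives on every request as `Sec-GPC: 1`,
// so it can be honored before any tracking code is served.
function gpcEnabledOnRequest(
  headers: Record<string, string | string[] | undefined>
): boolean {
  return headers["sec-gpc"] === "1";
}

// Hypothetical usage: treat the signal as a lasting opt-out for this visitor.
if (gpcEnabledInBrowser()) {
  console.log("GPC detected: suppress sale/sharing of this visitor's data.");
}
```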

More recently, European privacy group noyb published a technical proposal for a European-centric automated browser-level signal that would let regional users configure advanced consent choices — enabling the more granular controls it said would be needed to fully mesh with the EU’s more comprehensive (vs CCPA) legal framework around data protection.

The proposal, for which noyb worked with the Sustainable Computing Lab at the Vienna University of Economics and Business, is called Advanced Data Protection Control (ADPC). And noyb has called on the EU to legislate for such a mechanism — suggesting there’s a window of opportunity as lawmakers there are also keen to find ways to reduce cookie fatigue (a stated aim for the still-in-train reform of the ePrivacy rules, for example).

So there are some concrete examples of what practical, less fatiguing yet still pro-privacy consent mechanisms might look like to lend a little more color to Denham’s ‘idea’ — although her remarks today don’t reference any such existing mechanisms or proposals.

(When we asked the ICO for more details on what she’s advocating for, its spokeswoman didn’t cite any specific technical proposals or implementations, historical or contemporary, either, saying only: “By working together, the G7 data protection authorities could have an outsized impact in stimulating the development of technological solutions to the cookie consent problem.”)

So Denham’s call to the G7 does seem rather low on substance vs profile-raising noise.

In any case, the really big elephant in the room here is the lack of enforcement around cookie consent breaches — including by the ICO.

Add to that, there’s the now very pressing question of how exactly the UK will ‘reform’ domestic law in this area (post-Brexit) — which makes the timing of Denham’s call look, well, interestingly opportune. (And difficult to interpret as anything other than opportunistically opaque at this point.)

The adtech industry will of course be watching developments in the UK with interest — and would surely be cheering from the rooftops if domestic data protection ‘reform’ results in amendments to UK rules that allow the vast majority of websites to avoid having to ask Brits for permission to process their personal data, say by opting them into tracking by default (under the guise of ‘fixing’ cookie friction and cookie fatigue for them).

That would certainly be mission accomplished after all these years of cookie-fatigue-generating-cookie-consent-non-compliance by surveillance capitalism’s industrial data complex.

It’s not yet clear which way the UK government will jump — but eyebrows should be raised at the ICO writing today that it expects compliance with (current) UK law, given how roundly it has failed to tackle the adtech industry’s role in cynically stoking said cookie fatigue by declining to take any action against such systemic breaches.

The bald fact is that the ICO has — for years — avoided tackling adtech abuse of data protection, despite acknowledging publicly that the sector is wildly out of control.

Instead, it has opted for a cringing ‘process of engagement’ (read: appeasement) that has condemned UK Internet users to cookie pop-up hell.

This is why the regulator is being sued for inaction — after it closed a long-standing complaint against the security abuse of people’s data in real-time bidding ad auctions with nothing to show for it… So, yes, you can be forgiven for feeling gaslit by Denham’s call for action on cookie fatigue following the ICO’s repeat inaction on the causes of cookie fatigue…

Not that the ICO is alone on that front, however.

There has been a fairly widespread failure by EU regulators to tackle systematic abuse of the bloc’s data protection rules by the adtech sector — with a number of complaints (such as one against IAB Europe’s self-styled ‘transparency and consent framework’) still working, painstakingly, through the various labyrinthine regulatory processes.

France’s CNIL has probably been the most active in this area — last year slapping Amazon and Google with fines of $42M and $120M for dropping tracking cookies without consent, for example. (And before you accuse CNIL of being ‘anti-American’, it has also gone after domestic adtech.)

But elsewhere — notably Ireland, where many adtech giants are regionally headquartered — the lack of enforcement against the sector has allowed for cynical, manipulative and/or meaningless consent pop-ups to proliferate as the dysfunctional ‘norm’, while investigations have failed to progress and EU citizens have been forced to become accustomed, not to regulatory closure (or indeed rapture), but to an existentially endless consent experience that’s now being (re)branded as ‘cookie fatigue’.

Yes, even with the EU’s General Data Protection Regulation (GDPR) coming into application in 2018 and beefing up (in theory) consent standards.

This is why the privacy campaign group noyb is now lodging scores of complaints against cookie consent breaches — to try to force EU regulators to actually enforce the law in this area, even as it also finds time to put up a practical technical proposal that could help shrink cookie fatigue without undermining data protection standards. 

It’s a shining example of action that has yet to inspire the lion’s share of the EU’s actual regulators to act on cookies. The tl;dr is that EU citizens are still waiting for the cookie consent reckoning — even if there is now a bit of high level talk about the need for ‘something to be done’ about all these tedious pop-ups.

The problem is that while GDPR certainly cranked up the legal risk on paper, without proper enforcement it’s just a paper tiger. And the pushing around of lots of paper is very tedious, clearly. 

Most cookie pop-ups you’ll see in the EU are thus essentially privacy theatre; at the very least they’re unnecessarily irritating because they create ongoing friction for web users who must constantly respond to nags for their data (typically to repeatedly try to deny access if they can actually find a ‘reject all’ setting).

But — even worse — many of these pervasive pop-ups are actively undermining the law (as a number of studies have shown) because the vast majority do not meet the legal standard for consent.

So the cookie consent/fatigue narrative is actually a story of faux compliance enabled by an enforcement vacuum — one that’s now also encouraging the watering down of privacy standards as a result of so much unpunished flouting of the law.

There is a lesson here, surely.

‘Faux consent’ pop-ups that you can easily stumble across when surfing the ‘ad-supported’ Internet in Europe include those failing to provide users with clear information about how their data will be used; or not offering people a free choice to reject tracking without being penalized (such as with no/limited access to the content they’re trying to access), or at least giving the impression that accepting is a requirement to access said content (dark pattern!); and/or otherwise manipulating a person’s choice by making it super simple to accept tracking and far, far, far more tedious to deny.

You can also still sometimes find cookie notices that don’t offer users any choice at all — and just pop up to inform that ‘by continuing to browse you consent to your data being processed’ — which, unless the cookies in question are literally essential for provision of the webpage, is basically illegal. (Europe’s top court made it abundantly clear in 2019 that active consent is a requirement for non-essential cookies.)
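In practical terms, that 2019 ruling boils down to something like the following minimal (and entirely hypothetical) sketch: strictly necessary cookies can be set straight away, but nothing else fires until the user actively opts in — and rejecting must be as easy as accepting.

```typescript
// Hypothetical sketch of consent-gated cookies in the browser.
type ConsentChoice = "accepted" | "rejected" | null;

function getStoredConsent(): ConsentChoice {
  return (localStorage.getItem("tracking-consent") as ConsentChoice) ?? null;
}

// Strictly necessary cookies (e.g. the session) need no consent.
document.cookie = "session=abc123; Secure; SameSite=Lax";

// Non-essential cookies (analytics, ads) only after an explicit opt-in:
// no recorded choice, or a "rejected" choice, means nothing is set.
function maybeEnableTracking(): void {
  if (getStoredConsent() === "accepted") {
    document.cookie = "analytics_id=xyz789; Secure; SameSite=Lax";
  }
}

// The banner must offer both buttons with equal prominence -- pre-ticked
// boxes and 'continue to browse' notices don't count as consent.
function onAcceptClicked(): void {
  localStorage.setItem("tracking-consent", "accepted");
  maybeEnableTracking();
}

function onRejectClicked(): void {
  localStorage.setItem("tracking-consent", "rejected");
}
```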

Nonetheless, to the untrained eye — and sadly there are a lot of them where cookie consent notices are concerned — it can look like it’s Europe’s data protection law that’s the ass because it seemingly demands all these meaningless ‘consent’ pop-ups, which just gloss over an ongoing background data grab anyway.

The truth is regulators should have slapped down these manipulative dark patterns years ago.

The problem now is that regulatory failure is encouraging political posturing — and, in a twisting double-back throw by the ICO, regulatory thrusting around the idea that some newfangled mechanism is what’s really needed to remove all this universally inconvenient ‘friction’.

An idea like noyb’s ADPC does indeed look very useful in ironing out the widespread operational wrinkles wrapping the EU’s cookie consent rules. But when it’s the ICO suggesting a quick fix after the regulatory authority has failed so spectacularly over the long duration of complaints around this issue you’ll have to forgive us for being sceptical.

In such a context the notion of ‘cookie fatigue’ looks like it’s being suspiciously trumped up; fixed on as a convenient scapegoat to rechannel consumer frustration with hated online tracking toward high privacy standards — and away from the commercial data-pipes that demand all these intrusive, tedious cookie pop-ups in the first place — whilst neatly aligning with the UK government’s post-Brexit political priorities on ‘data’.

Worse still: The whole farcical consent pantomime — which the adtech industry has aggressively engaged in to try to sustain a privacy-hostile business model in spite of beefed up European privacy laws — could be set to end in genuine tragedy for user rights if standards end up being slashed to appease the law mockers.

The target of regulatory ire and political anger should really be the systematic law-breaking that’s held back privacy-respecting innovation and non-tracking business models — by making it harder for businesses that don’t abuse people’s data to compete.

Governments and regulators should not be trying to dismantle the principle of consent itself. Yet — at least in the UK — that does now look horribly possible.

Laws like the GDPR set high standards for consent which — if they were but robustly enforced — could lead to reform of highly problematic practices like behavioral advertising combined with the out-of-control scale of programmatic advertising.

Indeed, we should already be seeing privacy-respecting forms of advertising being the norm, not the alternative — free to scale.

Instead, thanks to widespread inaction against systematic adtech breaches, there has been little incentive for publishers to reform bad practices and end the irritating ‘consent charade’ — which keeps cookie pop-ups mushrooming forth, oftentimes with ridiculously lengthy lists of data-sharing ‘partners’ (i.e. if you do actually click through the dark patterns to try to understand what is this claimed ‘choice’ you’re being offered).

As well as being a criminal waste of web users’ time, all this has created the prospect of attention-seeking, politically charged regulators deciding that the ‘friction’ justifies giving data-mining giants carte blanche to torch user rights — if, that is, the intention of firing up the G7 is to extend a collective invitation to the tech industry to come up with “practical” alternatives to asking people for their consent to track them — and all because authorities like the ICO have been too risk averse to actually defend users’ rights in the first place.

Dowden’s remarks last month suggest the UK government may be preparing to use cookie consent fatigue as convenient cover for watering down domestic data protection standards — at least if it can get away with the switcheroo.

Nothing in the ICO’s statement today suggests it would stand in the way of such a move.

Now that the UK is outside the EU, the UK government has said it believes it has an opportunity to deregulate domestic data protection — although it may find there are legal consequences for domestic businesses if it diverges too far from EU standards.

Denham’s call to the G7 naturally includes a few EU countries (the biggest economies in the bloc) but by targeting this group she’s also seeking to engage regulators further afield — in jurisdictions that currently lack a comprehensive data protection framework. So if the UK moves, cloaked in rhetoric of ‘Global Britain’, to water down its (EU-based) high domestic data protection standards it will be placing downward pressure on international aspirations in this area — as a counterweight to the EU’s geopolitical ambitions to drive global standards up to its level.

The risk, then, is a race to the bottom on privacy standards among Western democracies — at a time when awareness about the importance of online privacy, data protection and information security has actually never been higher.

Furthermore, any UK move to weaken data protection also risks putting pressure on the EU’s own high standards in this area — as the regional trajectory would be down not up. And that could, ultimately, give succour to forces inside the EU that lobby against its commitment to a charter of fundamental rights — by arguing such standards undermine the global competitiveness of European businesses.

So while cookies themselves — or indeed ‘cookie fatigue’ — may seem an irritatingly small concern, the stakes attached to this tug of war around people’s rights over what can happen to their personal data are very high indeed.


Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

Such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use, is the argument.

The EDPB is responsible for ensuring harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he’s gone further, with the EDPB joining in with the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral micromarketing of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoC creates a legal discrimination risk — based on how individual users are grouped together for ad targeting purposes. Certainly, concerns have been raised over the potential for FLoC to scale bias and predatory advertising. And it’s also interesting that Google avoided running early tests in Europe, likely owing to the EU’s data protection regime.)
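For context on how that grouping surfaced technically: during Chrome’s origin trial the FLoC cohort was exposed to any page via a `document.interestCohort()` call, returning an opaque cohort ID shared with thousands of browsers judged to have similar browsing histories. A rough sketch of that since-abandoned, trial-era API — hedged accordingly, as it never shipped beyond the trial:

```typescript
// Rough sketch of the (now-abandoned) FLoC origin-trial API in Chrome.
// document.interestCohort() resolved to an opaque cohort ID shared by the
// browser with many others judged to have similar browsing histories.
async function readFlocCohort(): Promise<void> {
  const doc = document as Document & {
    interestCohort?: () => Promise<{ id: string; version: string }>;
  };
  if (typeof doc.interestCohort !== "function") {
    console.log("FLoC not available in this browser.");
    return;
  }
  try {
    const cohort = await doc.interestCohort();
    // The discrimination worry: this ID could be correlated with other
    // signals to infer sensitive group membership and target (or exclude)
    // people on that basis.
    console.log(`Cohort ${cohort.id} (version ${cohort.version})`);
  } catch {
    // Rejected when FLoC was disabled, e.g. via permissions policy or incognito.
    console.log("Cohort unavailable.");
  }
}

readFlocCohort();
```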

In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” —  except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However by failing to include a prohibition on biometric surveillance in public in the proposed regulation the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing [emphasis ours]:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continue as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.


Perspectives on tackling Big Tech’s market power

The need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power was the impetus for a couple of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR), which were livestreamed yesterday but are available to view on-demand here.

The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when fresh blood has just been injected to chair the FTC so regulatory change is very much in the air (at least around tech antitrust).

CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.

That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out interesting new competition (which can’t get the same access to people’s data so operates at a baked in disadvantage).

A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.

The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).

The UK’s CMA now sits outside the EU, of course — giving the national authority a higher-profile role in global merger and acquisition decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).

The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.

Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.

Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions against large digital platforms of major competitive significance — with open procedures now against Amazon, Facebook and Google.

The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single high dimension per-user profile — is now headed to Europe’s top court (so likely has more years to run).

But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)

“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case and in a certain sense this entwinement of competition law and data protection law is written in a theory of harm in the German competition law.

“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”

“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.

“So Facebook has always been a big case — it might get even bigger in a certain sense.”

France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.

That joint working has included a competition complaint against Apple’s pro-user-privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there too of oversight bodies seeking to bridge legal silos to crack the code of how to effectively regulate tech giants — whose market power, panellists agreed, is predicated on earlier failures of competition law enforcement that allowed tech platforms to buy up rivals and sew up access to user data, entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.

The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).

Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.

The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest back control from tech giants to ensure that consumers and competitors aren’t left trampled into the dust by data-mining giants.

Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.

“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.

“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”

During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.

“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”

“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.

“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”

Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”

Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine it hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, the chief deputy AG at the New York Attorney General’s office, which is leading a major multi-state antitrust case against Facebook.

Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.

She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition — where companies have, historically, been given at best what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure — and to be prepared to litigate more.

As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.

“That is what is most galling to me that even where we take action, in our best faith good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking isn’t working. We need different action to keep us from having the same conversation again and again.”

Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.

“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.

“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.

“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement but I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”

As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.

So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling round the edges of digital market failure, and it’s break-ups of US tech giants that’s what’s really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)

The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.

The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to wave through another Google digital merger (its acquisition of fitness wearable maker Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.

Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).

“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.

“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.

“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”

The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenges of policing “very complicated” situations involving conflicts of interests is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.

(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had Brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)

Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”

“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it, when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO, we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.

“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.

Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.

It’s true that the Commission is not sitting on its hands in the face of tech giant market power.

At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)

Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.

“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”

“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”

One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.


UK’s ICO warns over ‘big data’ surveillance threat of live facial recognition in public

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance in public — to set out what she dubs the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping into the tech — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of it.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka, its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion — including the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an Opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK’s data adequacy agreement from the EU is dependent on it maintaining essentially equivalent protections for people’s data. Without this coveted status, UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as US companies now do, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.

 

#artificial-intelligence, #biometrics, #clearview-ai, #data-protection, #data-protection-law, #elizabeth-denham, #europe, #european-commission, #european-union, #facial-recognition, #general-data-protection-regulation, #information-commissioners-office, #law-enforcement, #privacy, #privacy-international, #safe-harbor, #surveillance, #tc, #uk-government, #united-kingdom

UK PM Boris Johnson’s Tories guilty of spamming voters

The governing party of the UK has been fined £10k by the national data protection watchdog for sending spam.

The Information Commissioner’s Office (ICO) has sanctioned the Conservative Party following an investigation triggered by complaints from 51 recipients of unwanted marketing emails sent in the name of prime minister Boris Johnson.

The emails in question were sent during eight days in July 2019 after Johnson had been elected as Party leader (and also therefore became UK PM) — urging the recipients to click on a link that directed them to a website for joining the Conservative Party.

Direct marketing is regulated in the UK by PECR (the Privacy and Electronic Communications Regulations) — which requires senders to obtain individual consent to distribute digital marketing missives.

But the ICO’s investigation found that the Conservative Party lacked written policies addressing PECR and appeared to be operating under the misguided assumption that their “legitimate interests” overrode the legal requirements related to sending this type of direct marketing.

The Party had also switched bulk email provider, during which process unsubscribe records were apparently lost. But of course that’s not an excuse for breaking the law. (Indeed, record-keeping is a core requirement of UK data protection law, especially since the EU General Data Protection Regulation was transposed into national law back in 2018.) And the ICO found the Tories were unable to adequately explain what had gone wrong.
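To make the compliance point concrete: under PECR a sender needs a positive opt-in record, and an intact suppression (unsubscribe) list, before any marketing email goes out. Below is a minimal sketch of that kind of check, assuming an entirely hypothetical data model; it is illustrative only, not the Party’s mailing tooling or anything prescribed by the ICO.

```python
# Minimal sketch (hypothetical data model) of a PECR-style pre-send check:
# only recipients with a recorded opt-in and no unsubscribe on file should
# receive a marketing email.
from dataclasses import dataclass

@dataclass
class Recipient:
    email: str
    consented: bool     # explicit opt-in recorded for this sender (hypothetical field)
    unsubscribed: bool  # suppression record carried over from earlier campaigns

def sendable(recipients: list[Recipient]) -> list[str]:
    """Keep only addresses with a valid consent record and no unsubscribe on file."""
    return [r.email for r in recipients if r.consented and not r.unsubscribed]

mailing_list = [
    Recipient("member@example.com", consented=True, unsubscribed=False),
    Recipient("lapsed@example.com", consented=True, unsubscribed=True),     # a lost unsubscribe record would wrongly re-include this person
    Recipient("prospect@example.com", consented=False, unsubscribed=False),  # no opt-in, so out of scope
]

print(sendable(mailing_list))  # ['member@example.com']
```

The point of the exercise is simply that the suppression data has to survive any change of provider; losing it breaks the check entirely.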

In another damning twist, the Conservative Party had been subject to what the ICO calls “detailed engagement” at the time it was spamming people.

This was a result of wider action by the regulator, looking into the ecosystem and ethics around online political ads in the wake of the Cambridge Analytica scandal — and the Party had already been warned of inadequate standards in its compliance with data protection and privacy law. But it went ahead and spammed people anyway. 

So while ‘only’ 51 complaints were received by the ICO from individual recipients of Boris Johnson’s spam, the ICO found the Tories could not fully demonstrate they had the proper consents for over a million (1,190,280) direct marketing emails sent between July 24 and 31, 2019. (The ICO takes the view that at least 549,030 of those, which were sent to non-Party members, were “inherently likely” to have the same compliance issues as were identified with the emails sent to the 51 complainants.)

Moreover, the Party continued to have scant regard for the law as it spun up its spam engines ahead of the 2019 General Election — which saw Johnson gain a landslide majority of 80 seats in a winter ballot.

“During the course of the Commissioner’s investigation, the Party proceeded to engage in an industrial-scale direct marketing email exercise during the 2019 General Election campaign, sending nearly 23M emails,” the ICO notes. “This generated a further 95 complaints to the Commissioner, which are likely to have resulted from the Party’s failure to address the compliance issues identified in the Commissioner’s investigation into the July 2019 email campaign and the wider audit of the Party’s processing of personal data.”

Its report also chronicles “extensive delays” by the Conservative Party in responding to its requests for information and clarification — so while the Party was not found to have obstructed the investigation, the regulator does write that its conduct “cannot be characterised as a mitigating factor”.

While the ICO penalty is an embarrassing slap for Boris Johnson’s Tories, a data audit of all the main UK political parties the regulator put out last year spared no blushes — with all parties found wanting in how they handle and safeguard voter information.

However, it’s only the Conservatives whose fast and loose attitude toward people’s data and privacy online could have contributed to their ability to consolidate power at the last election.

#boris-johnson, #cambridge-analytica, #computing, #conservative-party, #data-protection, #data-protection-law, #data-security, #digital-marketing, #email, #european-union, #general-data-protection-regulation, #general-election, #leader, #marketing, #spamming, #tc, #united-kingdom

EU bodies’ use of US cloud services from AWS, Microsoft being probed by bloc’s privacy chief

Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants Amazon and Microsoft, under so-called Cloud II contracts inked between European bodies, institutions and agencies and AWS and Microsoft.

A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.

Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.

In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).

That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.

Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.

Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”

Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.

The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.

The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).

Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.

The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.

To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.

Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.

How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.

The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).

Fixing U.S. surveillance law, meanwhile — so that it gains independent oversight and accessible redress mechanisms for non-citizens, and is no longer considered a threat to EU people’s data, as the CJEU judges have repeatedly found it to be — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.

Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.

#amazon, #aws, #cloud, #cloud-services, #data-protection, #data-protection-law, #data-security, #eu-us-privacy-shield, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #lawyer, #max-schrems, #microsoft, #privacy, #surveillance-law, #united-states, #wojciech-wiewiorowski

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefitting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics accusing commissioner Helen Dixon and her team of failing to investigate scores of complaints and, in the instances where probes have been opened, taking years to investigate — before opting for weak enforcement at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review, but a resolution has still yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ’emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal base for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent it’s not meeting the bar either because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

 

#data-protection, #data-protection-commission, #data-protection-law, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #helen-dixon, #ireland, #privacy, #privacy-policy, #social, #social-media, #terms-of-service, #whatsapp

China’s internet regulator takes aim at forced data collection

China is a step closer to cracking down on unscrupulous data collection by app developers. This week, the country’s cybersecurity watchdog began seeking comment on the range of user information that apps from instant messengers to ride-hailing services are allowed to collect.

The move follows in the footsteps of a proposed data protection law that was released in October and is currently under review. The comprehensive data privacy law is set to be a “milestone” if passed and implemented, China Daily, the Chinese Communist Party’s official mouthpiece, wrote in an editorial. The law is set to restrict data practices not just by private firms but also among government departments.

“Some leaking of personal information has resulted in economic losses for individuals when the information is used to swindle the targeted individual of his or her money,” said the party paper. “With increasingly advanced technology, the collection of personal information has been extended to biological information such as an individual’s face or even genes, which could result in serious consequences if such information is misused.”

Apps in China often force users into surrendering excessive personal information by declining access when users refuse to consent. The draft rules released this week take aim at the practice by defining the types of data collection that are “legal, proper and necessary.”

According to the draft, “necessary” data are those that ensure the “normal operation of apps’ basic functions.” As long as users have allowed the collection of necessary data, apps must grant them access.

Here are a few examples of what’s considered “necessary” personal data for different types of apps, as translated by China Law Translate.

  • Navigation: location
  • Ride-hailing: the registered user’s real identity (normally in the form of one’s mobile phone number in China) and location information
  • Messaging: the registered user’s real identity and contact list
  • Payment: the registered user’s real identity, the payer/payee’s bank information
  • Online shopping: the registered user’s real identity, payment details, information about the recipient like their name, address and phone number
  • Games: the registered user’s real identity
  • Dating: the registered user’s real identity, and the age, sex and marital status of the person looking for marriage or dating

There are also categories of apps that are required to grant users access without gathering any personal information upfront: live streaming, short video, video/music streaming, news, browsers, photo editors, and app stores.
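To illustrate how a developer might operationalise the draft’s category lists above, here is a hypothetical sketch encoding them as a simple configuration and flagging any data request that goes beyond a category’s “necessary” set. The field names and the checking function are invented for illustration; the draft itself prescribes no schema or enforcement mechanism.

```python
# Hypothetical config mapping app categories to the draft's "necessary" data,
# per the China Law Translate rendering quoted above. Field names are invented.
NECESSARY_DATA = {
    "navigation":      {"location"},
    "ride_hailing":    {"real_identity", "location"},
    "messaging":       {"real_identity", "contact_list"},
    "payment":         {"real_identity", "bank_information"},
    "online_shopping": {"real_identity", "payment_details", "recipient_details"},
    "games":           {"real_identity"},
    "dating":          {"real_identity", "age", "sex", "marital_status"},
    # Categories the draft says must work without collecting personal data upfront:
    "live_streaming": set(), "short_video": set(), "news": set(), "browser": set(),
}

def excessive_fields(category: str, requested: set) -> set:
    """Return any requested fields that exceed the category's 'necessary' allowance."""
    return requested - NECESSARY_DATA.get(category, set())

# A navigation app asking for the user's contact list would be flagged:
print(excessive_fields("navigation", {"location", "contact_list"}))  # {'contact_list'}
```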

It’s worth noting that while the draft provides clear rules for apps to follow, it gives no details on how they will be enforced or how offenders will be punished. For instance, will app stores incorporate the benchmark into their approval process? Or will internet users be the watchdog? It remains to be seen.

#asia, #china, #data-protection-law, #government, #privacy

UK class action style claim filed over Marriott data breach

A class action style suit has been filed in the UK against hotel group Marriott International over a massive data breach that exposed the information of some 500 million guests around the world, including around 30 million residents of the European Union, between July 2014 and September 2018.

The representative legal action against Marriott has been filed by UK resident, Martin Bryant, on behalf of millions of hotel guests domiciled in England & Wales who made reservations at hotel brands globally within the Starwood Hotels group, which is now part of Marriott International.

Hackers gained access to the systems of the Starwood Hotels group, starting in 2014, where they were able to help themselves to information such as guests’ names; email and postal addresses; telephone numbers; gender and credit card data. Marriott International acquired the Starwood Hotels group in 2016 — but the breach went undiscovered until 2018.

Bryant is being represented by international law firm, Hausfeld, which specialises in group actions.

Commenting in a statement, Hausfeld partner, Michael Bywell, said: “Over a period of several years, Marriott International failed to take adequate technical or organisational measures to protect millions of their guests’ personal data which was entrusted to them. Marriott International acted in clear breach of data protection laws specifically put in place to protect data subjects.”

“Personal data is increasingly critical as we live more of our lives online, but as consumers we don’t always realise the risks we are exposed to when our data is compromised through no fault of our own. I hope this case will raise awareness of the value of our personal data, result in fair compensation for those of us who have fallen foul of Marriott’s vast and long-lasting data breach, and also serve notice to other data owners that they must hold our data responsibly,” added Bryant in another supporting statement.

We’ve reached out to Marriott International for comment on the legal action.

A claim website for the action invites other eligible UK individuals to register their interest — and “hold Marriott to account for not securing your personal data”, as it puts it.

Here are the details of who is eligible to register their interest:

The ‘class’ of claimants on whose behalf the claim is brought includes all individuals who at any date prior to 10 September 2018 made a reservation online at a hotel operating under any of the following brands: W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotel & Resorts, Four Points by Sheraton, Design Hotels. In addition, any other brand owned and/or operated by Marriott International Inc or Starwood Hotels and Resorts Worldwide LLC. The individuals must have been resident in England and Wales at some point during the relevant period prior to 10 September 2018 and are resident in England and Wales at the date the claim was issued. They must also have been at least 18 years old at the date the claim was issued.

The claim is being brought as a representative action under Rule 19.6 of the Civil Procedure Rules, per a press release, which also notes that everyone with the same interest as Bryant is included in the claimant class unless they opt out.

Those eligible to participate face no fees or costs, nor do affected guests face any financial risk from the litigation — which is being fully funded by Harbour Litigation Funding, a global litigation funder.

The suit is the latest sign that litigation funders are willing to take a punt on representative actions in the UK as a route to obtaining substantial damages for data issues. Another class action style suit was announced last week — targeting tracking cookies operated by data broker giants, Oracle and Salesforce.

Both lawsuits follow a landmark decision by a UK appeals court last year, which allowed a class action-style suit to proceed against Google’s use of tracking cookies, between 2011 and 2012, to override iPhone users’ privacy settings in Apple’s Safari browser, overturning an earlier court decision to toss the case.

The other unifying factor is the existence of Europe’s General Data Protection Regulation (GDPR) framework which has opened the door to major fines for data protection violations. So even if EU regulators continue to lack uniform vigour in enforcing data protection law, there’s a chance the region’s courts will do the job for them if more litigation funders see value in bringing representative cases to pursue damages for privacy violations.

The dates of the Marriott data breach mean it falls under the GDPR — which came into application in May 2018.

The UK’s data watchdog, the ICO, proposed a $123M fine for the security failing in July last year — saying then that the hotel operator had “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems”.

However it has yet to hand down a final decision. Asked when the Marriott decision will be finalized, an ICO spokeswoman told us the “regulatory process” has been extended until September 30. No additional detail was offered to explain the delay.

Here’s the regulator’s statement in full:

Under Schedule 16 of the Data Protection Act 2018, Marriott has agreed to an extension of the regulatory process until 30 September. We will not be commenting until the regulatory process has concluded.

#data-protection-law, #europe, #european-union, #gdpr, #general-data-protection-regulation, #hotels, #iphone, #law, #lawsuit, #litigation, #marriott, #marriott-international, #netherlands, #privacy, #salesforce, #starwood, #united-kingdom

Oracle and Salesforce hit with GDPR class action lawsuits over cookie tracking consent

The use of third party cookies for ad tracking and targeting by data broker giants Oracle and Salesforce is the focus of class action style litigation announced today in the UK and the Netherlands.

The suits will argue that mass surveillance of Internet users to carry out real-time bidding ad auctions cannot possibly be compatible with strict EU laws around consent to process personal data.

The litigants believe the collective claims could exceed €10BN, should they eventually prevail in their arguments — though such legal actions can take several years to work their way through the courts.

In the UK, the case may also face some legal hurdles given the lack of an established model for pursuing collective damages in cases relating to data rights. Though there are signs that’s changing.

Non-profit foundation, The Privacy Collective, has filed one case today with the District Court of Amsterdam, accusing the two data broker giants of breaching the EU’s General Data Protection Regulation (GDPR) in their processing and sharing of people’s information via third party tracking cookies and other adtech methods.

The Dutch case, which is being led by law firm bureau Brandeis, is the biggest-ever class action in The Netherlands related to violation of the GDPR — with the claimant foundation representing the interests of all Dutch citizens whose personal data has been used without their consent and knowledge by Oracle and Salesforce.

A similar case is due to be filed later this month at the High Court in London, England, which will make reference to the GDPR and the UK’s PECR (Privacy and Electronic Communications Regulations) — the latter governing the use of personal data for marketing communications. The case there is being led by law firm Cadwalader.

Under GDPR, consent for processing EU citizens’ personal data must be informed, specific and freely given. The regulation also confers rights on individuals around their data — such as the ability to receive a copy of their personal information.

It’s those requirements the litigation is focused on, with the cases set to argue that the tech giants’ third party tracking cookies, BlueKai and Krux — trackers that are hosted on scores of popular websites, such as Amazon, Booking.com, Dropbox, Reddit and Spotify to name a few — along with a number of other tracking techniques are being used to misuse Europeans’ data on a massive scale.

Per Oracle marketing materials, its Data Cloud and BlueKai Marketplace provide partners with access to some 2BN global consumer profiles. (Meanwhile, as we reported in June, BlueKai suffered a data breach that exposed billions of those records to the open web.)

While Salesforce claims its marketing cloud ‘interacts’ with more than 3BN browsers and devices monthly.

Both companies have grown their tracking and targeting capabilities via acquisition for years; Oracle bagging BlueKai in 2014 — and Salesforce snaffling Krux in 2016.

 

Discussing the lawsuit in a telephone call with TechCrunch, Dr Rebecca Rumbul, class representative and claimant in England & Wales, said: “There is, I think, no way that any normal person can really give informed consent to the way in which their data is going to be processed by the cookies that have been placed by Oracle and Salesforce.

“When you start digging into it there are numerous, fairly pernicious ways in which these cookies can and probably do operate — such as cookie syncing, and the aggregation of personal data — so there’s really, really serious privacy concerns there.”

The real-time bidding (RTB) process that the pair’s tracking cookies and techniques feed, enabling the background, high-velocity trading of profiles of individual web users as they browse in order to run dynamic ad auctions and serve behavioral ads targeting their interests, has, in recent years, been subject to a number of GDPR complaints, including in the UK.

These complaints argue that RTB’s handling of people’s information is a breach of the regulation because it’s inherently insecure to broadcast data to so many other entities — while, conversely, GDPR bakes in a requirement for privacy by design and default.
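For readers unfamiliar with the mechanic at issue: in an RTB auction a single page load fans out a bid request, carrying the user’s identifiers and inferred interest segments, to many bidders at once, and each bidder receives that data whether or not it wins. The following is a heavily simplified, hypothetical model of that broadcast, not the OpenRTB protocol or either company’s actual pipeline.

```python
# Heavily simplified, hypothetical model of the RTB 'broadcast' concern:
# one bid request carrying a user profile is sent to every bidder in the
# auction, so the personal data spreads regardless of who wins.
from dataclasses import dataclass, field

@dataclass
class BidRequest:
    cookie_id: str   # cross-site identifier (the kind synced between adtech firms)
    url: str         # page the user is currently viewing
    segments: list   # inferred interest/behavioural segments attached to the profile

@dataclass
class Bidder:
    name: str
    seen_profiles: list = field(default_factory=list)

    def bid(self, request: BidRequest) -> float:
        # Every bidder receives (and can retain) the profile, win or lose.
        self.seen_profiles.append(request)
        return 0.1 * len(request.segments)  # toy pricing logic

def run_auction(request: BidRequest, bidders: list) -> "Bidder":
    """Broadcast one request to all bidders and return the highest bidder."""
    return max(bidders, key=lambda b: b.bid(request))

bidders = [Bidder("dsp_a"), Bidder("dsp_b"), Bidder("dsp_c")]
request = BidRequest("abc123", "news.example/article", ["finance", "travel"])
winner = run_auction(request, bidders)

# One ad slot, one winner, but three copies of the user's profile now exist.
print(winner.name, [len(b.seen_profiles) for b in bidders])
```

It is this one-to-many spread of personal data, rather than the auction itself, that the consent and security complaints hinge on.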

The UK Information Commissioner’s Office has, meanwhile, accepted for well over a year that adtech has a lawfulness problem. But the regulator has so far sat on its hands, instead of enforcing the law — leaving the complainants dangling. (Last year, Ireland’s DPC opened a formal investigation of Google’s adtech, following a similar complaint, but has yet to issue a single GDPR decision in a cross-border complaint — leading to concerns of an enforcement bottleneck.)

The two lawsuits targeting RTB aren’t focused on the security allegation, per Rumbul, but are mostly concerned with consent and data access rights.

She confirms they opted to litigate, rather than trying a regulatory complaint route, as a way of exercising their rights given the “David vs Goliath” nature of bringing claims against the tech giants in question.

“If I was just one tiny person trying to complaint to Oracle and trying to use the UK Information Commissioner to achieve that… they simply do not have the resources to direct at one complaint from one person against a company like Oracle — in terms of this kind of scale,” Rumbul told TechCrunch.

“In terms of being able to demonstrate harm, that’s quite a lot of work and what you get back in recompense would probably be quite small. It certainly wouldn’t compensate me for the time I would spend on it… Whereas doing it as a representative class action I can represent everyone in the UK that has been affected by this.

“The sums of money then work — in terms of the depths of Oracle’s pockets, the costs of litigation, which are enormous, and the fact that, hopefully, doing it this way, in a very large-scale, very public forum it’s not just about getting money back at the end of it; it’s about trying to achieve more standardized change in the industry.”

“If Salesforce and Oracle are not successful in fighting this then hopefully that send out ripples across the adtech industry as a whole — encouraging those that are using these quite pernicious cookies to change their behaviours,” she added.

The litigation is being funded by Innsworth, a litigation funder which is also funding Walter Merricks’ class action for 46 million consumers against Mastercard in London courts. And the GDPR appears to be helping to change the class action landscape in the UK — as it allows individuals to take private legal action. The framework can also support third parties to bring claims for redress on behalf of individuals. While changes to domestic consumer rights law also appear to be driving class actions.

Commenting in a statement, Ian Garrard, managing director of Innsworth Advisors, said: “The development of class action regimes in the UK and the availability of collective redress in the EU/EEA mean Innsworth can put money to work enabling access to justice for millions of individuals whose personal data has been misused.”

A separate and still ongoing lawsuit in the UK, which is seeking damages from Google on behalf of Safari users whose privacy settings it historically ignored, also looks to have bolstered the prospects of class action style legal actions related to data issues.

While the courts initially tossed the suit last year, the appeals court overturned that ruling — rejecting Google’s argument that UK and EU law requires “proof of causation and consequential damage” in order to bring a claim related to loss of control of data.

The judge said the claimant did not need to prove “pecuniary loss or distress” to recover damages, and also allowed the class to proceed without all the members having the same interest.

Discussing that case, Rumbul suggests a pending final judgement there (likely next year) may have a bearing on whether the lawsuit she’s involved with can be taken forward in the UK.

“I’m very much hoping that the UK judiciary are open to seeing these kind of cases come forward because without these kinds of things as very large class actions it’s almost like closing the door on this whole sphere of litigation. If there’s a legal ruling that says that case can’t go forward and therefore this case can’t go forward I’d be fascinated to understand how the judiciary think we’d have any recourse to these private companies for these kind of actions,” she said.

Asked why the litigation has focused on Oracle and Salesforce, given there are so many firms involved in the adtech pipeline, she said: “I am not saying that they are necessarily the worst or the only companies that are doing this. They are however huge, huge international multimillion-billion dollar companies. And they specifically went out and purchased different bits of adtech software, like BlueKai, in order to bolster their presence in this area — to bolster their own profits.

“This was a strategic business decision that they made to move into this space and become massive players. So in terms of the adtech marketplace they are very, very big players. If they are able to be held to account for this then it will hopefully change the industry as a whole. It will hopefully reduce the places to hide for the other more pernicious cookie manufacturers out there. And obviously they have huge, huge revenues so in terms of targeting people who are doing a lot of harm and that can afford to compensate people these are the right companies to be targeting.”

Rumbul also told us The Privacy Collective is looking to collect stories from web users who feel they have experienced harm related to online tracking.

“There’s plenty of evidence out there to show that how these cookies work means you can have very, very egregious outcomes for people at an individual level,” she added. “Whether that can be related to personal finance, to manipulation of addictive behaviors, whatever, these are all very, very possible — and they cover every aspect of our lives.”

Consumers in England and Wales and the Netherlands are being encouraged to register their support of the actions via The Privacy Collective’s website.

In a statement, Christiaan Alberdingk Thijm, lead lawyer at Brandeis, said: “Your data is being sold off in real-time to the highest bidder, in a flagrant violation of EU data protection regulations. This ad-targeting technology is insidious in that most people are unaware of its impact or the violations of privacy and data rights it entails. Within this adtech environment, Oracle and Salesforce perform activities which violate European privacy rules on a daily basis, but this is the first time they are being held to account. These cases will draw attention to astronomical profits being made from people’s personal information, and the risks to individuals and society of this lack of accountability.”

“Thousands of organisations are processing billions of bid requests each week with at best inconsistent application of adequate technical and organisational measures to secure the data, and with little or no consideration as to the requirements of data protection law about international transfers of personal data. The GDPR gives us the tool to assert individuals’ rights. The class action means we can aggregate the harm done,” added partner Melis Acuner from Cadwalader in another supporting statement.

We reached out to Oracle and Salesforce for comment on the litigation.

Oracle EVP and general counsel, Dorian Daley, said:

The Privacy Collective knowingly filed a meritless action based on deliberate misrepresentations of the facts.  As Oracle previously informed the Privacy Collective, Oracle has no direct role in the real-time bidding process (RTB), has a minimal data footprint in the EU, and has a comprehensive GDPR compliance program. Despite Oracle’s fulsome explanation, the Privacy Collective has decided to pursue its shake-down through litigation filed in bad faith.  Oracle will vigorously defend against these baseless claims.

A spokeswoman for Salesforce sent us this statement:

At Salesforce, Trust is our #1 value and nothing is more important to us than the privacy and security of our corporate customers’ data. We design and build our services with privacy at the forefront, providing our corporate customers with tools to help them comply with their own obligations under applicable privacy laws — including the EU GDPR — to preserve the privacy rights of their own customers.

Salesforce and another Data Management Platform provider, have received a privacy related complaint from a Dutch group called The Privacy Collective. The claim applies to the Salesforce Audience Studio service and does not relate to any other Salesforce service.

Salesforce disagrees with the allegations and intends to demonstrate they are without merit.

Our comprehensive privacy program provides tools to help our customers preserve the privacy rights of their own customers. To read more about the tools we provide our corporate customers and our commitment to privacy, visit salesforce.com/privacy/products/

#advertising-tech, #amazon, #bluekai, #booking-com, #consumer-rights-law, #data-protection, #data-protection-law, #data-security, #digital-rights, #dropbox, #europe, #european-union, #gdpr, #general-data-protection-regulation, #google, #information-commissioners-office, #ireland, #krux, #law, #lawsuit, #london, #marketing, #mastercard, #netherlands, #online-tracking, #oracle, #privacy, #salesforce, #spotify, #tc, #united-kingdom

TikTok is being investigated by France’s data watchdog

More worries for TikTok: A European data watchdog that’s landed the biggest blow on a tech giant to date — slapping Google with a $57M fine last year (upheld in June) — now has an open investigation into the social video app du jour, TechCrunch has confirmed.

A spokeswoman for France’s CNIL told us it opened an investigation into how the app handles user data in May 2020, following a complaint related to a request to delete a video. Its probe of the video sharing platform was reported earlier by Politico.

Under the European Union’s data protection framework, citizens who have given consent for their data to be processed continue to hold a range of rights attached to their personal data, including the ability to request a copy or deletion of the information, or ask for their data in a portable form.

Additional requirements under the EU’s GDPR (General Data Protection Regulation) include transparency obligations to ensure accountability with the framework. Which means data controllers must provide data subjects with clear information on the purposes of processing — including in order to obtain legally valid consent to process the data.

The CNIL’s spokeswoman told us its complaint-triggered investigation into TikTok has since widened to include issues related to transparency requirements about how it processes user data; users’ data access rights; transfers of user data outside the EU; and steps the platform takes to ensure the data of minors is adequately protected — a key issue, given the app’s popularity with teens.

French data protection law lets children consent to the processing of their data for information society services, such as TikTok, from age 15 (or younger with parental consent).

As regards the original complaint, the CNIL’s spokeswoman said the person in question has since been “invited to exercise his rights with TikTok under the GDPR, which he had not taken beforehand” (via Google Translate).

We’ve reached out to TikTok for comment on the CNIL investigation.

One question mark: it’s not clear whether the French watchdog will be able to see its investigation of TikTok through to a full conclusion.

In further emailed remarks its spokeswoman noted the company is seeking to designate Ireland’s Data Protection Commission (DPC) as its lead authority in Europe — and is setting up an establishment in Ireland for that purpose. (Related: Last week TikTok announced a plan to open its first data center in Europe, which will eventually hold all EU users’ data, also in Ireland.)

If TikTok is able to satisfy the legal conditions it may be able to move any GDPR investigation to the DPC — which has gained a reputation for being painstakingly slow to enforce complex cross-border GDPR cases. Though in late May it finally submitted a first draft decision (on a Twitter case) to the other EU data watchdogs for review. A final decision in that case is still pending.

“The [TikTok] investigations could therefore ultimately be the sole responsibility of the Irish protection authority, which will have to deal with the case in cooperation with the other European data protection authorities,” the CNIL’s spokeswoman noted, before emphasizing there is a standard of proof it will have to meet.

“To come under the sole jurisdiction of the Irish authority and not of each of the authorities, Tiktok will nevertheless have to prove that its establishment in Ireland fulfils the conditions of a ‘principal establishment’ within the meaning of the GDPR.”

Under Europe’s GDPR framework, national data watchdogs have powers to issue penalties of up to 4% of a company’s global annual turnover and can also order infringing data processing to cease. But to do any of that they have to first investigate and go through the process of issuing a decision. In cross-border cases where multiple watchdogs have an interest, there’s also a requirement to liaise with other regulators to ensure there’s broad consensus on any decision.
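For a rough sense of scale, that ceiling works out as simple arithmetic: the higher of €20M or 4% of worldwide annual turnover, per Article 83 of the GDPR. A toy calculation with an invented turnover figure:

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Higher of the EUR 20M floor or 4% of worldwide annual turnover (GDPR Article 83(5))."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Invented turnover figure, purely for illustration:
print(f"{max_gdpr_fine(5_000_000_000):,.0f}")  # a EUR 5BN turnover caps out at 200,000,000
```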

#apps, #cnil, #data-portability, #data-protection, #data-protection-law, #europe, #european-union, #france, #gdpr, #general-data-protection-regulation, #google, #ireland, #privacy, #social, #tc, #tiktok

Legal clouds gather over US cloud services, after CJEU ruling

In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.

Countries like the U.S.

The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.

Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.

That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.

Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.

Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.

The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case by case analysis would be key.

The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.

In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.

The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.

The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.

“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.

“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.

Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”

“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.

Part of the complexity of enforcing Europe’s data protection rules is the lack of a single authority: instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.

Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.

Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.

In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”

He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”

Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)

Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.

In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.

In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.

“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].

“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.

Both DPAs warned the ruling has implications for the use of cloud services where data is processed in other third countries in which the protection of EU citizens’ data likewise cannot be guaranteed, i.e. not just the U.S.

On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.

“Now is the time for Europe’s digital independence,” she added.

Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.

Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB). AKA the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”

Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.

In the wake of the CJEU ruling, data exporters and importers are required to assess a third country’s data regime against EU legal standards before using SCCs to transfer data there.

“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.

“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”
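To make the flow Jelinek describes a little more concrete, here is a minimal sketch that reduces it to a decision function. The class, field and function names are invented for illustration, and in reality each input is a legal assessment rather than a boolean flag:

```python
# A minimal sketch of the post-Schrems II decision flow the EDPB describes.
# The names and boolean inputs are invented for illustration.

from dataclasses import dataclass

@dataclass
class TransferAssessment:
    sccs_in_place: bool                       # SCCs signed between exporter and importer
    destination_essentially_equivalent: bool  # Art. 45(2)-style review of the importer's legal regime
    supplementary_measures_effective: bool    # e.g. strong encryption with keys held only in the EU

def may_transfer(a: TransferAssessment) -> bool:
    """Transfer only if SCCs exist and either the destination's legal regime is
    essentially equivalent or effective supplementary measures close the gap."""
    if not a.sccs_in_place:
        return False
    return a.destination_essentially_equivalent or a.supplementary_measures_effective

# SCCs alone, destination not essentially equivalent, no extra measures -> suspend
print(may_transfer(TransferAssessment(True, False, False)))  # False
```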

Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.

Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if they intend to continue transferring data.

In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.

“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.

One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.

In its place, a sense of déjà vu and a lot more work for lawyers.

#berlin, #china, #cloud, #cloud-services, #data-protection-law, #data-transmission, #digital-rights, #eu-us-privacy-shield, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #general-data-protection-regulation, #germany, #google, #hamburg, #human-rights, #india, #ireland, #law, #mass-surveillance, #max-schrems, #microsoft, #personal-data, #privacy, #russia, #safe-harbour, #schrems-ii, #tc, #twitter, #united-states, #us-government

UK’s COVID-19 health data contracts with Google and Palantir finally emerge

Contracts for a number of coronavirus data deals that the U.K. government inked in haste with U.S. tech giants, including Google and Palantir, plus a U.K.-based AI firm called Faculty, have been published today by openDemocracy and law firm Foxglove — which had threatened legal action for withholding the information.

Concerns had been raised about what is an unprecedented transfer of health data on millions of U.K. citizens to private tech companies, including those with a commercial interest in acquiring data to train and build AI models. Freedom of Information requests for the contracts had been deferred up to now.

In a blog post today, openDemocracy and Foxglove write that the data store contracts show tech companies were “originally granted intellectual property rights (including the creation of databases), and were allowed to train their models and profit off their unprecedented access to NHS data.”

“Government lawyers have now claimed that a subsequent (undisclosed) amendment to the contract with Faculty has cured this problem, however they have not released the further contract. openDemocracy and Foxglove are demanding its immediate release,” they add.

They also say the contracts show that the terms of at least one of the deals — with Faculty — were changed “after initial demands for transparency under the Freedom of Information Act.”

They have published PDFs of the original contracts for Faculty, Google, Microsoft and Palantir. Amazon Web Services was also contracted by the NHS to provide cloud hosting services for the data store.

An excerpt from the Faculty contract regarding IP rights

Back in March, as concern about the looming impact of COVID-19 on the UK’s National Health Service (NHS) took hold, the government revealed plans for the health service to work with the aforementioned tech companies to develop a “data platform” — to help coordinate its response, touting the “power” of “secure, reliable and timely data” to inform “effective” pandemic decisions.

However the government’s lack of transparency around such massive health data deals with commercial tech giants — including the controversial firm Palantir, which has a track record of working with intelligence and law enforcement agencies to track individuals, such as supplying tech to ICE to aid deportations — raises major flags.

As does the ongoing failure by the government to publish the amended contracts — with the claimed tightened IP clauses.

The (now published, original) Google contract — to provide “technical, advisory and other support” to NHSX to tackle COVID-19 — is dated March 1, and specifies that services will be provided by Google to the NHS for zero charge. 

The Palantir contract, for provision of its Foundry data management platform services, is dated as beginning March 12 and expiring June 11 — with the company charging a mere £1 ($1.27) for services provided.

The Faculty contract, meanwhile — providing “strategic support to the NHSX AI Lab” — has a value in excess of £1M (including VAT), and an earlier commencement date (February 3), with an expiry date of August 3.

The government announced its plan to launch an AI Lab within NHSX, the digital transformation branch of the health service, just under a year ago — saying then that it would plough in £250 million to apply AI to healthcare related challenges, and touting the potential for “earlier cancer detection, discovering new treatments and relieving the workload on our NHS workforce.”

The lab had been slated to start spending on AI in 2021. Yet the Faculty contract, in which the AI firm is providing “strategic support to the NHSX AI Lab,” and described as an “AI Lab Strategic Partner,” suggests the pandemic nudged the government to accelerate its plan.

We’ve reached out to the Department of Health with questions.

Last month, NHS England and NHS Improvement responded to an FOI request that TechCrunch filed in early April asking for the contracts — but only to say a response was delayed, already around a month after our original request. (The normal response time for U.K. FOIs is within 20 working days, although the law allows for “a reasonable extension of time to consider the public interest test.”)

Earlier this month, The Telegraph reported that Google-owned DeepMind co-founder Mustafa Suleyman — who has since moved over to work for Google in a policy role — was temporarily taken on by the NHS in March, in a pro bono advisory capacity that reportedly included discussing how to collect patient data.

An NHSX spokesperson told Digital Health that Suleyman had “volunteered his time and expertise for free to help the NHS during the greatest public health threat in a century,” and denied there had been any conflict of interest.

The latter refers to the fact that when Suleyman was still leading DeepMind the company inked a number of data-sharing agreements with NHS Trusts — gaining access to patient health data as part of an app development project. One of these contracts, with the Royal Free NHS Trust, was subsequently found to have breached U.K. data protection law. Regulators said patients could not have “reasonably expected” their information to be shared for this purpose. The Trust was also reprimanded over a lack of transparency.

Google has since taken over DeepMind’s health division and taken on most of the contracts it had inked with the NHS — despite Suleyman’s prior insistence that NHS patient data would not be shared with Google.

 

#amazon-web-services, #artificial-intelligence, #coronavirus, #covid-19, #data-protection-law, #deepmind, #europe, #google, #health, #microsoft, #mustafa-suleyman, #national-health-service, #nhs, #nhsx, #palantir, #tc, #uk-government, #united-kingdom

First major GDPR decisions looming on Twitter and Facebook

The lead data regulator for much of big tech in Europe is moving inexorably towards issuing its first major cross-border GDPR decision — saying today it’s submitted a draft decision related to Twitter’s business to its fellow EU watchdogs for review.

“The draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR,” said the Irish Data Protection Commission (DPC) in a statement.

Europe’s General Data Protection Regulation came into application two years ago, updating the European Union’s long-standing data protection framework and baking in supersized fines for compliance violations. More interestingly, regulators have the power to order that violating data processing cease. And, in many EU countries, third parties such as consumer rights groups can file complaints on behalf of individuals.

Since the GDPR began being applied, there have been thousands of complaints filed across the bloc, targeting companies large and small — alongside a rising clamour around a lack of enforcement in major cross-border cases pertaining to big tech.

So the timing of the DPC’s announcement on reaching a draft decision in its Twitter probe is likely no accident. (GDPR’s actual anniversary of application is May 25.)

The draft decision relates to an inquiry the regulator instigated itself, in November 2018, after the social network had reported a data breach — as data controllers are required to do promptly under GDPR, risking penalties should they fail to do so.

Other interested EU watchdogs (all of them in this case) will now have one month to consider the decision — and lodge “reasoned and relevant objections” should they disagree with the DPC’s reasoning, per the GDPR’s one-stop-shop mechanism which enables EU regulators to liaise on cross-border inquiries.

In instances where there is disagreement between DPAs on a decision the regulation contains a dispute resolution mechanism (Article 65) — which loops in the European Data Protection Board (EDPB) to make a final decision on a majority basis.

On the Twitter decision, the DPC told us it’s hopeful this can be finalized in July.

Commissioner Helen Dixon has previously said the first cross-border decisions would be coming “early” in 2020. However the complexity of working through new processes — such as the one-stop-shop — appears to have taken EU regulators longer than hoped.

The DPC is also dealing with a massive caseload at this point, with more than 20 cross-border investigations related to complaints and/or inquiries still pending decisions, and active probes into the data processing habits of a large number of tech giants, including Apple, Facebook, Google, Instagram, LinkedIn, Tinder, Verizon (TechCrunch’s parent company) and WhatsApp, in addition to its domestic caseload (all while operating with a budget that’s considerably less than it requested from the Irish government).

The scope of some of these major cross-border inquiries may also have bogged Ireland’s regulator down.

But — two years in — there are signs of momentum picking up, with the DPC’s deputy commissioner, Graham Doyle, pointing today to developments on four additional investigations from the cross-border pile — all of which concern Facebook-owned platforms.

The furthest along of these is a probe into the level of transparency the tech giant provides about how user data is shared between its WhatsApp and Facebook services.

“We have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their submissions which will be taken into account by the DPC before preparing a draft decision in that matter also for Article 60 purposes,” said Doyle in a statement on that. “The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”

The other three cases the DPC said it’s making progress on relate to GDPR consent complaints filed back in May 2018 by the EU privacy rights not-for-profit, noyb.

noyb argues that Facebook uses a strategy of “forced consent” to continue processing individuals’ personal data — when the standard required by EU law is for users to be given a free choice unless consent is strictly necessary for provision of the service. (And noyb argues that microtargeted ads are not core to the provision of a social networking service; contextual ads could instead be served, for example.)

Back in January 2019, Google was fined $57M by France’s data watchdog, CNIL, over a similar complaint.

Per its statement today, the DPC said it has now completed the investigation phase of this complaint-based inquiry which it said is focused on “Facebook Ireland’s obligations to establish a lawful basis for personal data processing”.

“This inquiry is now in the decision-making phase at the DPC,” it added.

In further related developments, it said it has sent draft inquiry reports to the complainants and companies concerned for the same set of complaints against (Facebook-owned) Instagram and WhatsApp.

Doyle declined to give any firm timeline for when any of these additional inquiries might yield final decisions. But a summer date would, presumably, be the very earliest timeframe possible.

The regulator’s hope looks to be that once the first cross-border decision has made it through the GDPR’s one-stop-shop mechanism — and yielded something all DPAs can sign up to — it will grease the tracks for the next tranche of decisions.

That said, clearly not all inquiries and decisions are equal. And what exactly the DPC decides in such high-profile probes will be key to whether or not there’s disagreement from other data protection agencies. Different EU DPAs can take a harder or softer line on applying the bloc’s rules, with some considerably more ‘business friendly’ than others, albeit the GDPR was intended to try to shrink such differences of application.

If there is disagreement among regulators on major cross border cases, such as the Facebook ones, the GDPR’s one-stop-shop mechanism will require more time to work through to find consensus. So critics of the regulation are likely to have plenty of attack area still.

Some of the inquiries the DPC is leading are also likely to set standards which could have major implications for many platforms and digital businesses so there will be vested interests seeking to influence outcomes on all sides. But with GDPR hitting its second birthday — and still hardly any decision-shaped lumps taken out of big tech — the regional pressure for enforcements to get flowing is massive.

Given the blistering pace of tech developments — and the market muscle of big tech being applied to steamroller individual rights — EU regulators have to be able to close the gap between investigation and enforcement or watch their flagship framework derided as a paper tiger…

Schrems II

Summer is also shaping up to be an interesting time for privacy watchers for another reason, with a landmark decision due from Europe’s top court on July 16 on the so called ‘Schrems II’ case (named for the Austrian lawyer, privacy rights campaigner and noyb founder, Max Schrems, who lodged the original complaint) — which relates to the legality of Standard Contractual Clauses (SCC) as a mechanism for personal data transfers out of the EU.

The DPC’s statement today makes a point of flagging this looming decision, with the regulator writing: “The case concerns proceedings initiated and pursued in the Irish High Court by the DPC which raised a number of significant questions about the regulation of international data transfers under EU data protection law. The judgement from the CJEU on foot of the reference made arising from these proceedings is anticipated to bring much needed clarity to aspects of the law and to represent a milestone in the law on international transfers.”

A legal opinion issued at the end of last year by an influential advisor to the court emphasized that EU data protection authorities have an obligation to step in and suspend data transfers by SCC if they are being used to send citizens’ data to a place where their information cannot be adequately protected.

Should the court hold to that view, all EU DPAs will have an obligation to consider the legality of SCC transfers to the US “on a case-by-case basis”, per Doyle.

“It will be in every single case you’d have to go and look at the set of circumstances in every single case to make a judgement whether to instruct them to cease doing it. There won’t be just a one size fits all,” he told TechCrunch. “It’s an extremely significant ruling.”

(If you’re curious about ‘Schrems I’, read this from 2015.)

#apple, #data-protection, #data-protection-law, #dpc, #europe, #european-data-protection-board, #european-union, #facebook, #france, #gdpr, #general-data-protection-regulation, #google, #helen-dixon, #instagram, #linkedin, #privacy, #schrems, #social-media, #social-network, #tc, #tinder, #united-states, #verizon, #whatsapp

UK’s NHS COVID-19 app lacks robust legal safeguards against data misuse, warns committee

A UK parliamentary committee that focuses on human rights issues has called for primary legislation to be put in place to ensure that legal protections wrap around the national coronavirus contact tracing app.

The app, called NHS COVID-19, is being fast tracked for public use — with a test ongoing this week in the Isle of Wight. It’s set to use Bluetooth Low Energy signals to log social interactions between users to try to automate some contacts tracing based on an algorithmic assessment of users’ infection risk.
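For readers unfamiliar with how such apps work under the hood, here is a purely illustrative sketch of Bluetooth proximity logging feeding a toy exposure score. The identifiers, thresholds and weights are invented and do not reflect the NHSX app’s actual model:

```python
# Purely illustrative: the shape of BLE proximity logging and a toy exposure
# score. Identifiers, thresholds and weights are invented; NHSX has not
# published its risk model in this form.

from dataclasses import dataclass

@dataclass
class ProximityEvent:
    other_device_id: str  # rotating anonymous identifier broadcast over Bluetooth LE
    rssi_dbm: int         # received signal strength, a rough proxy for distance
    duration_s: int       # how long the two devices stayed in range

def exposure_score(events: list) -> float:
    """Toy risk score: weight longer, closer contacts more heavily."""
    score = 0.0
    for e in events:
        closeness = max(0.0, (e.rssi_dbm + 90) / 30)  # ~0 when far away, ~1 when very close
        score += closeness * (e.duration_s / 60)      # scaled by minutes in range
    return score

log = [ProximityEvent("a3f9", rssi_dbm=-65, duration_s=900),
       ProximityEvent("b7c2", rssi_dbm=-88, duration_s=120)]
print(round(exposure_score(log), 2))  # higher score = closer, longer contact
```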

The NHSX has said the app could be ready for launch within a matter of weeks but the committee says key choices related to the system architecture create huge risks for people’s rights that demand the safeguard of primary legislation.

“Assurances from Ministers about privacy are not enough. The Government has given assurances about protection of privacy so they should have no objection to those assurances being enshrined in law,” said committee chair, Harriet Harman MP, in a statement.

“The contact tracing app involves unprecedented data gathering. There must be robust legal protection for individuals about what that data will be used for, who will have access to it and how it will be safeguarded from hacking.

“Parliament was able quickly to agree to give the Government sweeping powers. It is perfectly possible for parliament to do the same for legislation to protect privacy.”

The NHSX, a digital arm of the country’s National Health Service, is in the process of testing the app — which it’s said could be launched nationally within a few weeks.

The government has opted for a system design that will centralize large amounts of social graph data when users experiencing COVID-19 symptoms (or who have had a formal diagnosis) choose to upload their proximity logs.
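The privacy significance of that design choice is easier to see side by side with the decentralized alternative. The following is a minimal sketch; the function names and data structures are invented for illustration and do not reflect the NHSX codebase:

```python
# Purely illustrative contrast between a centralised contact tracing design and
# a decentralised one. Neither reflects the actual NHSX implementation.

# Centralised model: a symptomatic user uploads their whole proximity log and
# the server decides who to notify, so the server can see the social graph.
def centralised_report(server_db: dict, user_id: str, proximity_log: list) -> list:
    server_db[user_id] = proximity_log  # the contact graph now lives centrally
    return proximity_log                # server-side matching picks who to alert

# Decentralised model: only the infected user's rotating identifiers are
# published; each phone checks its own local log, so the graph stays on-device.
def decentralised_check(published_infected_ids: set, local_log: list) -> bool:
    return any(seen_id in published_infected_ids for seen_id in local_log)

# Example: a phone that logged id "b7c2" sees it on the published list and self-alerts.
print(decentralised_check({"b7c2"}, ["a3f9", "b7c2"]))  # True
```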

Earlier this week we reported on one of the committee hearings — when it took testimony from NHSX CEO Matthew Gould and the UK’s information commissioner, Elizabeth Denham, among other witnesses.

Warning now over a lack of parliamentary scrutiny — around what it describes as an unprecedented expansion of state surveillance — the committee report calls for primary legislation to ensure “necessary legal clarity and certainty as to how data gathered could be used, stored and disposed of”.

The committee also wants to see an independent body set up to carry out oversight monitoring and guard against ‘mission creep’ — a concern that’s also been raised by a number of UK privacy and security experts in an open letter late last month.

“A Digital Contact Tracing Human Rights Commissioner should be responsible for oversight and they should be able to deal with complaints from the Public and report to Parliament,” the committee suggests.

Prior to publishing its report, the committee wrote to health minister Matt Hancock, raising a full spectrum of concerns — receiving a letter in response.

In this letter, dated May 4, Hancock told it: “We do not consider that legislation is necessary in order to build and deliver the contact tracing app. It is consistent with the powers of, and duties imposed on, the Secretary of State at a time of national crisis in the interests of protecting public health.”

The committee’s view is Hancock’s ‘letter of assurance’ is not enough given the huge risks attached to the state tracking citizens’ social graph data.

“The current data protection framework is contained in a number of different documents and it is nearly impossible for the public to understand what it means for their data which may be collected by the digital contact tracing system. Government’s assurances around data protection and privacy standards will not carry any weight unless the Government is prepared to enshrine these assurances in legislation,” it writes in the report, calling for a bill that it says must include a number of “provisions and protections”.

Among the protections the committee is calling for are limits on who has access to data and for what purpose.

“Data held centrally may not be accessed or processed without specific statutory authorisation, for the purpose of combatting Covid-19 and provided adequate security protections are in place for any systems on which this data may be processed,” it urges.

It also wants legal protections against data reconstruction — by different pieces of data being combined “to reconstruct information about an individual”.

The report takes a very strong line — warning that no app should be released without “strong protections and guarantees” on “efficacy and proportionality”.

“Without clear efficacy and benefits of the app, the level of data being collected will not be justifiable and it will therefore fall foul of data protection law and human rights protections,” says the committee.

The report also calls for regular reviews of the app — looking at efficacy; data safety; and “how privacy is being protected in the use of any such data”.

It also makes a blanket call for transparency, with the committee writing that the government and health authorities “must at all times be transparent about how the app, and data collected through it, is being used”.

A lack of transparency around the project was another of the concerns raised by the 177 academics who signed the open letter last month.

The government has committed to publishing data protection impact assessments for the app. But the ICO’s Denham still hadn’t had sight of this document as of this Monday.

Another call by the committee is for a time-limit to be attached to any data gathered by or generated via the app. “Any digital contact tracing (and data associated with it) must be permanently deleted when no longer required and in any event may not be kept beyond the duration of the public health emergency,” it writes.

We’ve reached out to the Department of Health and NHSX for comment on the human rights committee’s report.

There’s another element to this fast moving story: Yesterday the Financial Times reported that the NHSX has inked a new contract with an IT supplier which suggests it might be looking to change the app architecture — moving away from a centralized database to a decentralized system for contacts tracing. Although NHSX has not confirmed any such switch at this point.

Some other countries have reversed course in their choice of app architecture after running into technical challenges related to Bluetooth. The need to ensure public trust in the system was also cited by Germany for switching to a decentralized model.

The human rights committee report highlights a specific app efficacy issue of relevance to the UK, which it points out is also linked to these system architecture choices, noting that: “The Republic of Ireland has elected to use a decentralised app and if a centralised app is in use in Northern Ireland, there are risks that the two systems will not be interoperable which would be most unfortunate.”

#apps, #bluetooth, #data-protection-law, #digital-rights, #elizabeth-denham, #europe, #germany, #health, #human-rights, #identity-management, #ireland, #law, #matt-hancock, #mobile, #national-health-service, #nhs, #nhs-covid-19, #nhsx, #northern-ireland, #privacy, #privacy-policy, #terms-of-service, #united-kingdom