Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts where it could not verify the users were not children.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, setting an age threshold below which children cannot themselves consent to their data being used. The threshold varies per EU Member State: the regulation’s default is 16, though Member States can lower it — to a hard floor of 13, and no lower (several EU countries keep the limit at 16).
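
For a concrete sense of how that patchwork works in practice, here’s a minimal sketch — in Python, with an illustrative, non-exhaustive country map whose values are examples rather than legal advice — of the per-country lookup a platform would need before relying on a child’s consent:

```python
# Illustrative sketch of a GDPR Article 8 consent-age check.
# The per-country values below are examples only — Member State
# implementations vary between the floor of 13 and the default of 16.
DEFAULT_CONSENT_AGE = 16  # GDPR default where a Member State sets nothing lower

CONSENT_AGE_BY_COUNTRY = {
    "IE": 16,  # Ireland
    "DE": 16,  # Germany
    "FR": 15,  # France
    "IT": 14,  # Italy
    "DK": 13,  # Denmark
}

def needs_parental_consent(age: int, country_code: str) -> bool:
    """True if consent-based processing requires parent/guardian sign-off."""
    threshold = CONSENT_AGE_BY_COUNTRY.get(country_code, DEFAULT_CONSENT_AGE)
    return age < threshold
```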

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data — such as making younger teens’ accounts private by default and restricting features that intentionally encourage interaction with other TikTok users to those aged over 16.

On international data transfers, meanwhile, TikTok claims to use “approved methods”. The picture is rather more complicated than its statement implies, however: there is no EU data adequacy agreement in place with China, which complicates transfers of Europeans’ data there.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark CJEU ruling last year. That judgement invalidated a flagship EU-US data transfer arrangement and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis — and, where a DPA has concerns, it must step in and suspend non-secure data flows.

The CJEU ruling means the mere fact of using a mechanism like SCCs says nothing, on its own, about the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be proactive about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year provides details on the so-called ‘supplementary measures’ that a data controller may be able to apply in order to increase the level of protection around a specific transfer so the information can be legally taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok could apply such a fix, given that its platform and algorithms continuously mine users’ data to customize the content people see and keep them engaged with TikTok’s ad platform.
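
To make that tension concrete, here’s a minimal sketch of the exporter-held-key encryption such guidance contemplates, using Python’s cryptography package (the data record is invented for illustration). Protection of this strength only works if the importer never receives the key — which is precisely what a recommendation-and-ads business model can’t tolerate:

```python
# Minimal sketch of a 'strong encryption' supplementary measure:
# the EU-side exporter encrypts before transfer and keeps the key,
# so the third-country importer holds only ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held ONLY by the EU data exporter
f = Fernet(key)

record = b'{"user_id": 42, "watch_history": ["..."]}'  # illustrative record
ciphertext = f.encrypt(record)  # all the third-country importer ever sees

# The catch for a platform like TikTok: ciphertext can be stored but not
# mined — recommendation and ad-targeting systems need the plaintext,
# and shipping the key to get it would defeat the measure.
assert f.decrypt(ciphertext) == record
```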

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules (and only doing so years after the first complaints had been lodged).

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year the GDPR technically began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means the new TikTok probes join the back of a much-criticized bottleneck, and a decision on them isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, as of this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.


WhatsApp faces $267M fine for breaching Europe’s GDPR

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) penalty for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once it began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning Articles 5(1)(a), 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline to make all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.

Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it rapped the social network’s knuckles over a historical security breach with a fine of $550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is the GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, which are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators across the 27 Member States. The GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), but objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions) — as has happened in this WhatsApp case.

Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to its draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the DPC had also suggested an even tinier penalty in the first instance.

While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes (over WhatsApp’s lossy hashing and so forth) — the fact that corrections are being made to its decisions, and that conclusions can land even when not jointly agreed (via a consensus pushed through by the EDPB), is a sign that the process, while slow and creaky, is working. At least technically.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (those issues it doesn’t open an enquiry into or complaints it simply drops or ignores), with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU.

The conclusion that flows from that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is the equivalent of a parking ticket for Facebook’s business empire, orders to change how such adtech giants are able to process people’s information at least have the potential to be a far more significant correction on problematic business models.

Again, though, time will be needed to tell whether such wider orders are having the sought-for impact.

In a statement reacting to the DPC’s WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, they raised concerns about the length of the appeals process and whether the DPC would make a muscular defence of a sanction it had been forced to increase by other EU DPAs.

“WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”


Controversial WhatsApp policy change hit with consumer law complaint in Europe

Facebook has been accused of multiple breaches of European Union consumer protection law as a result of its attempts to force WhatsApp users to accept controversial changes to the messaging platform’s terms of use — such as threatening users that the app would stop working if they did not accept the updated policies by May 15.

The consumer protection umbrella group, the Beuc, said today that together with eight of its member organizations it has filed a complaint with the European Commission and with the European network of consumer authorities.

“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.

“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”

After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.

However the app continues to bug users to accept the update — with no way to decline (users can close the policy prompt but are unable to refuse the new terms or stop the app from repeatedly popping up a screen asking them to accept the update).

“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”

The organization pointed out that WhatsApp’s policy updates remain under scrutiny by privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.

And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”

The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:

“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”

Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:

“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”

The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.

The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.

Although, earlier this year, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.

German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).

Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.

In the meanwhile, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.

There seems to be a growing understanding on both sides of the Atlantic for a joined up approach to regulating platform power and ensuring powerful platforms don’t simply get let off the hook.


UK gets data flows deal from EU — for now

The UK’s digital businesses can breathe a sigh of relief today as the European Commission has officially signed off on data adequacy for the (now) third country, post-Brexit.

It’s a big deal for UK businesses as it means the country will be treated by Brussels as having data protection rules essentially equivalent to those inside the bloc, despite no longer being a member itself — enabling personal data to continue to flow freely from the EU to the UK, and avoiding any new legal barriers.

The granting of adequacy status has been all but assured in recent weeks, after European Union Member States signed off on a draft adequacy arrangement. But the Commission’s adoption of the decision marks the final step in the process — at least for now.

It’s notable that the Commission’s PR includes a clear warning that if the UK seeks to weaken protections afforded to people’s data under the current regime it “will intervene”.

In a statement, Věra Jourová, Commission VP for values and transparency, said:

“The UK has left the EU but today its legal regime of protecting personal data is as it was. Because of this, we are adopting these adequacy decisions today. At the same time, we have listened very carefully to the concerns expressed by the Parliament, the Member States and the European Data Protection Board, in particular on the possibility of future divergence from our standards in the UK’s privacy framework. We are talking here about a fundamental right of EU citizens that we have a duty to protect. This is why we have significant safeguards and if anything changes on the UK side, we will intervene.”

The UK adequacy decision comes with a Sword of Damocles baked in: A sunset clause of four years. It’s a first — so, er, congratulations to the UK government for projecting a perception of itself as untrustworthy over the short run.

This clause means the UK’s regime will face full scrutiny again in 2025, with no automatic continuation if its standards are deemed to have slipped (as many fear they will).

The Commission also emphasizes that its decision does not mean the UK has four ‘guaranteed’ years in the clear. On the contrary, it says it will “continue to monitor the legal situation in the UK and could intervene at any point, if the UK deviates from the level of protection currently in place”.

Third countries without an adequacy agreement — such as the US, which has had adequacy arrangements twice struck down by Europe’s top court (after it found US surveillance law incompatible with EU fundamental rights) — do not enjoy ‘seamless’ legal certainty around personal data flows; they must instead assess each such transfer individually to determine whether (and how) data can legally be moved.

Last week, the European Data Protection Board (EDPB) put out its final guidance for entities wanting to transfer personal data out of the bloc to third countries. And the advice makes it clear that some types of transfers are unlikely to be possible.

For other types of transfers, the advice discusses a number of supplementary measures (including technical steps like robust encryption) that a data controller may be able to use in order to, through their own technical, contractual and organizational effort, ramp up the level of protection to achieve the required standard.

It is, in short, a lot of work. And without today’s adequacy decision UK businesses would have had to get intimately acquainted with the EDPB’s guidance. For now, though, they’ve dodged that bullet.

The qualifier is still very necessary, though, because the UK government has signalled that it intends to rethink data protection.

How exactly it goes about that — and to what extent it changes the current ‘essentially equivalent’ regime — may make all the difference. For example, Digital minister Oliver Dowden has talked about data being “a great opportunity” for the UK, post-Brexit.

And writing in the FT back in February he suggested there will be room for the UK to rewrite its national data protection rules without diverging so much that it puts adequacy at risk. “We fully intend to maintain those world-class standards. But to do so, we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation, word-for-word,” he suggested then, adding that: “Countries as diverse as Israel and Uruguay have successfully secured adequacy with Brussels despite having their own data regimes. Not all of those were identical to GDPR, but equal doesn’t have to mean the same. The EU doesn’t hold the monopoly on data protection.”

The devil will, as they say, be in the detail. But some early signals are concerning — and the UK’s startup ecosystem would be well advised to take an active role in impressing upon government the importance of staying aligned with European data standards.

Moreover, there’s also the prospect of a legal challenge to the adequacy decision — even as is, i.e. based on current UK standards (which have plenty of critics). Certainly it can’t be ruled out — and the CJEU hasn’t shied away from quashing other adequacy arrangements it judged to be invalid…

Today, though, the Department for Digital, Culture, Media and Sport (DCMS) has seized the chance to celebrate a PR win, writing that the Commission’s decision “rightly recognises the country’s high data protection standards”.

The department also reiterated the UK government’s intention to “promote the free flow of personal data globally and across borders”, including through what it bills as “ambitious new trade deals and through new data adequacy agreements with some of the fastest growing economies” — simultaneously claiming it would do so “while ensuring people’s data continues to be protected to a high standard”. Pinky promise.

“All future decisions will be based on what maximises innovation and keeps up with evolving tech,” the DCMS added in a press release. “As such, the government’s approach will seek to minimise burdens on organisations seeking to use data to tackle some of the most pressing global issues, including climate change and the prevention of disease.”

In a statement, Dowden also made a point of combining both streams, saying: “We will now focus on unlocking the power of data to drive innovation and boost the economy while making sure we protect people’s safety and privacy.”

UK business and tech associations were just as quick to welcome the Commission’s adequacy decision. The alternative would of course have been very costly disruption.

In a statement, John Foster, director of policy for the Confederation of British Industry, said: “This breakthrough in the EU-UK adequacy decision will be welcomed by businesses across the country. The free flow of data is the bedrock of the modern economy and essential for firms across all sectors — from automotive to logistics — playing an important role in everyday trade of goods and services. This positive step will help us move forward as we develop a new trading relationship with the EU.”

In another supporting statement, Julian David, CEO of techUK, added: “Securing an EU-UK adequacy decision has been a top priority for techUK and the wider tech industry since the day after the 2016 referendum. The decision that the UK’s data protection regime offers an equivalent level of protection to the EU GDPR is a vote of confidence in the UK’s high data protection standards and is of vital importance to UK-EU trade as the free flow of data is essential to all business sectors.

“The data adequacy decision also provides a basis for the UK and EU to work together on global routes for the free flow of data with trust, building on the G7 Digital and Technology declaration and possibly unlocking €2TR of growth. The UK must also now move to complete the development of its own international data transfer regime in order to allow companies in the UK not just to exchange data with the EU but also to be able to access opportunities across the world.”

The Commission has actually adopted two UK adequacy decisions today — one under the General Data Protection Regulation (GDPR) and another for the Law Enforcement Directive.

Discussing key elements in its decision to grant the UK adequacy, EU lawmakers highlighted the fact that the UK’s (current) system is based upon transposed European rules, and that access to personal data by public authorities in the UK (such as for national security reasons) is done under a framework with what it dubbed “strong safeguards” (such as intercepts being subject to prior authorisation by an independent judicial body; measures needing to be necessary and proportionate; and redress mechanisms for those who believe they are subject to unlawful surveillance).

The Commission also noted that the UK is subject to the jurisdiction of the European Court of Human Rights; must adhere to the European Convention on Human Rights; and to the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data — aka “the only binding international treaty in the area of data protection”.

“These international commitments are essential elements of the legal framework assessed in the two adequacy decisions,” the Commission notes.

Data transfers for the purposes of UK immigration control have been excluded from the scope of the adequacy decision adopted under the GDPR — with the Commission saying that’s “in order to reflect a recent judgment of the England and Wales Court of Appeal on the validity and interpretation of certain restrictions of data protection rights in this area”.

“The Commission will reassess the need for this exclusion once the situation has been remedied under UK law,” it added.

So, again, there’s another caveat right there.


EU puts out final guidance on data transfers to third countries

The European Data Protection Board (EDPB) published its final recommendations yesterday, setting out guidance on making transfers of personal data to third countries that comply with EU data protection rules in light of last summer’s landmark CJEU ruling (aka Schrems II).

The long and short of these recommendations — which are fairly long, running to 48 pages — is that some data transfers to third countries will simply not be possible to carry out legally, despite the continued existence of legal mechanisms that can, in theory, be used to make such transfers (like Standard Contractual Clauses, a transfer tool that was recently updated by the Commission).

However it’s up to the data controller to assess the viability of each transfer, on a case by case basis, to determine whether data can legally flow in that particular case. (Which may mean, for example, a business making complex assessments about foreign government surveillance regimes and how they impinge upon its specific operations.)

Companies that routinely take EU users’ data outside the bloc for processing in third countries (like the US), which do not have data adequacy arrangements with the EU, face substantial cost and challenge in attaining compliance — in a best case scenario.

Those that can’t apply viable ‘special measures’ to ensure transferred data is safe are duty bound to suspend data flows — with the risk, should they fail to do that, of being ordered to by a data protection authority (which could also apply additional sanctions).

One alternative option could be for such a firm to store and process EU users’ data locally — within the EU. But clearly that won’t be viable for every company.

Law firms are likely to be very happy with this outcome since there will be increased demand for legal advice as companies grapple with how to structure their data flows and adapt to a post-Schrems II world.

In some EU jurisdictions (such as Germany) data protection agencies are now actively carrying out compliance checks — so orders to suspend transfers are bound to follow.

The European Data Protection Supervisor, meanwhile, is busy scrutinizing EU institutions’ own use of US cloud services giants to see whether high-level arrangements with tech giants like AWS and Microsoft pass muster or not.

Last summer the CJEU struck down the EU-US Privacy Shield — only a few years after the flagship adequacy arrangement was inked. The same core legal issues did for its predecessor, ‘Safe Harbor‘, though that had stood for some fifteen years. And since the demise of Privacy Shield the Commission has repeatedly warned there will be no quick fix replacement this time; nothing short of major reform of US surveillance law is likely to be required.

US and EU lawmakers remain in negotiations over a replacement EU-US data flows deal, but a viable outcome that can stand up to legal challenge — as the prior two agreements could not — may well require years of work, not months.

And that means EU-US data flows are facing legal uncertainty for the foreseeable future.

The UK, meanwhile, has just squeezed a data adequacy agreement out of the Commission — despite some loudly enunciated post-Brexit plans for regulatory divergence in the area of data protection.

If the UK follows through in ripping up key tenets of its inherited EU legal framework there’s a high chance it will also lose adequacy status in the coming years — meaning it too could face crippling barriers to EU data flows. (But for now it seems to have dodged that bullet.)

Data flows to other third countries that also lack an EU adequacy agreement — such as China and India — face the same ongoing legal uncertainty.

The backstory to the EU international data flows issues originates with a complaint — in the wake of NSA whistleblower Edward Snowden’s revelations about government mass surveillance programs, so more than seven years ago — made by the eponymous Max Schrems over what he argued were unsafe EU-US data flows.

His complaint specifically targeted Facebook’s business, calling on the Irish Data Protection Commission (DPC) to use its enforcement powers and suspend Facebook’s EU-US data flows.

A regulatory dance of indecision followed which finally saw legal questions referred to Europe’s top court and — ultimately — the demise of the EU-US Privacy Shield. The CJEU ruling also put it beyond legal doubt that Member States’ DPAs must step in and act when they suspect data is flowing to a location where the information is at risk.

Following the Schrems II ruling, the DPC (finally) sent Facebook a preliminary order to suspend its EU-US data flows last fall. Facebook immediately challenged the order in the Irish courts — seeking to block the move. But that challenge failed. And Facebook’s EU-US data flows are now very much operating on borrowed time.

As one of the platforms subject to Section 702 of the US’ FISA law, its options for applying ‘special measures’ to supplement its EU data transfers look, well, limited to say the least.

It can’t — for example — encrypt the data in a way that ensures it has no access to it (zero access encryption) since that’s not how Facebook’s advertising empire functions. And Schrems has previously suggested Facebook will have to federate its service — and store EU users’ information inside the EU — to fix its data transfer problem.
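
In crude outline, such a federated setup would look something like the sketch below — every name and endpoint here is hypothetical, not Facebook’s actual architecture — with a user’s residency, rather than engineering convenience, deciding where their records live:

```python
# Hypothetical sketch of data-residency routing ('federation'):
# EU residents' data is pinned to EU-located storage so it never
# needs to cross the Atlantic. Names and endpoints are invented.
from dataclasses import dataclass

EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}  # truncated list

@dataclass
class StorageRegion:
    name: str
    endpoint: str

EU_REGION = StorageRegion("eu-central", "https://storage.eu.example.com")
US_REGION = StorageRegion("us-east", "https://storage.us.example.com")

def region_for_user(residency_country: str) -> StorageRegion:
    """Route EU residents' records to EU storage; everyone else to US."""
    return EU_REGION if residency_country in EU_COUNTRIES else US_REGION
```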

Safe to say, the costs and complexity of compliance for certain businesses like Facebook look massive.

But there will be compliance costs and complexity for thousands of businesses in the wake of the CJEU ruling.

Commenting on the EDPB’s adoption of final recommendations, chair Andrea Jelinek said: “The impact of Schrems II cannot be underestimated: Already international data flows are subject to much closer scrutiny from the supervisory authorities who are conducting investigations at their respective levels. The goal of the EDPB Recommendations is to guide exporters in lawfully transferring personal data to third countries while guaranteeing that the data transferred is afforded a level of protection essentially equivalent to that guaranteed within the European Economic Area.

“By clarifying some doubts expressed by stakeholders, and in particular the importance of examining the practices of public authorities in third countries, we want to make it easier for data exporters to know how to assess their transfers to third countries and to identify and implement effective supplementary measures where they are needed. The EDPB will continue considering the effects of the Schrems II ruling and the comments received from stakeholders in its future guidance.”

The EDPB put out earlier guidance on Schrems II compliance last year.

It said the main modifications between that earlier advice and its final recommendations include: “The emphasis on the importance of examining the practices of third country public authorities in the exporters’ legal assessment to determine whether the legislation and/or practices of the third country impinge — in practice — on the effectiveness of the Art. 46 GDPR transfer tool; the possibility that the exporter considers in its assessment the practical experience of the importer, among other elements and with certain caveats; and the clarification that the legislation of the third country of destination allowing its authorities to access the data transferred, even without the importer’s intervention, may also impinge on the effectiveness of the transfer tool”.

Commenting on the EDPB’s recommendations in a statement, law firm Linklaters dubbed the guidance “strict” — warning over the looming impact on businesses.

“There is little evidence of a pragmatic approach to these transfers and the EDPB seems entirely content if the conclusion is that the data must remain in the EU,” said Peter Church, a Counsel at the global law firm. “For example, before transferring personal data to a third country (without adequate data protection laws) businesses must consider not only its law but how its law enforcement and national security agencies operate in practice. Given these activities are typically secretive and opaque, this type of analysis is likely to cost tens of thousands of euros and take time. It appears this analysis is needed even for relatively innocuous transfers.”

“It is not clear how SMEs can be expected to comply with these requirements,” he added. “Given we now operate in a globalised society the EDPB, like King Canute, should consider the practical limitations on its power. The guidance will not turn back the tides of data washing back and forth across the world, but many businesses will really struggle to comply with these new requirements.”


Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

Such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use, is the argument.

The EDPB is responsible for ensuring harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he’s gone further, with the EDPB joining in with the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral micromarketing of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoC risks creating a legal discrimination risk — based on how individual web users are grouped together for ad targeting purposes. Certainly, concerns have been raised over the potential for FLoCs to scale bias and predatory advertising. And it’s also interesting that Google avoided running early tests in Europe, likely owing to the EU’s data protection regime.)

In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” —  except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However by failing to include a prohibition on biometric surveillance in public in the proposed regulation the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing [emphasis ours]:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continue as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.


EU bodies’ use of US cloud services from AWS, Microsoft being probed by bloc’s privacy chief

Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants Amazon and Microsoft, under so-called Cloud II contracts inked earlier between European bodies, institutions and agencies and AWS and Microsoft.

A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.

Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.

In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).

That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.

Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.

Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”

Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.

The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.

The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).

Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.

The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.

To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.

Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.

How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.

The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).

Fixing U.S. surveillance law, meanwhile — so that it gets independent oversight and accessible redress mechanisms for non-citizens in order to no longer be considered a threat to EU people’s data, as the CJEU judges have repeatedly found — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.

Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.


European Parliament amps up pressure on EU-US data flows and GDPR enforcement

European Union lawmakers are facing further pressure to step in and do something about lackadaisical enforcement of the bloc’s flagship data protection regime after the European Parliament voted yesterday to back a call urging the Commission to start an infringement proceeding against Ireland’s Data Protection Commission (DPC) for not “properly enforcing” the regulation.

The Commission and the DPC have been contacted for comment on the parliament’s call.

Last summer the Commission’s own two-year review of the General Data Protection Regulation (GDPR) highlighted a lack of uniformly vigorous enforcement — but commissioners were keener to point out the positives, lauding the regulation as a “global reference point”.

But it’s now nearly three years since the regulation began being applied and criticism over weak enforcement is getting harder for the EU’s executive to ignore.

The parliament’s resolution — which, while non-legally binding, fires a strong political message across the Commission’s bow — singles out the DPC for specific criticism given its outsized role in enforcement of the General Data Protection Regulation (GDPR). It’s the lead supervisory authority for complaints brought against the many big tech companies which choose to site their regional headquarters in the country (on account of its corporate-friendly tax system).

The text of the resolution expresses “deep concern” over the DPC’s failure to reach a decision on a number of complaints against breaches of the GDPR filed the day it came into application, on May 25, 2018 — including against Facebook and Google — and criticises the Irish data watchdog for interpreting ‘without delay’ in Article 60(3) of the GDPR “contrary to the legislators’ intention – as longer than a matter of months”, as they put it.

To date the DPC has only reached a final decision on one cross-border GDPR case — against Twitter.

The parliament also says it’s “concerned about the lack of tech specialists working for the DPC and their use of outdated systems” (which Brave also flagged last year) — as well as criticizing the watchdog’s handling of a complaint originally brought by privacy campaigner Max Schrems years before the GDPR came into application, which relates to the clash between EU privacy rights and US surveillance laws, and which still hasn’t resulted in a decision.

The DPC’s approach to handling Schrems’ 2013 complaint led to a 2018 referral to the CJEU — which in turn led to the landmark Schrems II judgement last summer invalidating the flagship EU-US data transfer arrangement, Privacy Shield.

That ruling did not outlaw alternative data transfer mechanisms but made it clear that EU DPAs have an obligation to step in and suspend data transfers if Europeans’ information is being taken to a third country that does not have essentially equivalent protections to those they have under EU law — thereby putting the ball back in the DPC’s court on the Schrems complaint.

The Irish regulator then sent a preliminary order to Facebook to suspend its data transfers and the tech giant responded by filing for a judicial review of the DPC’s processes. However the Irish High Court rejected Facebook’s petition last week. And a stay on the DPC’s investigation was lifted yesterday — so the DPC’s process of reaching a decision on the Facebook data flows complaint has started moving again.

A final decision could still take several months more, though — as we’ve reported before — as the DPC’s draft decision will also need to be put to the other EU DPAs for review and the chance to object.

The parliament’s resolution states that it “is worried that supervisory authorities have not taken proactive steps under Article 61 and 66 of the GDPR to force the DPC to comply with its obligations under the GDPR”, and — in more general remarks on the enforcement of GDPR around international data transfers — it states that it:

Is concerned about the insufficient level of enforcement of the GDPR, particularly in the area of international transfers; expresses concerns at the lack of prioritisation and overall scrutiny by national supervisory authorities with regard to personal data transfers to third countries, despite the significant CJEU case law developments over the past five years; deplores the absence of meaningful decisions and corrective measures in this regard, and urges the EDPB [European Data Protection Board] and national supervisory authorities to include personal data transfers as part of their audit, compliance and enforcement strategies; points out that harmonised binding administrative procedures on the representation of data subjects and admissibility are needed to provide legal certainty and deal with crossborder complaints;

The knotty, multi-year saga of Schrems’ Facebook data-flows complaint, as played out via the procedural twists of the DPC and Facebook’s lawyers’ delaying tactics, illustrates the multi-layered legal, political and commercial complexities bound up with data flows out of the EU (post-Snowden’s 2013 revelations of US mass surveillance programs) — not to mention the staggering challenge for EU data subjects to actually exercise the rights they have on paper. But these intersecting issues around international data flows do seem to be finally coming to a head, in the wake of the Schrems II CJEU ruling.

The clock is now ticking for the issuing of major data suspension orders by EU data protection agencies, with Facebook’s business first in the firing line.

Other US-based services that are — similarly — subject to the US’ FISA regime (and also move EU users’ data over the pond for processing; and whose businesses are such that they cannot shield user data via ‘zero access’ encryption architecture) are equally at risk of receiving an order to shut down their EU-US data-pipes — or else having to shift data processing for these users inside the EU.
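For a sense of what that ‘zero access’ architecture means in practice, here is a minimal sketch — purely illustrative, not any particular vendor’s implementation — in which data is encrypted on the user’s device with a key the service never holds, so the provider has no plaintext to hand over whatever legal demands it receives. It assumes Python’s cryptography package; the in-memory “server store” and the function names are hypothetical.

```python
# Minimal sketch of 'zero access' client-side encryption (illustrative only).
# The provider stores ciphertext; the key never leaves the user's device,
# so the provider cannot produce plaintext in response to an access demand.
from cryptography.fernet import Fernet

client_key = Fernet.generate_key()  # generated and kept on the user's device
cipher = Fernet(client_key)

def upload(server_store: dict, user_id: str, message: str) -> None:
    """Encrypt locally, then hand only ciphertext to the (hypothetical) server."""
    server_store[user_id] = cipher.encrypt(message.encode())

def download(server_store: dict, user_id: str) -> str:
    """Fetch the ciphertext back and decrypt it locally with the device-held key."""
    return cipher.decrypt(server_store[user_id]).decode()

store = {}  # stands in for the provider's US-based storage
upload(store, "alice", "meet at 10")
assert download(store, "alice") == "meet at 10"
```

The catch, as noted above, is that a business whose model depends on reading that data — for ad targeting, say — cannot adopt this design, which is precisely the bind companies like Facebook are in.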

US-based services aren’t the only ones facing increasing legal uncertainty, either.

The UK, post-Brexit, is also classed as a third country (in EU law terms). And in a separate resolution today the parliament adopted a text on the UK adequacy agreement, granted earlier this year by the Commission, which raises objections to the arrangement — including by flagging a lack of GDPR enforcement in the UK as problematic.

On that front the parliament highlights how adtech complaints filed with the ICO have failed to yield a decision. (It writes that it’s concerned “non-enforcement is a structural problem” in the UK — which it suggests has left “a large number of data protection law breaches… [un]remedied”.)

It also calls out the UK’s surveillance regime, questioning its compatibility with the CJEU’s requirements for essential equivalence — while also raising concerns about the risk that the UK could undermine protections on EU citizens’ data via onward transfers to jurisdictions the EU does not have an adequacy agreement with, among other objections.

The Commission put a four-year lifespan on the UK’s adequacy deal — meaning there will be another major review ahead of any continuation of the arrangement in 2025.

It’s a far cry from the ‘hands-off’ fifteen years the EU-US ‘Safe Harbor’ agreement stood for, before a Schrems challenge finally led to the CJEU striking it down back in 2015. So the takeaway here is that data deals that allow for people’s information to leave Europe aren’t going to be allowed to stand unchecked for years; close scrutiny and legal accountability are now firmly up front — and will remain in the frame going forward.

The global nature of the Internet and the ease with which data can digitally flow across borders of course brings huge benefits for businesses — but the resulting interplay between different legal regimes is leading to increasing levels of legal uncertainty for companies seeking to take people’s data across borders.

In the EU’s case, the issue is that data protection is regulated within the bloc and these laws require that protection stays with people’s information, no matter where it goes. So if the data flows to countries that do not offer the same safeguards — be that the US or indeed China or India (or even the UK) — then the risk is that it can’t, legally, be taken there.

How to resolve this clash, between data protection laws based on individual privacy rights and data access mandates driven by national security priorities, has no easy answers.

For the US, and for the transatlantic data flows between the EU and the US, the Commission has warned there will be no quick fix this time — as happened when it slapped a sticking plaster atop the invalidated Safe Harbor, hailing a new ‘Privacy Shield’ regime; only for the CJEU to blast that out of the water for much the same reasons a few years later. (The parliament resolution is particularly withering in its assessment of the Commission’s historic missteps there.)

For a fix to stick, major reform of US surveillance law is going to be needed. And the Commission appears to have accepted that’s not going to come overnight, so it seems to be trying to brace businesses for turbulence…

The parliament’s resolution on Schrems II also makes it clear that it expects DPAs to step in and cut off risky data flows — with MEPs writing that “if no arrangement with the US is swiftly found which guarantees an essentially equivalent and therefore adequate level of protection to that provided by the GDPR and the Charter, that these transfers will be suspended until the situation is resolved”.

So if DPAs fail to do this — and if Ireland keeps dragging its feet on closing out the Schrems complaint — they should expect more resolutions to be blasted at them from the parliament.

MEPs emphasize the need for any future EU-US data transfer agreement “to address the problems identified by the Court ruling in a sustainable manner” — pointing out that “no contract between companies can provide protection from indiscriminate access by intelligence authorities to the content of electronic communications, nor can any contract between companies provide sufficient legal remedies against mass surveillance”.

“This requires a reform of US surveillance laws and practices with a view to ensuring that access of US security authorities to data transferred from the EU is limited to what is necessary and proportionate, and that European data subjects have access to effective judicial redress before US courts,” the parliament adds.

It’s still true that businesses may be able to legally move EU personal data out of the bloc. Even, potentially, to the US — depending on the type of business; the data itself; and additional safeguards that could be applied.

However for data-mining companies like Facebook — which are subject to FISA and whose businesses rely on accessing people’s data — then achieving essential equivalence with EU privacy protections looks, well, essentially impossible.

And while the parliament hasn’t made an explicit call in the resolution for Facebook’s EU data flows to be cut off that is the clear implication of it urging infringement proceedings against the DPC (and deploring “the absence of meaningful decisions and corrective measures” in the area of international transfers).

The parliament says it wants to see “solid mechanisms compliant with the CJEU judgement” set out — for the benefit of businesses with the chance to legally move data out of the EU — saying, for example, that the Commission’s proposal for a template for Standard Contractual Clauses (SCCs) should “duly take into account all the relevant recommendations of the EDPB”.

It also says it supports the creation of a toolbox of supplementary measures for such businesses to choose from — in areas like security and data protection certification; encryption safeguards; and pseudonymisation — so long as the measures included are accepted by regulators.

It also wants publicly available resources on the relevant legislation of the EU’s main trading partners, to give businesses that may still be able to legally move data out of the bloc guidance on doing so compliantly.

The overarching message here is that businesses should buckle up for disruption of cross-border data flows — and tool up for compliance, where possible.

In another segment of the resolution, for example, the parliament calls on the Commission to “analyse the situation of cloud providers falling under section 702 of the FISA who transfers data using SCCs” — going on to suggest that support for European alternatives to US cloud providers may be needed to plug “gaps in the protection of data of European citizens transferred to the United States” and “reduce the dependence of the Union in storage capacities vis-à-vis third countries and to strengthen the Union’s strategic autonomy in terms of data management and protection”.

#brexit, #china, #cloud, #data-mining, #data-protection, #data-protection-commission, #data-security, #encryption, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-parliament, #european-union, #facebook, #general-data-protection-regulation, #google, #india, #ireland, #lawsuit, #max-schrems, #noyb, #privacy, #safe-harbor, #surveillance-law, #twitter, #united-kingdom, #united-states

Facebook loses last ditch attempt to derail DPC decision on its EU-US data flows

Facebook has failed in its bid to prevent its lead EU data protection regulator from pushing ahead with a decision on whether to order suspension of its EU-US data flows.

The Irish High Court has just issued a ruling dismissing the company’s challenge to the Irish Data Protection Commission’s (DPC) procedures.

The case has huge potential operational significance for Facebook which may be forced to store European users’ data locally if it’s ordered to stop taking their information to the U.S. for processing.

Last September the Irish data watchdog made a preliminary order warning Facebook it may have to suspend EU-US data flows. Facebook responded by filing for a judicial review and obtaining a stay on the DPC’s procedure. That stay is now being lifted.

We understand the involved parties have been given a few days to read the High Court judgement ahead of another hearing on Thursday — when the court is expected to formally lift Facebook’s stay on the DPC’s investigation (and settle the matter of case costs).

The DPC declined to comment on today’s ruling in any detail — or on the timeline for making a decision on Facebook’s EU-US data flows — but deputy commissioner Graham Doyle told us it “welcomes today’s judgment”.

Its preliminary suspension order last fall followed a landmark judgement by Europe’s top court in the summer — when the CJEU struck down a flagship transatlantic agreement on data flows, on the grounds that US mass surveillance is incompatible with the EU’s data protection regime.

The fall-out from the CJEU’s invalidation of Privacy Shield (as well as an earlier ruling striking down its predecessor Safe Harbor) has been ongoing for years — as companies that rely on shifting EU users’ data to the US for processing have had to scramble to find valid legal alternatives.

While the CJEU did not outright ban data transfers out of the EU, it made it crystal clear that data protection agencies must step in and suspend international data flows if they suspect EU data is at risk. And EU to US data flows were signalled as at clear risk given the court simultaneously struck down Privacy Shield.

The problem for some businesses is that there may simply not be a valid legal alternative. And that’s where things look particularly sticky for Facebook, since its service falls under NSA surveillance via Section 702 of FISA (which is used to authorize mass surveillance programs like Prism).

So what happens now for Facebook, following the Irish High Court ruling?

As ever in this complex legal saga — which has been going on in various forms since an original 2013 complaint made by European privacy campaigner Max Schrems — there’s still some track left to run.

Once the stay is lifted the DPC will have two enquiries in train: both the original one, related to Schrems’ complaint, and an own volition enquiry it decided to open last year — when it said it was pausing investigation of Schrems’ original complaint.

Schrems, via his privacy not-for-profit noyb, filed for his own judicial review of the DPC’s proceedings. And the DPC quickly agreed to settle — agreeing in January that it would ‘swiftly’ finalize Schrems’ original complaint. So things were already moving.

The tl;dr of all that is this: The last of the bungs which have been used to delay regulatory action in Ireland over Facebook’s EU-US data flows are finally being extracted — and the DPC must decide on the complaint.

Or, to put it another way, the clock is ticking for Facebook’s EU-US data flows. So expect another wordy blog post from Nick Clegg very soon.

Schrems previously told TechCrunch he expects the DPC to issue a suspension order against Facebook within months — perhaps as soon as this summer (and failing that by fall).

In a statement reacting to the Court ruling today he reiterated that position, saying: “After eight years, the DPC is now required to stop Facebook’s EU-US data transfers, likely before summer. Now we simply have two procedures instead of one.”

When Ireland (finally) decides, it won’t mark the end of the regulatory procedures, though.

A decision by the DPC on Facebook’s transfers would need to go to the other EU DPAs for review — and if there’s disagreement there (as seems highly likely, given what’s happened with draft DPC GDPR decisions) it will trigger a further delay (weeks to months) as the European Data Protection Board seeks consensus.

If a majority of EU DPAs can’t agree, the Board may itself have to cast a deciding vote. So that could extend the timeline around any suspension order. But an end to the process is, at long last, in sight.

And, well, if a critical mass of domestic pressure is ever going to build for pro-privacy reform of U.S. surveillance laws now looks like a really good time…

“We now expect the DPC to issue a decision to stop Facebook’s data transfers before summer,” added Schrems. “This would require Facebook to store most data from Europe locally, to ensure that Facebook USA does not have access to European data. The other option would be for the US to change its surveillance laws.”

Facebook has been contacted for comment on the Irish High Court ruling.

Update: The company has now sent us this statement:

“Today’s ruling was about the process the IDPC followed. The larger issue of how data can move around the world remains of significant importance to thousands of European and American businesses that connect customers, friends, family and employees across the Atlantic. Like other companies, we have followed European rules and rely on Standard Contractual Clauses, and appropriate data safeguards, to provide a global service and connect people, businesses and charities. We look forward to defending our compliance to the IDPC, as their preliminary decision could be damaging not only to Facebook, but also to users and other businesses.”

#data-protection, #data-security, #digital-rights, #dpc, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-union, #facebook, #human-rights, #ireland, #lawsuit, #max-schrems, #nick-clegg, #noyb, #policy, #privacy, #safe-harbor, #united-states

TikTok removes 500k+ accounts in Italy after DPA order to block underage users

Video sharing social network TikTok has removed more than 500,000 accounts in Italy following an intervention by the country’s data protection watchdog earlier this year ordering it to recheck the age of all Italian users and block access to any under the age of 13.

Between February 9 and April 21 more than 12.5M Italian users were asked to confirm that they are over 13 years old, according to the regulator.

Online age verification remains a hard problem and it’s not clear how many of the removed accounts definitively belonged to under-13s. The regulator said today that TikTok removed the 500,000+ accounts because the users were “likely” to be under the age of 13: around 400,000 because they declared an age under 13, and 140,000 through what the DPA describes as “a combination of moderation and reporting tools” implemented within the app.

TikTok has also agreed to take a series of additional measures to strengthen its ability to detect and block underage users — including potentially developing AI tools to help it identify when children are using the service.

Reached for comment, TikTok sent us a statement confirming that it is trialling “additional measures to help ensure that only users aged 13 or over are able to use TikTok”.

Here’s the statement, which TikTok attributed to Alexandra Evans, its head of child safety in Europe:

“TikTok’s top priority is protecting the privacy and safety of our users, and in particular our younger users. Following continued engagement with the Garante, we will be trialling additional measures to help ensure that only users aged 13 or over are able to use TikTok.

“We already take industry-leading steps to promote youth safety on TikTok such as setting accounts to private by default for users aged under 16 and enabling parents to link their account to their teen’s through Family Pairing. There is no finish line when it comes to safety, and we continue to evaluate and improve our policies, processes and systems, and consult with external experts.”

Italy’s data protection regulator made an emergency intervention in January — ordering TikTok to recheck the age of all users and block any users whose age it could not verify. The action followed reports in local media about a 10-year-old girl from Palermo who died of asphyxiation after participating in a “blackout challenge” on the social network.

Among the beefed up measures TikTok has agreed to take is a commitment to act faster to remove underage users — with the Italian DPA saying the platform has guaranteed it will cancel reported accounts it verifies as belonging to under 13s within 48 hours.

The regulator said TikTok has also committed to “study and develop” solutions — which may include the use of artificial intelligence — to “minimize the risk of children under 13 using the service”.

TikTok has also agreed to launch ad campaigns, both in-app and through radio and newspapers in Italy, to raise awareness about safe use of the platform and get the message out that it is not suitable for under-13s — including targeting this messaging at underage users themselves, in a language and format likely to engage them.

The social network has also agreed to share information with the regulator on the effectiveness of the various experimental measures — and to work with it to identify the best ways of keeping underage users off the service.

The DPA said it will continue to monitor TikTok’s compliance with its commitments.

Prior to the Garante’s action, TikTok’s age verification checks had been widely criticized as trivially easy for kids to circumvent — with children merely needing to input a false birth date indicating they were older than 13 to pass the age gate and access the service.
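To see why a self-declared birth date is such a weak signal, consider a minimal sketch of that style of age gate — hypothetical code, not TikTok’s actual implementation — where the check is only as truthful as the date the child types in:

```python
# Sketch of a self-declaration age gate (illustrative; not TikTok's code).
# The gate is trivially circumvented because it trusts the entered birth date.
from datetime import date

MIN_AGE = 13

def age_on(birth_date: date, today: date) -> int:
    """Completed years between birth_date and today."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def gate_allows(declared_birth_date: date) -> bool:
    """Allow access if the *declared* date implies the user is 13 or older."""
    return age_on(declared_birth_date, date.today()) >= MIN_AGE

# A 10-year-old who types a 1990 birth date sails straight through:
assert gate_allows(date(1990, 1, 1))
```

Hence the Garante pushing TikTok toward stronger signals — moderation, user reporting and possibly AI-based age estimation — layered on top of the declaration.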

A wider investigation that the DPA opened into TikTok’s handling and processing of children’s data last year remains ongoing.

The regulator announced it had begun proceedings against the platform in December 2020, following months of investigation, saying then that it believed TikTok was not complying with EU data protection rules which set stringent requirements for processing children’s data.

In January the Garante also called for the European Data Protection Board to set up an EU taskforce to investigate concerns about the risks of children’s use of the platform — highlighting similar concerns being raised by other agencies in Europe and the U.S.

In February the European consumer rights organization, BEUC, also filed a series of complaints against TikTok, including in relation to its handling of kids’ data.

Earlier this year TikTok announced plans to bring in outside experts in the region to help with content moderation and said it would open a ‘transparency’ center in Europe where outside experts could get information on its content, security and privacy policies.


#age-verification, #artificial-intelligence, #beuc, #children, #childrens-data, #europe, #european-data-protection-board, #european-union, #italy, #policy, #privacy, #social, #social-network, #tiktok

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefitting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics accusing commissioner Helen Dixon and her team of failing to investigate scores of complaints, taking years over the probes it does open, and opting for weak enforcement at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review — but a resolution has yet to see the light of day, almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ’emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal basis for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent it’s not meeting the bar either because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.


#data-protection, #data-protection-commission, #data-protection-law, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #helen-dixon, #ireland, #privacy, #privacy-policy, #social, #social-media, #terms-of-service, #whatsapp

Europe lays out plan for risk-based AI rules to boost trust and uptake

European Union lawmakers have presented their risk-based proposal for regulating high risk applications of artificial intelligence within the bloc’s single market.

The plan includes prohibitions on a small number of use-cases that are considered too dangerous to people’s safety or EU citizens’ fundamental rights, such as a China-style social credit scoring system or certain types of AI-enabled mass surveillance.

Most uses of AI won’t face any regulation (let alone a ban) under the proposal but a subset of so-called “high risk” uses will be subject to specific regulatory requirements, both ex ante and ex post.

There are also transparency requirements for certain use-cases — such as chatbots and deepfakes — where EU lawmakers believe that potential risk can be mitigated by informing users that they are interacting with something artificial.

The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an excellence ecosystem that’s aligned with European values.

“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said EVP Margrethe Vestager, announcing adoption of the proposal at a press conference.

“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”

Under the proposal, mandatory requirements are attached to a “high risk” category of applications of AI — meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights (such as the right to non-discrimination).

Examples of high risk AI use-cases that will be subject to the highest level of regulation on use are set out in annex 3 of the regulation — which the Commission said it will have the power to expand by delegated acts, as use-cases of AI continue to develop and risks evolve.

For now cited high risk examples fall into the following categories: Biometric identification and categorisation of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes.

Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.

The makers of high risk applications will have a set of ex ante obligations to comply with before bringing their product to market, including around the quality of the data-sets used to train their AIs and a level of human oversight over not just design but use of the system — as well as ongoing, ex post requirements, in the form of post-market surveillance.

Commission officials suggested the vast majority of applications of AI will fall outside this highly regulated category. Makers of those ‘low risk’ AI systems will merely be encouraged to adopt (non-legally binding) codes of conduct on use.

Penalties for infringing the rules on specific AI use-case bans have been set at up to 6% of global annual turnover or €30M (whichever is greater), while violations of the rules related to high risk applications can scale up to 4% (or €20M).
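Because both tiers pair a percentage of turnover with a monetary floor, maximum exposure scales with company size. A quick illustrative calculation (turnover figures invented; both tiers read here as ‘whichever is greater’ caps, as the text above states for the first):

```python
# Illustrative calculation of the proposal's maximum fines (figures hypothetical).

def max_fine(global_annual_turnover_eur: float, prohibited_use: bool) -> float:
    """Upper bound on the penalty for a single infringement under the proposal."""
    if prohibited_use:  # breach of a specific AI use-case ban
        return max(0.06 * global_annual_turnover_eur, 30_000_000)
    # breach of the rules for 'high risk' applications
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A company with EUR 80B global turnover deploying a banned system:
print(f"{max_fine(80e9, prohibited_use=True):,.0f}")   # 4,800,000,000
# A small firm (EUR 50M turnover) breaching the high risk requirements:
print(f"{max_fine(50e6, prohibited_use=False):,.0f}")  # 20,000,000 (floor applies)
```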

Enforcement will involve multiple agencies in each EU Member State — with the proposal intending oversight be carried out by existing (relevant) agencies, such as product safety bodies and data protection agencies.

That raises immediate questions over adequate resourcing of national bodies, given the additional work and technical complexity they will face in policing the AI rules; and also how enforcement bottlenecks will be avoided in certain Member States. (Notably, the EU’s General Data Protection Regulation is also overseen at the Member State level and has suffered from lack of uniformly vigorous enforcement.)

There will also be an EU-wide database set up to create a register of high risk systems implemented in the bloc (which will be managed by the Commission).

A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support a consistent application of the regulation — in a mirror to the European Data Protection Board which offers guidance for applying the GDPR.

In step with rules on certain uses of AI, the plan includes measures to co-ordinate EU Member State support for AI development — such as by establishing regulatory sandboxes to help startups and SMEs develop and test AI-fuelled innovations — and via the prospect of targeted EU funding to support AI developers.

Internal market commissioner Thierry Breton said investment is a crucial piece of the plan.

“Under our Digital Europe and Horizon Europe program we are going to free up a billion euros per year. And on top of that we want to generate private investment and a collective EU-wide investment of €20BN per year over the coming decade — the ‘digital decade’ as we have called it,” he said. “We also want to have €140BN which will finance digital investments under Next Generation EU [COVID-19 recovery fund] — and going into AI in part.”

Shaping rules for AI has been a key priority for EU president Ursula von der Leyen who took up her post at the end of 2019. A white paper was published last year, following a 2018 AI for EU strategy — and Vestager said that today’s proposal is the culmination of three years’ work.

Breton added that providing guidance for businesses to apply AI will give them legal certainty and Europe an edge. “Trust… we think is vitally important to allow the development we want of artificial intelligence,” he said. “[Applications of AI] need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”

A version of today’s proposal leaked last week — leading to calls by MEPs to beef up the plan, such as by banning remote biometric surveillance in public places.

In the event the final proposal does treat remote biometric surveillance as a particularly high risk application of AI — and there is a prohibition in principle on the use of the technology in public by law enforcement.

However use is not completely proscribed, with a number of exceptions where law enforcement would still be able to make use of it, subject to a valid legal basis and appropriate oversight.

Today’s proposal kicks off the start of the EU’s co-legislative process, with the European Parliament and Member States via the EU Council set to have their say on the draft — meaning a lot could change ahead of agreement on a final pan-EU regulation.

Commissioners declined to give a timeframe for when legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could be done asap. It could, nonetheless, be several years before the AI regulation is ratified and in force.


#ai, #artificial-intelligence, #digital-regulation, #europe, #european-data-protection-board, #european-union, #general-data-protection-regulation, #law-enforcement, #margrethe-vestager, #policy, #science-and-technology, #thierry-breton, #ursula-von-der-leyen

EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
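Pulling those recitals together, the two-step logic reads roughly like the sketch below — a hypothetical rendering for illustration only, since the draft defines no algorithm and no numeric thresholds. Step one asks whether the intended purpose may cause one of the listed harms; step two weighs the severity and probability of the possible harm.

```python
# Hypothetical sketch of the leaked draft's two-step 'high risk' classification.
# The regulation prescribes no algorithm; this just renders the recitals' logic.
from dataclasses import dataclass

LISTED_HARMS = {
    "injury_or_death", "property_damage", "systemic_societal_impact",
    "essential_service_disruption", "opportunity_impact",
    "public_service_access_impact", "fundamental_rights_impact",
}

@dataclass
class IntendedUse:
    purpose: str          # the use the system is intended for
    possible_harms: set   # which of the listed harms it may cause
    severity: float       # 0..1 - how severe the possible harm is
    probability: float    # 0..1 - how likely the harm is to occur

def is_high_risk(use: IntendedUse, threshold: float = 0.25) -> bool:
    """Step 1: may it cause a listed harm? Step 2: weigh severity x probability.
    The numeric threshold is invented purely for the sake of the example."""
    if not (use.possible_harms & LISTED_HARMS):
        return False
    return use.severity * use.probability >= threshold

recruiter = IntendedUse("cv_screening", {"opportunity_impact"}, 0.7, 0.5)
print(is_high_risk(recruiter))  # True, under these made-up numbers
```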

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.

Other requirements include security measures, and that the AI achieves consistent accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before last year’s White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

The envisaged system of conformity assessments for all high risk AIs is ongoing, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market, and to conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
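In engineering terms these obligations look straightforward to satisfy. Here is a minimal sketch — hypothetical, not anything the draft prescribes — of how a service might meet both: disclosing the bot up front, and attaching a label to synthetic media:

```python
# Hypothetical sketch of the draft's two transparency obligations:
# (1) tell people they are interacting with an AI; (2) label synthetic media.

BOT_DISCLOSURE = "You are chatting with an automated system, not a human."

def start_chat_session(reply_fn):
    """Wrap a chatbot so the first thing the user sees is the disclosure."""
    def session(user_message: str) -> list:
        return [BOT_DISCLOSURE, reply_fn(user_message)]
    return session

def label_synthetic_media(media_metadata: dict) -> dict:
    """Attach machine- and human-readable 'artificially generated' labels."""
    labelled = dict(media_metadata)  # don't mutate the caller's copy
    labelled["synthetic"] = True
    labelled["notice"] = "This content was artificially created or manipulated."
    return labelled

chat = start_chat_session(lambda msg: f"Echo: {msg}")
print(chat("hello"))  # disclosure first, then the bot's reply
```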

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen and violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate?

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.


#ai, #artificial-intelligence, #behavioral-advertising, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #facial-recognition, #general-data-protection-regulation, #policy, #regulation, #tc

Further delay to GDPR enforcement of 2018 Twitter breach

Twitter users will have to wait longer to find out what penalties, if any, the platform faces under the European Union’s General Data Protection Regulation (GDPR) for a data breach that dates back around two years.

In the meantime the platform has continued to suffer security failures — including just last month, when hackers gained control of scores of verified accounts and tweeted out a crypto scam.

The tech firm’s lead regulator in the region, Ireland’s Data Protection Commission (DPC), began investigating an earlier Twitter breach in November 2018 — completing the probe earlier this year and submitting a draft decision to other EU DPAs for review in May, just ahead of the second anniversary of the GDPR’s application.

In a statement on the development, Graham Doyle, the DPC’s deputy commissioner, told TechCrunch: “The Irish Data Protection Commission (DPC) issued a draft decision to other Concerned Supervisory Authorities (CSAs) on 22 May 2020, in relation to this inquiry into Twitter. A number of objections were raised by CSAs and the DPC engaged in a consultation process with them. However, following consultation a number of objections were maintained and the DPC has now referred the matter to the European Data Protection Board (EDPB) under Article 65 of the GDPR.”

Under the regulation’s one-stop-shop mechanism, cross-border cases are handled by a lead regulator — typically where the business has established its regional base. For many tech companies that means Ireland, so the DPC has an oversized role in the regulation of Silicon Valley’s handling of people’s data.

This means it now has a huge backlog of highly anticipated complaints relating to tech giants including Apple, Facebook, Google, LinkedIn and indeed Twitter. The regulator also continues to face criticism for not yet ‘getting it over the line’ in any of these complaints and investigations pertaining to big tech. So the Twitter breach case is being especially closely watched as it looks set to be the Irish DPC’s first enforcement decision in a cross-border GDPR case.

Last year commissioner Helen Dixon said the first of these decisions would be coming “early” in 2020. In the event, we’re past the halfway mark of the year with still no enforcement to show for it. Though the DPC emphasizes the need to follow due process to ensure final decisions stand up to any challenge.

The latest delay in the Twitter case is a consequence of disagreements between the DPC and other regional watchdogs which, under the rules of GDPR, have a right to raise objections on a draft decision where users in their countries are also affected.

It’s not clear what specific objections have been raised to the DPC’s draft Twitter decision, or indeed what Ireland’s regulator has decided in what should be a relatively straightforward case, given it’s a breach — not a complaint about a core element of a data-mining business model.

Far more complex complaints are still sitting on the DPC’s desk. Doyle confirmed that a complaint pertaining to WhatsApp’s legal basis for sharing user data with Facebook remains the next most progressed in the stack, for example.

So, given the DPC’s Twitter breach draft decision hasn’t been universally accepted by Europe’s data watchdogs, it’s all but inevitable the Facebook-WhatsApp complaint will go through the same objections process. Ergo, expect more delays.

Article 65 of the GDPR sets out a process for handling objections to draft decisions. It allows one month for DPAs to reach a two-thirds majority, with the possibility of a further one-month extension — which would push a decision on the Twitter case into late October.

If there still aren’t enough votes in favor at that point, a further two weeks are allowed for EDPB members to reach a simple majority. If DPAs are still split, the Board chair, currently Andrea Jelinek, has the deciding vote. So the body’s role in major decisions over big tech looks set to be pivotal.
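
To make those stacked deadlines concrete, here’s a minimal sketch in Python of how the Article 65 clock could run. The referral date is our assumption purely for illustration (the DPC hasn’t published the exact date), and a calendar month is approximated as 30 days:

```python
from datetime import date, timedelta

# Hypothetical referral date: an assumption for illustration only.
referral = date(2020, 8, 20)
MONTH = timedelta(days=30)  # approximating a calendar month as 30 days

# Article 65: one month for the EDPB to adopt a binding decision by a
# two-thirds majority, extendable by a further month for complex cases.
two_thirds_deadline = referral + MONTH
extended_deadline = two_thirds_deadline + MONTH

# Failing that, a further two weeks to decide by simple majority, with the
# chair holding the casting vote if the authorities are still split.
simple_majority_deadline = extended_deadline + timedelta(weeks=2)

print("Two-thirds majority due by: ", two_thirds_deadline)       # ~late September
print("With extension, due by:     ", extended_deadline)         # ~late October
print("Simple-majority fallback by:", simple_majority_deadline)  # ~early November
```

On those assumed dates the fallback window lands in early November — which is how a breach reported in late 2018 can end up without a final decision a full two years later.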

We’ve reached out to the EDPB with questions related to the Twitter objections and will update this report with any response.

The Article 65 process exists to try to find consensus across a patchwork of national and regional data supervisors. But it won’t silence critics who argue the GDPR cannot be applied fast enough to uphold EU citizens’ rights in the face of fast-iterating data-mining giants.

To wit: Given the latest developments, a final decision on the Twitter breach could be delayed until November — a full two years after the investigation began.

Earlier this summer a two-year review of GDPR by the European Commission, meanwhile, highlighted a lack of uniformly vigorous enforcement. Though commissioners signalled a willingness to wait and see how the one-stop-shop mechanism runs its course on cross-border cases, while admitting there’s a need to reinforce cooperation and co-ordination on cross-border issues.

“We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” justice commissioner, Didier Reynders, said in June.

“The best answer will be a decision from the Irish data protection authority about important cases,” he added then.

#andrea-jelinek, #data-protection, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #helen-dixon, #ireland, #linkedin, #privacy, #social, #twitter, #whatsapp

EU websites’ use of Google Analytics and Facebook Connect targeted by post-Schrems II privacy complaints

A month after Europe’s top court struck down a flagship data transfer arrangement between the EU and the US as unsafe, European privacy campaign group noyb has filed complaints with regional regulators against 101 websites which it’s identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations.

Among the entities listed in its complaint are ecommerce companies, publishers & broadcasters, telcos & ISPs, banks and universities — including Airbnb Ireland, Allied Irish Banks, Danske Bank, Fastweb, MTV Internet, Sky Deutschland, Takeaway.com and Tele2, to name a few.

“A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) — despite both companies clearly falling under US surveillance laws, such as FISA 702,” the campaign group writes on its website.

“Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the ‘Privacy Shield’ a month after it was invalidated, while Facebook continues to use the ‘SCCs’ [Standard Contractual Clauses], despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.”

We’ve reached out to Facebook and Google with questions about their legal bases for such transfers — and will update this report with any response.

Privacy watchers will know that noyb’s founder, Max Schrems, was responsible for the original legal challenge that took down an earlier EU-US data arrangement, Safe Harbor, all the way back in 2015. His updated complaint ended up taking down the EU-US Privacy Shield last month — although he’d actually targeted Facebook’s use of a separate data transfer mechanism (SCCs), urging its data supervisor, Ireland’s DPC, to step in and suspend its use of that tool.

The regulator chose to go to court instead, raising wider concerns about the legality of EU-US data transfer arrangements — which resulted in the CJEU concluding that the Commission should not have granted the US a so-called ‘adequacy agreement’, thus pulling the rug out from under Privacy Shield.

The decision means the US is now what’s considered a ‘third country’ in data protection terms, with no special arrangement to enable it to process EU users’ information.

More than that, the court’s ruling also made it clear EU data watchdogs have a responsibility to intervene where they suspect there are risks to EU people’s data if it’s being transferred to a third country via SCCs.

European data watchdogs swiftly warned there would be no grace period for entities still illegally relying on Privacy Shield — so anyone listed in the above complaint that’s still referencing the defunct mechanism in their privacy policy won’t even have a proverbial figleaf to hide their legal blushes.

noyb’s contention with this latest clutch of complaints is that none of the aforementioned 101 websites has a valid legal basis to keep transferring visitor data to the US via the embedded Google Analytics and/or Facebook Connect integrations.

“We have done a quick search on major websites in each EU member state for code from Facebook and Google. These code snippets forward data on each visitor to Google or Facebook. Both companies admit that they transfer data of Europeans to the US for processing, where these companies are under a legal obligation to make such data available to US agencies like the NSA. Neither Google Analytics nor Facebook Connect are essential to run these webpages and are services that could have been replaced or at least deactivated by now,” said Schrems, honorary chair of noyb.eu, in a statement.
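noyb hasn’t published its scanning tooling, but the first-pass check Schrems describes is easy to picture. Here’s a minimal sketch that greps a page’s HTML for the publicly known script hosts of the two embeds; the host list and target URL are our illustrative assumptions, not noyb’s actual method:

```python
import urllib.request

# Script hosts the standard Google Analytics and Facebook Connect embeds load
# from. A real audit would also parse inline snippets and follow tag managers;
# this sketch only greps the raw HTML as a first-pass check.
TRACKER_HOSTS = {
    "Google Analytics": ["google-analytics.com", "googletagmanager.com"],
    "Facebook Connect": ["connect.facebook.net"],
}

def scan_page(url: str) -> dict:
    """Fetch a page and report which tracker hosts appear in its HTML source."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return {name: [h for h in hosts if h in html]
            for name, hosts in TRACKER_HOSTS.items()}

if __name__ == "__main__":
    # Hypothetical target URL; substitute any page you want to audit.
    for name, found in scan_page("https://example.com").items():
        print(f"{name}: {', '.join(found) or 'not found'}")
```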

Since the CJEU’s Schrems II ruling, and indeed since the Safe Harbor strike down, the US Department of Commerce and European Commission have put their heads together — signalling they intend to try cobbling together another data pact to replace the defunct Privacy Shield (which replaced the blasted-to-smithereens (un)Safe Harbor. So, er…).

Yet without root-and-branch reform of US surveillance law, any third pop by respective lawmakers at papering over the legal schism of US national security priorities vs EU privacy rights is just as surely doomed to fail.

The more cynical among you might say the high-level administrative manoeuvres around this topic are, in fact, simply intended to buy more time — for the data to keep flowing and ‘business as usual’ to continue.

But there is now substantial legal risk attached to a strategy of trying to pretend US surveillance law doesn’t exist.

Here’s Schrems again, on last month’s CJEU ruling, suggesting that Facebook and Google could be in the frame for legal liability if they don’t proactively warn EU customers of their data responsibilities: “The Court was explicit that you cannot use the SCCs when the recipient in the US falls under these mass surveillance laws. It seems US companies are still trying to convince their EU customers of the opposite. This is more than shady. Under the SCCs the US data importer would instead have to inform the EU data sender of these laws and warn them. If this is not done, then these US companies are actually liable for any financial damage caused.”

And as noyb’s press release notes, GDPR’s penalties regime can scale as high as 4% of the worldwide turnover of the EU sender and the US recipient of personal data. So, again, hi Facebook, hi Google…

The crowdfunded campaign group has pledged to continue dialling up the pressure on EU regulators to act and on EU data processors to review any US data transfer arrangements — and “adapt to the clear ruling by the EU’s supreme court”, as it puts it.

Other types of legal action are also starting to draw on Europe’s General Data Protection Regulation (GDPR) framework — and, importantly, attract funding — such as two class action style suits filed against Oracle and Salesforce’s use of tracking cookies earlier this month. (As we said when GDPR came into force back in 2018, the lawsuits are coming.)

Now, with two clear strikes from the CJEU on the issue of US surveillance law vs EU data protection, it looks like it’ll be diminishing returns for US tech giants hoping to pretend everything’s okay on the data processing front.

noyb is also putting its money where its mouth is — offering free guidelines and model requests to help EU entities get their data affairs in prompt legal order.

“While we understand that some things may need some time to rearrange, it is unacceptable that some players seem to simply ignore Europe’s top court,” Schrems added, in further comments on the latest flotilla of complaints. “This is also unfair towards competitors that comply with these rules. We will gradually take steps against controllers and processors that violate the GDPR and against authorities that do not enforce the Court’s ruling, like the Irish DPC that stays dormant.”

We’ve reached out to Ireland’s Data Protection Commission to ask what steps it will be taking in light of the latest noyb complaints, a number of which target websites that appear to be operated by an Ireland-based legal entity.

Schrems’ original 2013 complaint against Facebook’s use of SCCs also ended up in Ireland, where the tech giant — and many others — locates its EU HQ. Schrems’ request that the DPC order Facebook to suspend its use of SCCs still hasn’t been fulfilled, some seven years and five complaints later. And the regulator continues to face accusations of inaction, given the growing backlog of cross-border GDPR complaints against tech giants like Facebook and Google.

Ireland’s DPC has yet to issue a single final decision on any of these major GDPR complaints. But the legal pressure on it, and all EU regulators, to get a move on and enforce the bloc’s law will only increase, even as class action style lawsuits are filed to try to do what regulators have so far failed to do.

Earlier this summer the Commission acknowledged a lack of uniformly “vigorous” enforcement of GDPR in a review of the mechanism’s first two years of operation.

“The European Data Protection Board [EDPB] and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement,” said Věra Jourová, Commission VP for values and transparency then, giving the Commission’s first public assessment of whether GDPR is working.

We’ve also reached out to France’s CNIL to ask what action it will be taking in light of the noyb complaints.

Following the judgement in July the French regulator said it was “conducting a precise analysis”, along with the EDPB, with a view to “drawing conclusions as soon as possible on the consequences of the ruling for data transfers from the European Union to the United States”.

Since then the EDPB guidance has come out — inking the obvious: That transfers on the basis of Privacy Shield “are illegal”. And while the CJEU ruling did not invalidate the use of SCCs it gave only a very qualified green light to continued use.

As we reported last month, the ability to use SCCs to transfer data to the U.S. hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.

“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place,” the EDPB added.

#airbnb, #campaign, #cjeu, #data-controller, #data-protection, #data-security, #digital-rights, #ecommerce, #eu-us-privacy-shield, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #france, #gdpr, #general-data-protection-regulation, #html, #human-rights, #ireland, #lawsuit, #max-schrems, #noyb, #oracle, #privacy, #privacy-shield, #safe-harbor, #salesforce, #sccs, #schrems-ii, #takeaway-com, #tc, #united-states, #us-department-of-commerce, #vera-jourova

No grace period after Schrems II Privacy Shield ruling, warn EU data watchdogs

European data watchdogs have issued updated guidance in the wake of last week’s landmark ruling striking down a flagship transatlantic data transfer mechanism called Privacy Shield.

In an FAQ on the Schrems II judgement, the European Data Protection Board (EDPB) warns there will be no regulatory grace period.

The top-line message: the EU-US Privacy Shield is dead, and any companies still relying on it to authorize transfers of EU citizens’ personal data are doing so illegally.

“Transfers on the basis of this legal framework are illegal,” warns the EDPB baldly. Entities that wish to keep on transferring personal data to the U.S. need to use an alternative mechanism — but must first determine whether they can meet the legal requirement to protect the data from US surveillance.

What alternatives are there? Standard Contractual Clauses (SCCs) were not invalidated by the CJEU ruling. Binding Corporate Rules (BCRs) are also still technically available.

But in both cases would-be data exporters must conduct an upfront analysis to ascertain whether they can in fact legally use these tools to move data in their specific context.

Anyone who is already using SCCs for the transfer of EU citizens’ data to the US (hi Facebook!) isn’t exempt from carrying out an assessment — and needs to inform the relevant supervisory authority if they intend to keep using the mechanism.

The rub here for US transfers is that the CJEU judges invalidated Privacy Shield on the grounds that US surveillance laws fundamentally clash with EU privacy rights. So, in other words, Houston, you have a privacy problem…

“The Court found that U.S. law (i.e., Section 702 FISA [Foreign Intelligence Surveillance Act] and EO [Executive Order] 12333) does not ensure an essentially equivalent level of protection,” warns the EDPB in answer to the (expected) frequently asked question: “I am using SCCs with a data importer in the U.S., what should I do?”.

“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place.”

The ability to use SCCs to transfer data to the US hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.

If an EU-US data exporter can’t be confident of that, they are required to pull the plug on the data transfer. No ifs, no buts.

Meanwhile, those who believe they can offer a legal guarantee of “appropriate safeguards” — and thus intend to keep transferring data to the US via SCCs — must notify the relevant data watchdog. So there’s no option to carry on ‘as normal’ without informing the regulator.

It’s the same story with BCRs — on which the EDPB notes: “Given the judgment of the Court, which invalidated the Privacy Shield because of the degree of interference created by the law of the U.S. with the fundamental rights of persons whose data are transferred to that third country, and the fact that the Privacy Shield was also designed to bring guarantees to data transferred with other tools such as BCRs, the Court’s assessment applies as well in the context of BCRs, since U.S. law will also have primacy over this tool.”

So, again, a case by case assessment is required to figure out whether you can be legally confident in offering the required level of protection.
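Stripped to its bare logic, the EDPB’s guidance reads like a short decision procedure. Here’s a minimal sketch of that flow (the type and field names are ours for illustration; real assessments are legal judgments, not booleans):

```python
from dataclasses import dataclass

@dataclass
class TransferAssessment:
    """Result of a case-by-case check on a transfer relying on SCCs or BCRs."""
    third_country: str
    surveillance_law_impinges: bool       # e.g. importer subject to FISA 702 / EO 12333
    supplementary_measures_suffice: bool  # can extra safeguards restore protection?

def may_keep_transferring(a: TransferAssessment) -> bool:
    """Post-Schrems II logic per the EDPB FAQ: continue only if protection is
    essentially equivalent, either as-is or via supplementary measures; those
    who carry on must notify the relevant supervisory authority."""
    if not a.surveillance_law_impinges:
        return True
    return a.supplementary_measures_suffice

# Example: a US importer caught by Section 702 FISA with no effective extra
# safeguards in place. The exporter is required to pull the plug.
us_case = TransferAssessment("United States", True, False)
assert may_keep_transferring(us_case) is False
```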

#data-controller, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-union, #foreign-intelligence-surveillance-act, #human-rights, #policy, #privacy, #schrems-ii, #tc, #united-states

Legal clouds gather over US cloud services, after CJEU ruling

In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.

Countries like the U.S.

The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.

Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.

That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.

Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.

Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.

The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case by case analysis would be key.

The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.

In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.

The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.

The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.

“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.

“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.

Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”

“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.

Part of the complexity of enforcing Europe’s data protection rules is the lack of a single authority: instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.

Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.

Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.

In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”

He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”

Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)

Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.

In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.

In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.

“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].

“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.

Both DPAs warned the ruling has implications for the use of cloud services where data is processed in other third countries where the protection of EU citizens’ data likewise cannot be guaranteed, i.e. not just the U.S.

On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.

“Now is the time for Europe’s digital independence,” she added.

Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.

Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB), aka the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”

Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.

In the wake of the CJEU ruling, data exporters and importers are required to assess a third country’s data regime against EU legal standards before using SCCs to transfer data there.

“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.

“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”

Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.

Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if they intend to continue transferring data.

In her roundabout way, she also warns that DPAs now have a clear obligation to step in and suspend or prohibit transfers under SCCs where the safety of data cannot be guaranteed in a third country.

“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.

One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.

In its place, a sense of déjà vu and a lot more work for lawyers.

#berlin, #china, #cloud, #cloud-services, #data-protection-law, #data-transmission, #digital-rights, #eu-us-privacy-shield, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #general-data-protection-regulation, #germany, #google, #hamburg, #human-rights, #india, #ireland, #law, #mass-surveillance, #max-schrems, #microsoft, #personal-data, #privacy, #russia, #safe-harbour, #schrems-ii, #tc, #twitter, #united-states, #us-government