Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts where it could not verify that the users were over 13.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, putting an age cap on children’s ability to consent to their data being used. The exact age varies per EU Member State: the regulation’s default is 16, and Member States can lower it, but no lower than 13.
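
For a concrete sense of how that rule plays out in practice, here’s a minimal sketch (ours, not TikTok’s) of an age-of-consent check; the per-country thresholds are illustrative placeholders and the only hard constraint from the regulation is the 13–16 band:

```python
# Illustrative sketch only: deciding whether parental consent is needed before
# relying on a child's consent under GDPR Article 8. The per-country values
# below are placeholders, not an authoritative list; the regulation's default
# is 16 and Member States may lower it, but not below 13.

GDPR_DEFAULT_AGE = 16
GDPR_FLOOR_AGE = 13

MEMBER_STATE_AGE = {  # hypothetical overrides for the example
    "IE": 16,
    "FR": 15,
    "ES": 14,
}

def requires_parental_consent(user_age: int, country: str) -> bool:
    """True if consent must come from a parent or guardian."""
    threshold = MEMBER_STATE_AGE.get(country, GDPR_DEFAULT_AGE)
    threshold = max(threshold, GDPR_FLOOR_AGE)  # no Member State may go below 13
    return user_age < threshold

print(requires_parental_consent(14, "FR"))  # True: below the example threshold of 15
print(requires_parental_consent(14, "ES"))  # False: meets the example threshold of 14
```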

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data — such as flipping default settings so younger teens’ accounts are private by default, and limiting their exposure to certain features that encourage interaction with other TikTok users by restricting those features to users aged 16 and over.

On international data transfers, TikTok’s statement claims it uses “approved methods”. However, the picture is rather more complicated than that implies: transfers of Europeans’ data to China are complicated by there being no EU data adequacy agreement in place with China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the CJEU last year which invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis — and, where a DPA has concerns, it must step in and suspend those non-secure data flows.

The CJEU ruling means that merely using a mechanism like SCCs says nothing, on its own, about the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be proactive about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year provides detail on the so-called ‘supplementary measures’ that a data controller may be able to apply in order to increase the level of protection around a specific transfer so the information can be legally taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok would be able to apply such a fix, given how its platform and algorithms continuously mine users’ data to customize the content they see and to keep them engaged with TikTok’s ad platform.
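
To see why, here’s a minimal sketch of client-side encryption (using Python’s cryptography package, purely for illustration): if the key never leaves the user’s device, the server holds only ciphertext — which is exactly what a recommendation and ad-targeting business cannot work with.

```python
# Minimal sketch of why strong (client-side) encryption conflicts with a
# recommendation business: if the key never leaves the device, the server
# only holds ciphertext it cannot mine. Requires the 'cryptography' package.
from cryptography.fernet import Fernet

# --- on the user's device ---
key = Fernet.generate_key()  # stays on the device, never uploaded
token = Fernet(key).encrypt(b"watch history: cat videos, cooking clips")

# --- on the server, possibly in a third country ---
def build_recommendations(opaque_blob: bytes) -> list:
    # Without the key there is no readable behavioural signal to personalise
    # content from, so the server can store the blob but not exploit it.
    return []

print(build_recommendations(token))  # -> []

# --- back on the device, where the key lives ---
print(Fernet(key).decrypt(token))    # b'watch history: cat videos, cooking clips'
```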

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules, and only doing so years after the first complaints had been lodged.

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year the GDPR began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means that the new TikTok probes join the back of a much criticized bottleneck. And a decision on these probes isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, from this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.


What China’s new data privacy law means for US tech firms

China enacted a sweeping new data privacy law on August 20 that will dramatically impact how tech companies can operate in the country. Officially called the Personal Information Protection Law of the People’s Republic of China (PIPL), the law is the first national data privacy statute passed in China.

Modeled after the European Union’s General Data Protection Regulation, the PIPL imposes protections and restrictions on data collection and transfer that companies both inside and outside of China will need to address. It is particularly focused on apps that use personal information to target consumers or offer them different prices on products and services, and on preventing the transfer of personal information to other countries with weaker protections for data security.

The PIPL, slated to take effect on November 1, 2021, does not give companies a lot of time to prepare. Those that already follow GDPR practices, particularly if they’ve implemented it globally, will have an easier time complying with China’s new requirements. But firms that have not implemented GDPR practices will need to consider adopting a similar approach. In addition, U.S. companies will need to consider the new restrictions on the transfer of personal information from China to the U.S.

Implementation and compliance with the PIPL is a much more significant task for companies that have not implemented GDPR principles.

Here’s a deep dive into the PIPL and what it means for tech firms:

New data handling requirements

The PIPL introduces perhaps the most stringent set of requirements and protections for data privacy in the world (this includes special requirements relating to processing personal information by governmental agencies that will not be addressed here). The law broadly relates to all kinds of information, recorded by electronic or other means, related to identified or identifiable natural persons, but excludes anonymized information.

The following are some of the key new requirements for handling people’s personal information in China that will affect tech businesses:

Extra-territorial application of the China law

Historically, Chinese regulations have applied only to activities inside the country, and the PIPL similarly applies to personal information handling activities within Chinese borders. However, like the GDPR, it also extends to the handling of personal information outside China if any of the following conditions are met:

  • Where the purpose is to provide products or services to people inside China.
  • Where the purpose is to analyze or assess the activities of people inside China.
  • Other circumstances provided in laws or administrative regulations.

For example, if you are a U.S.-based company selling products to consumers in China, you may be subject to the China data privacy law even if you do not have a facility or operations there.

Data handling principles

The PIPL introduces principles of transparency, purpose and data minimization: Companies can only collect personal information for a clear, reasonable and disclosed purpose, and to the smallest scope for realizing the purpose, and retain the data only for the period necessary to fulfill that purpose. Any information handler is also required to ensure the accuracy and completeness of the data it handles to avoid any negative impact on personal rights and interests.
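
As a rough illustration of how those principles tend to translate into engineering practice, here’s a hedged sketch of purpose-bound collection and retention; the purposes, field names and retention periods are invented for the example rather than drawn from the PIPL’s text:

```python
# Rough sketch of purpose limitation, data minimisation and retention in code.
# Purposes, field names and retention periods are invented for the example.
from dataclasses import dataclass
from datetime import datetime, timedelta

PURPOSES = {
    "order_fulfilment": {"fields": {"name", "shipping_address"},
                         "retention": timedelta(days=180)},
    "fraud_check": {"fields": {"payment_token"},
                    "retention": timedelta(days=90)},
}

@dataclass
class Record:
    purpose: str
    data: dict
    collected_at: datetime

def collect(purpose: str, submitted: dict) -> Record:
    allowed = PURPOSES[purpose]["fields"]
    # Minimisation: keep only the fields needed for the disclosed purpose.
    return Record(purpose,
                  {k: v for k, v in submitted.items() if k in allowed},
                  datetime.utcnow())

def purge_expired(records: list, now: datetime) -> list:
    # Retention: hold data only as long as the stated purpose requires.
    return [r for r in records
            if now - r.collected_at < PURPOSES[r.purpose]["retention"]]
```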


UK offers cash for CSAM detection tech targeted at e2e encryption

The UK government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual exploitation material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around Internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584k) to five organizations (£85k/$117k each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within e2e encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of e2e encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp also already uses e2e encryption so that platform is already a clear target for whatever ‘safety’ technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use e2e encryption.

So there is potential for very widespread application of any ‘child safety tech’ developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in the Tory-friendly newspaper The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on users’ devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a ‘privacy-focused’ company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states who might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into e2e encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support ‘middleground’ solutions — denying the government is trying to encourage technologists to come up with ways to backdoor e2e encryption.

In recent years in the UK GCHQ has also floated the controversial idea of a so-called ‘ghost protocol’ — that would allow for state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of ‘middleground’ technologies that are able to scan e2e encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of e2e encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
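
As a very rough illustration of what ‘client-side’ detection in an e2ee environment could look like, here’s a simplified sketch; real-world systems rely on perceptual hashing that survives re-encoding, whereas the exact-match hashing below is only meant to show where such a check would sit in the stack — on the device, before encryption:

```python
# Simplified, illustrative sketch of where "client-side" hash-based detection
# would sit in an e2e encrypted flow: the check runs on the device, before
# encryption. Real systems use perceptual hashes (PhotoDNA-style) that survive
# resizing and re-encoding; plain SHA-256 is used here only for brevity.
import hashlib

KNOWN_HASHES = {
    # Placeholder entries standing in for a hash list supplied by an authority.
    "placeholder_hash_1",
    "placeholder_hash_2",
}

def send_attachment(image_bytes: bytes, encrypt, transmit, report) -> None:
    # 'encrypt', 'transmit' and 'report' are hypothetical callables supplied
    # by the messaging client; they are not part of any real library API.
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_HASHES:
        report(digest)                  # block the send and flag the match client-side
        return
    transmit(encrypt(image_bytes))      # e2e encryption happens only after the check
```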

General information about the Challenge — which is open to applicants based anywhere, not just in the UK — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that by keeping up the pressure on the tech sector platform giants will develop this stuff themselves, as Apple has been.

The Challenge is just the latest UK government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for e2e encryption to be outlawed, as they frequently have in the past.

That said, talk of ‘preventing’ the use of e2e encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But one change is already baked in: the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the UK’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.


After years of inaction against adtech, UK’s ICO calls for browser-level controls to fix ‘cookie fatigue’

In the latest quasi-throwback toward ‘do not track‘, the UK’s data protection chief has come out in favor of a browser- and/or device-level setting to allow Internet users to set “lasting” cookie preferences — suggesting this as a fix for the barrage of consent pop-ups that continues to infest websites in the region.

European web users digesting this development in an otherwise monotonously unchanging regulatory saga should be forgiven — not only for any sense of déjà vu they may experience — but also for wondering if they haven’t been mocked/gaslit quite enough already where cookie consent is concerned.

Last month, UK digital minister Oliver Dowden took aim at what he dubbed an “endless” parade of cookie pop-ups — suggesting the government is eyeing watering down consent requirements around web tracking as ministers consider how to diverge from European Union data protection standards, post-Brexit. (He’s slated to present the full sweep of the government’s data ‘reform’ plans later this month so watch this space.)

Today the UK’s outgoing information commissioner, Elizabeth Denham, stepped into the fray to urge her counterparts in G7 countries to knock heads together and coalesce around the idea of letting web users express generic privacy preferences at the browser/app/device level, rather than having to do it through pop-ups every time they visit a website.

In a statement announcing “an idea” she will present this week during a virtual meeting of fellow G7 data protection and privacy authorities — less pithily described in the press release as being “on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data” — Denham said: “I often hear people say they are tired of having to engage with so many cookie pop-ups. That fatigue is leading to people giving more personal data than they would like.

“The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience. While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

“There are nearly two billion websites out there taking account of the world’s privacy preferences. No single country can tackle this issue alone. That is why I am calling on my G7 colleagues to use our convening power. Together we can engage with technology firms and standards organisations to develop a coordinated approach to this challenge,” she added.

Contacted for more on this “idea”, an ICO spokeswoman reshuffled the words thusly: “Instead of trying to effect change through nearly 2 billion websites, the idea is that legislators and regulators could shift their attention to the browsers, applications and devices through which users access the web.

“In place of click-through consent at a website level, users could express lasting, generic privacy preferences through browsers, software applications and device settings – enabling them to set and update preferences at a frequency of their choosing rather than on each website they visit.”

Of course a browser-baked ‘Do not track’ (DNT) signal is not a new idea. It’s around a decade old at this point. Indeed, it could be called the idea that can’t die because it’s never truly lived — as earlier attempts at embedding user privacy preferences into browser settings were scuppered by lack of industry support.

However the approach Denham is advocating, vis-a-vis “lasting” preferences, may in fact be rather different to DNT — given her call for fellow regulators to engage with the tech industry, and its “standards organizations”, and come up with “practical” and “business friendly” solutions to the regional Internet’s cookie pop-up problem.

It’s not clear what consensus — practical or, er, simply pro-industry — might result from this call. If anything.

Indeed, today’s press release may be nothing more than Denham trying to raise her own profile since she’s on the cusp of stepping out of the information commissioner’s chair. (Never waste a good international networking opportunity and all that — her counterparts in the US, Canada, Japan, France, Germany and Italy are scheduled for a virtual natter today and tomorrow where she implies she’ll try to engage them with her big idea).

Her UK replacement, meanwhile, is already lined up. So anything Denham personally champions right now, at the end of her ICO chapter, may have a very brief shelf life — unless she’s set to parachute into a comparable role at another G7 caliber data protection authority.

Nor is Denham the first person to make a revived pitch for a rethink on cookie consent mechanisms — even in recent years.

Last October, for example, a US-centric tech-publisher coalition came out with what they called the Global Privacy Control (GPC) — aiming to build momentum for a browser-level pro-privacy signal to stop the sale of personal data, geared toward California’s Consumer Privacy Act (CCPA), though pitched as something that could have wider utility for Internet users.

By January this year they announced 40M+ users were making use of a browser or extension that supports GPC — along with a clutch of big name publishers signed up to honor it. But it’s fair to say its global impact so far remains limited. 
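
For context on what the GPC signal actually is at a technical level: participating browsers and extensions send a Sec-GPC: 1 request header (and expose a navigator.globalPrivacyControl property to page scripts). A site choosing to honour it server-side might do something like this sketch, with the downstream helpers left as hypothetical placeholders:

```python
# Sketch of honouring the Global Privacy Control signal server-side. Per the
# GPC proposal, participating browsers/extensions send a "Sec-GPC: 1" request
# header (and expose navigator.globalPrivacyControl to page scripts). The
# helper functions below are hypothetical placeholders.

def gpc_opt_out(headers: dict) -> bool:
    """True if the user agent has signalled a do-not-sell/share preference."""
    return headers.get("Sec-GPC") == "1"

def handle_request(headers: dict) -> None:
    if gpc_opt_out(headers):
        disable_third_party_sharing()  # hypothetical: suppress data sales/sharing
    else:
        run_consent_flow()             # hypothetical: fall back to the normal consent UX

def disable_third_party_sharing() -> None: ...
def run_consent_flow() -> None: ...

print(gpc_opt_out({"Sec-GPC": "1"}))  # True
print(gpc_opt_out({}))                # False
```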

More recently, European privacy group noyb published a technical proposal for a European-centric automated browser-level signal that would let regional users configure advanced consent choices — enabling the more granular controls it said would be needed to fully mesh with the EU’s more comprehensive (vs CCPA) legal framework around data protection.

The proposal, for which noyb worked with the Sustainable Computing Lab at the Vienna University of Economics and Business, is called Advanced Data Protection Control (ADPC). And noyb has called on the EU to legislate for such a mechanism — suggesting there’s a window of opportunity as lawmakers there are also keen to find ways to reduce cookie fatigue (a stated aim for the still-in-train reform of the ePrivacy rules, for example).

So there are some concrete examples of what practical, less fatiguing yet still pro-privacy consent mechanisms might look like to lend a little more color to Denham’s ‘idea’ — although her remarks today don’t reference any such existing mechanisms or proposals.

(When we asked the ICO for more details on what she’s advocating for, its spokeswoman didn’t cite any specific technical proposals or implementations, historical or contemporary, either, saying only: “By working together, the G7 data protection authorities could have an outsized impact in stimulating the development of technological solutions to the cookie consent problem.”)

So Denham’s call to the G7 does seem rather low on substance vs profile-raising noise.

In any case, the really big elephant in the room here is the lack of enforcement around cookie consent breaches — including by the ICO.

Add to that, there’s the now very pressing question of how exactly the UK will ‘reform’ domestic law in this area (post-Brexit) — which makes the timing of Denham’s call look, well, interestingly opportune. (And difficult to interpret as anything other than opportunistically opaque at this point.)

The adtech industry will of course be watching developments in the UK with interest — and would surely be cheering from the rooftops if domestic data protection ‘reform’ results in amendments to UK rules that allow the vast majority of websites to avoid having to ask Brits for permission to process their personal data, say by opting them into tracking by default (under the guise of ‘fixing’ cookie friction and cookie fatigue for them).

That would certainly be mission accomplished after all these years of cookie-fatigue-generating-cookie-consent-non-compliance by surveillance capitalism’s industrial data complex.

It’s not yet clear which way the UK government will jump — but eyebrows should be raised at the ICO writing today that it expects compliance with (current) UK law, when it has so roundly failed to tackle the adtech industry’s role in cynically sicking up said cookie fatigue by not taking any action against such systemic breaches.

The bald fact is that the ICO has — for years — avoided tackling adtech abuse of data protection, despite acknowledging publicly that the sector is wildly out of control.

Instead, it has opted for a cringing ‘process of engagement’ (read: appeasement) that has condemned UK Internet users to cookie pop-up hell.

This is why the regulator is being sued for inaction — after it closed a long-standing complaint against the security abuse of people’s data in real-time bidding ad auctions with nothing to show for it… So, yes, you can be forgiven for feeling gaslit by Denham’s call for action on cookie fatigue following the ICO’s repeat inaction on the causes of cookie fatigue…

Not that the ICO is alone on that front, however.

There has been a fairly widespread failure by EU regulators to tackle systematic abuse of the bloc’s data protection rules by the adtech sector — with a number of complaints (such as this one against the IAB Europe’s self-styled ‘transparency and consent framework’) still working, painstakingly, through the various labyrinthine regulatory processes.

France’s CNIL has probably been the most active in this area — last year slapping Amazon and Google with fines of $42M and $120M for dropping tracking cookies without consent, for example. (And before you accuse CNIL of being ‘anti-American’, it has also gone after domestic adtech.)

But elsewhere — notably Ireland, where many adtech giants are regionally headquartered — the lack of enforcement against the sector has allowed for cynical, manipulative and/or meaningless consent pop-ups to proliferate as the dysfunctional ‘norm’, while investigations have failed to progress and EU citizens have been forced to become accustomed, not to regulatory closure (or indeed rapture), but to an existentially endless consent experience that’s now being (re)branded as ‘cookie fatigue’.

Yes, even with the EU’s General Data Protection Regulation (GDPR) coming into application in 2018 and beefing up (in theory) consent standards.

This is why the privacy campaign group noyb is now lodging scores of complaints against cookie consent breaches — to try to force EU regulators to actually enforce the law in this area, even as it also finds time to put up a practical technical proposal that could help shrink cookie fatigue without undermining data protection standards. 

It’s a shining example of action that has yet to inspire the lion’s share of the EU’s actual regulators to act on cookies. The tl;dr is that EU citizens are still waiting for the cookie consent reckoning — even if there is now a bit of high level talk about the need for ‘something to be done’ about all these tedious pop-ups.

The problem is that while GDPR certainly cranked up the legal risk on paper, without proper enforcement it’s just a paper tiger. And the pushing around of lots of paper is very tedious, clearly. 

Most cookie pop-ups you’ll see in the EU are thus essentially privacy theatre; at the very least they’re unnecessarily irritating because they create ongoing friction for web users who must constantly respond to nags for their data (typically to repeatedly try to deny access if they can actually find a ‘reject all’ setting).

But — even worse — many of these pervasive pop-ups are actively undermining the law (as a number of studies have shown) because the vast majority do not meet the legal standard for consent.

So the cookie consent/fatigue narrative is actually a story of faux compliance, enabled by an enforcement vacuum that’s now also encouraging the watering down of privacy standards as a result of so much unpunished flouting of the law.

There is a lesson here, surely.

‘Faux consent’ pop-ups that you can easily stumble across when surfing the ‘ad-supported’ Internet in Europe include those failing to provide users with clear information about how their data will be used; or not offering people a free choice to reject tracking without being penalized (such as with no/limited access to the content they’re trying to access), or at least giving the impression that accepting is a requirement to access said content (dark pattern!); and/or otherwise manipulating a person’s choice by making it super simple to accept tracking and far, far, far more tedious to deny.

You can also still sometimes find cookie notices that don’t offer users any choice at all — and just pop up to inform that ‘by continuing to browse you consent to your data being processed’ — which, unless the cookies in question are literally essential for provision of the webpage, is basically illegal. (Europe’s top court made it abundantly clear in 2019 that active consent is a requirement for non-essential cookies.)
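
To make the legal point concrete, here’s a minimal sketch of what compliant behaviour looks like: strictly necessary cookies can be set without consent, while non-essential ones are only set after a recorded, affirmative opt-in (cookie names invented for the example):

```python
# Illustrative sketch of consent-gated cookies: strictly necessary cookies can
# be set without consent, but non-essential ones require a recorded, affirmative
# opt-in. 'Continuing to browse' or silence is treated as no consent. Cookie
# names are invented for the example.

ESSENTIAL = {"session_id"}
NON_ESSENTIAL = {"analytics_id", "ad_id"}

def cookies_to_set(consent: dict) -> set:
    allowed = set(ESSENTIAL)
    for name in NON_ESSENTIAL:
        if consent.get(name) is True:  # only an explicit opt-in counts
            allowed.add(name)
    return allowed

print(cookies_to_set({}))                      # {'session_id'}
print(cookies_to_set({"analytics_id": True}))  # {'session_id', 'analytics_id'}
```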

Nonetheless, to the untrained eye — and sadly there are a lot of them where cookie consent notices are concerned — it can look like it’s Europe’s data protection law that’s the ass because it seemingly demands all these meaningless ‘consent’ pop-ups, which just gloss over an ongoing background data grab anyway.

The truth is regulators should have slapped down these manipulative dark patterns years ago.

The problem now is that regulatory failure is encouraging political posturing — and, in a twisting double-back by the ICO, regulatory posturing around the idea that some newfangled mechanism is what’s really needed to remove all this universally inconvenient ‘friction’.

An idea like noyb’s ADPC does indeed look very useful in ironing out the widespread operational wrinkles wrapping the EU’s cookie consent rules. But when it’s the ICO suggesting a quick fix after the regulatory authority has failed so spectacularly over the long duration of complaints around this issue you’ll have to forgive us for being sceptical.

In such a context the notion of ‘cookie fatigue’ looks like it’s being suspiciously trumped up; fixed on as a convenient scapegoat to rechannel consumer frustration with hated online tracking toward high privacy standards — and away from the commercial data-pipes that demand all these intrusive, tedious cookie pop-ups in the first place — whilst neatly aligning with the UK government’s post-Brexit political priorities on ‘data’.

Worse still: The whole farcical consent pantomime — which the adtech industry has aggressively engaged in to try to sustain a privacy-hostile business model in spite of beefed up European privacy laws — could be set to end in genuine tragedy for user rights if standards end up being slashed to appease the law mockers.

The target of regulatory ire and political anger should really be the systematic law-breaking that’s held back privacy-respecting innovation and non-tracking business models — by making it harder for businesses that don’t abuse people’s data to compete.

Governments and regulators should not be trying to dismantle the principle of consent itself. Yet — at least in the UK — that does now look horribly possible.

Laws like the GDPR set high standards for consent which — if they were but robustly enforced — could lead to reform of highly problematic practices like behavioral advertising combined with the out-of-control scale of programmatic advertising.

Indeed, we should already be seeing privacy-respecting forms of advertising being the norm, not the alternative — free to scale.

Instead, thanks to widespread inaction against systematic adtech breaches, there has been little incentive for publishers to reform bad practices and end the irritating ‘consent charade’ — which keeps cookie pop-ups mushrooming forth, oftentimes with ridiculously lengthy lists of data-sharing ‘partners’ (i.e. if you do actually click through the dark patterns to try to understand what is this claimed ‘choice’ you’re being offered).

As well as being a criminal waste of web users’ time, we now have the prospect of attention-seeking, politically charged regulators deciding that all this ‘friction’ justifies giving data-mining giants carte blanche to torch user rights — if the intention is to fire up the G7 to send a collective invite to the tech industry to come up with “practical” alternatives to asking people for their consent to track them — and all because authorities like the ICO have been too risk averse to actually defend users’ rights in the first place.

Dowden’s remarks last month suggest the UK government may be preparing to use cookie consent fatigue as convenient cover for watering down domestic data protection standards — at least if it can get away with the switcheroo.

Nothing in the ICO’s statement today suggests it would stand in the way of such a move.

Now that the UK is outside the EU, the UK government has said it believes it has an opportunity to deregulate domestic data protection — although it may find there are legal consequences for domestic businesses if it diverges too far from EU standards.

Denham’s call to the G7 naturally includes a few EU countries (the biggest economies in the bloc) but by targeting this group she’s also seeking to engage regulators further afield — in jurisdictions that currently lack a comprehensive data protection framework. So if the UK moves, cloaked in rhetoric of ‘Global Britain’, to water down its (EU-based) high domestic data protection standards it will be placing downward pressure on international aspirations in this area — as a counterweight to the EU’s geopolitical ambitions to drive global standards up to its level.

The risk, then, is a race to the bottom on privacy standards among Western democracies — at a time when awareness about the importance of online privacy, data protection and information security has actually never been higher.

Furthermore, any UK move to weaken data protection also risks putting pressure on the EU’s own high standards in this area — as the regional trajectory would be down not up. And that could, ultimately, give succour to forces inside the EU that lobby against its commitment to a charter of fundamental rights — by arguing such standards undermine the global competitiveness of European businesses.

So while cookies themselves — or indeed ‘cookie fatigue’ — may seem an irritatingly small concern, the stakes attached to this tug of war around people’s rights over what can happen to their personal data are very high indeed.


WhatsApp faces $267M fine for breaching Europe’s GDPR

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once the regulation began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book, which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated the GDPR being applied).
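
One wrinkle the decision had to grapple with is WhatsApp’s so-called ‘lossy hashing’ of non-users’ phone numbers (touched on again below) — essentially, whether discarding part of a hash so that many numbers collide renders the data anonymous. The snippet below is a purely illustrative reconstruction of the general technique, not WhatsApp’s actual scheme:

```python
# Purely illustrative sketch of "lossy" hashing: hash a phone number, then keep
# only part of the digest so that many different numbers map to the same value.
# This is NOT WhatsApp's actual implementation; it only shows why regulators
# debate whether such output still counts as personal data.
import hashlib

def lossy_hash(phone_number: str, kept_bits: int = 16) -> int:
    digest = hashlib.sha256(phone_number.encode("utf-8")).digest()
    value = int.from_bytes(digest, "big")
    # Keep only the top `kept_bits` bits: with 16 bits there are ~65,536
    # buckets, so huge numbers of phone numbers collide in each one.
    return value >> (256 - kept_bits)

print(lossy_hash("+353870000001"))
print(lossy_hash("+353870000002"))
# The counter-argument: phone numbers are a small, enumerable input space, so
# buckets can often still be linked back to individuals with side information.
```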

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.


Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is the GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, which are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross border cases require agreement from all affected regulators — across the 27 Member States — so while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions), as has happened here in this WhatsApp case.

Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to its draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the DPC had also suggested an even tinier penalty in the first instance.

While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes (about WhatsApp’s lossy hashing and so forth) — the fact that ‘corrections’ are being made to its decisions, and that conclusions can land — if not jointly agreed, then at least via a consensus pushed through by the EDPB — is a sign that the process, while slow and creaky, is working. At least technically.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (those issues it doesn’t open an enquiry into or complaints it simply drops or ignores), with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU.

The associated conclusion for that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is the equivalent of a parking ticket for Facebook’s business empire, orders to change how such adtech giants are able to process people’s information at least have the potential to be a far more significant correction on problematic business models.

Again, though, time will be needed to tell whether such wider orders are having the sought-for impact.

In a statement reacting to the DPC’s WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, they raised concerns about the length of the appeals process and whether the DPC would make a muscular defence of a sanction it had been forced to increase by other EU DPAs.

“WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”


UK now expects compliance with children’s privacy design code

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year. However, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance, to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy hostile defaults).
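
For a sense of what ‘high privacy by default’ might look like in practice, here’s a minimal sketch of a settings factory in the spirit of the code’s standards; the setting names (and the adult-side defaults) are invented for the example:

```python
# Minimal sketch of 'high privacy by default' for child accounts, in the spirit
# of the code's standards. Setting names and the adult defaults are invented
# for the example.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    geolocation_enabled: bool
    profiling_enabled: bool
    account_private: bool

def default_settings(is_or_likely_child: bool) -> PrivacySettings:
    if is_or_likely_child:
        # Geolocation and profiling off, account private, unless there is a
        # compelling and documented justification to do otherwise.
        return PrivacySettings(geolocation_enabled=False,
                               profiling_enabled=False,
                               account_private=True)
    return PrivacySettings(geolocation_enabled=False,   # adult defaults are a
                           profiling_enabled=True,      # product decision, shown
                           account_private=False)       # here only for contrast
```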

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional, psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
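
The ‘whichever is higher’ formula is simple enough to show in a couple of lines (the turnover figure below is a made-up example):

```python
# The UK GDPR maximum penalty: the greater of £17.5M or 4% of annual worldwide
# turnover. The turnover passed in below is a made-up example figure.
def max_uk_gdpr_fine(annual_worldwide_turnover_gbp: float) -> float:
    return max(17_500_000, 0.04 * annual_worldwide_turnover_gbp)

print(max_uk_gdpr_fine(2_000_000_000))  # 80000000.0: 4% of £2B beats the £17.5M floor
```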

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under-18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety-focused tweaks. Then in August, Google announced similar changes for accounts on its video-sharing platform, YouTube.

A few days later TikTok also said it would add more privacy protections for teens, having already made earlier changes to privacy defaults for under-18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’s COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations in scope of the code on how to tackle that tricky piece, though it’s still not clear how hard a requirement the ICO will push for, with Bonner suggesting the approach could involve actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction vs the data minimization push in the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyber bullying and CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.

 

#data-processing, #data-protection, #encryption, #europe, #general-data-protection-regulation, #google, #human-rights, #identity-management, #instagram, #online-harms, #online-retail, #online-safety, #policy, #privacy, #regulatory-compliance, #social-issues, #social-media, #social-media-platforms, #tc, #tiktok, #uk-government, #united-kingdom, #united-states

UK names John Edwards as its choice for next data protection chief as gov’t eyes watering down privacy standards

The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.

Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.

He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal Edwards publicly announced that he was deleting his account with the social network — accusing Facebook of not complying with the country’s privacy laws.

An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.

If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.

Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.

But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.

For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.

Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.

A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.

The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.

Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.

Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.

“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.

The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.

If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.

It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.

We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.

Data protection experts are already warning of a regulatory stooge.

The Telegraph, meanwhile, suggests Edwards is seen by the government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.

In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.

All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.

In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”

The lurking iceberg for the government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its/the market’s choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.

You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…

UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.

Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.

The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.

This is because its current data adequacy deal with the bloc — which allows for EU citizens’ data to continue flowing freely to the UK — was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR. So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy. Meaning the deal could topple.

Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”.

Add to that, the adequacy deal is also the first with a baked-in sunset clause — meaning it will automatically expire in four years. So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.

The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.

Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre and Colombia.

Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.

“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.

As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).

So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.

Everyone hates cookie banners, sure, but that’s a case for strengthening not weakening people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on citizens’ data.

 

#artificial-intelligence, #australia, #brazil, #colombia, #data-mining, #data-protection, #data-security, #digital-rights, #elizabeth-denham, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #human-rights, #india, #indonesia, #john-edwards, #kenya, #korea, #matt-hancock, #new-zealand, #nhs, #oliver-dowden, #privacy, #singapore, #social-issues, #social-media, #uk-government, #united-kingdom, #united-states

Evervault’s ‘encryption as a service’ is now open access

Dublin-based Evervault, a developer-focused security startup which sells encryption via API and is backed by a raft of big name investors including the likes of Sequoia, Kleiner Perkins and Index Ventures, is coming out of closed beta today — announcing open access to its encryption engine.

The startup says some 3,000 developers are on its waitlist to kick the tyres of its encryption engine, which it calls E3.

Among “dozens” of companies in its closed preview are drone delivery firm Manna, fintech startup Okra, and healthtech company Vital. Evervault says it’s targeting its tools at developers at companies with a core business need to collect and process four types of data: Identity & contact data; Financial & transaction data; Health & medical data; and Intellectual property.

The first products it offers on E3 are called Relay and Cages: the former provides a new way for developers to encrypt and decrypt data as it passes in and out of apps; the latter offers a secure method — using trusted execution environments running on AWS — to process encrypted data by isolating the code that processes plaintext data from the rest of the developer stack.

Evervault is the first company to get a product deployed on Amazon Web Services’ Nitro Enclaves, per founder Shane Curran.

“Nitro Enclaves are basically environments where you can run code and prove that the code that’s running in the data itself is the code that you’re meant to be running,” he tells TechCrunch. “We were the first production deployment of a product on AWS Nitro Enclaves — so in terms of the people actually taking that approach we’re the only ones.”

It shouldn’t be news to anyone to say that data breaches continue to be a serious problem online. And unfortunately it’s sloppy security practices by app makers — or even a total lack of attention to securing user data — that’s frequently to blame when plaintext data leaks or is improperly accessed.

Evervault’s fix for this unfortunate ‘feature’ of the app ecosystem is to make it super simple for developers to bake in encryption via an API — taking the strain of tasks like managing encryption keys. (“Integrate Evervault in 5 minutes by changing a DNS record and including our SDK,” is the developer-enticing pitch on its website.)
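To make that pitch concrete, here is a minimal sketch of the pattern the company describes: an encryption layer that holds the keys and does the cryptography, while the app itself only ever stores ciphertext. It is purely illustrative, written against Python’s widely used `cryptography` package rather than Evervault’s actual SDK, and the class and method names are our own.

```python
# Toy illustration of the key/data split described above. This is NOT
# Evervault's SDK or API; it uses the open-source `cryptography` package
# (pip install cryptography) to mimic the pattern: the "relay" side holds
# keys but stores no data, while the app stores ciphertext but never keys.
from cryptography.fernet import Fernet


class EncryptionRelay:
    """Stands in for a hosted encryption service: holds keys, keeps no data."""

    def __init__(self):
        self._key = Fernet.generate_key()  # key material never leaves this object

    def encrypt(self, plaintext: str) -> bytes:
        return Fernet(self._key).encrypt(plaintext.encode())

    def decrypt(self, ciphertext: bytes) -> str:
        # In the pattern described above this would only happen inside an
        # isolated processing environment, not in the app's own stack.
        return Fernet(self._key).decrypt(ciphertext).decode()


class App:
    """Stands in for the developer's app: stores ciphertext only, never keys."""

    def __init__(self, relay: EncryptionRelay):
        self.relay = relay
        self.records = []

    def ingest(self, customer_name: str) -> None:
        self.records.append(self.relay.encrypt(customer_name))


if __name__ == "__main__":
    relay = EncryptionRelay()
    app = App(relay)
    app.ingest("Jane Doe")
    print(app.records[0])                 # ciphertext is all the app holds
    print(relay.decrypt(app.records[0]))  # plaintext only via the key holder
```

The point of the split is that a breach of the app’s own database yields only ciphertext, while the key-holding side never accumulates a data store worth stealing.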

“At the high level what we’re doing… is we’re really focusing on getting companies from [a position of] not approaching security and privacy from any perspective at all — up and running with encryption so that they can actually, at the very least, start to implement the controls,” says Curran.

“One of the biggest problems that companies have these days is they basically collect data and the data sort of gets sprawled across both their implementation and their test sets as well. The benefit of encryption is that  you know exactly when data was accessed and how it was accessed. So it just gives people a platform to see what’s happening with the data and start implementing those controls themselves.”

With C-Suite executives paying increasing mind to the need to properly secure data — thanks to years of horrific data breach scandals (and breach déjà vu), and also because of updated data protection laws like Europe’s General Data Protection Regulation (GDPR) which has beefed up penalties for lax security and data misuse — a growing number of startups are now pitching services that promise to deliver ‘data privacy’, touting tools they claim will protect data while still enabling developers to extract useful intel.

Evervault’s website also deploys the term “data privacy” — which it tells us it defines to mean that “no unauthorized party has access to plaintext user/customer data; users/customers and authorized developers have full control over who has access to data (including when and for what purpose); and, plaintext data breaches are ended”. (So encrypted data could, in theory, still leak — but the point is the information would remain protected as a result of still being robustly encrypted.)

Among a number of techniques being commercialized by startups in this space is homomorphic encryption — a process that allows for analysis of encrypted data without the need to decrypt the data.
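For a flavour of what computing on ciphertext looks like in practice, here is a minimal sketch using the open-source python-paillier (`phe`) package. It is an illustration of ours, not anything Evervault has shipped; Paillier is only additively (partially) homomorphic rather than fully homomorphic, but it shows why the technique appeals: the party doing the arithmetic never handles plaintext.

```python
# Minimal sketch of computing on encrypted data, assuming the open-source
# python-paillier package (pip install phe). Paillier supports addition of
# ciphertexts and multiplication by plaintext scalars (a partial, not
# "fully", homomorphic scheme), but the core idea is visible: the server
# does arithmetic without ever seeing the underlying values.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts two values and hands over only the ciphertexts.
enc_a = public_key.encrypt(42_000)
enc_b = public_key.encrypt(38_500)

# The processing party computes on ciphertexts alone.
enc_total = enc_a + enc_b   # ciphertext + ciphertext
enc_scaled = enc_total * 2  # ciphertext * plaintext scalar

# Only the private key holder can read the results.
print(private_key.decrypt(enc_total))   # 80500
print(private_key.decrypt(enc_scaled))  # 161000
```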

Evervault’s first offering doesn’t go that far — although its ‘encryption manifesto’ notes that it’s keeping a close eye on the technique. And Curran confirms it is likely to incorporate the approach in time. But he says its first focus has been to get E3 up and running with an offering that can help a broad swathe of developers.

“Fully homomorphic [encryption] is great. The biggest challenge if you’re targeting software developers who are building normal services it’s very hard to build general purpose applications on top of it. So we take another approach — which is basically using trusted execution environments. And we worked with the Amazon Web Services team on being their first production deployment of their new product called Nitro Enclaves,” he tells TechCrunch.

“The bigger focus for us is less about the underlying technology itself and it’s more about taking what the best security practices are for companies that are already investing heavily in this and just making them accessible to average developers who don’t even know how encryption works,” Curran continues. “That’s where we get the biggest nuance of Evervault vs some of these other privacy and security companies — we build for developers who don’t normally think about security when they’re building things and try to build a great experience around that… so it’s really just about bridging the gap between ‘the state of the art’ and bringing it to average developers.”

“Over time fully homomorphic encryption is probably a no-brainer for us but both in terms of performance and flexibility for your average developer to get up and running it didn’t really make sense for us to build on it in its current form. But it’s something we’re looking into. We’re really looking at what’s coming out of academia — and if we can fit it in there. But in the meantime it’s all this trusted execution environment,” he adds.

Curran suggests Evervault’s main competitor at this point is open source encryption libraries — so basically developers opting to ‘do’ the encryption piece themselves. Hence it’s zeroing in on the service aspect of its offering; taking on encryption management tasks so developers don’t have to, while also reducing their security risk by ensuring they don’t have to touch data in the clear.

“When we’re looking at those sort of developers — who’re already starting to think about doing it themselves — the biggest differentiator with Evervault is, firstly the speed of integration, but more importantly it’s the management of encrypted data itself,” Curran suggests. “With Evervault we manage the keys but we don’t store any data and our customers store encrypted data but they don’t store keys. So it means that even if they want to encrypt something with Evervault they never have all the data themselves in plaintext — whereas with open source encryption they’ll have to have it at some point before they do the encryption. So that’s really the base competitor that we see.”

“Obviously there are some other projects out there — like Tim Berners-Lee’s Solid project and so on. But it’s not clear that there’s anybody else taking the developer-experience focused approach to encryption specifically. Obviously there’s a bunch of API security companies… but encryption through an API is something we haven’t really come across in the past with customers,” he adds.

While Evervault’s current approach sees app makers’ data hosted in dedicated trusted execution environments running on AWS, the information still exists there as plaintext — for now. But as encryption continues to evolve it’s possible to envisage a future where apps aren’t just encrypted by default (Evervault’s stated mission is to “encrypt the web”) but where user data, once ingested and encrypted, never needs to be decrypted — as all processing can be carried out on ciphertext.

Homomorphic encryption has unsurprisingly been called the ‘holy grail’ of security and privacy — and startups like Duality are busy chasing it. But the reality on the ground, online and in app stores, remains a whole lot more rudimentary. So Evervault sees plenty of value in getting on with trying to raise the encryption bar more generally.

Curran also points out that plenty of developers aren’t actually doing much processing of the data they gather — arguing therefore that caging plaintext data inside a trusted execution environment can thus abstract away a large part of the risk related to these sort of data flows anyway. “The reality is most developers who are building software these days aren’t necessarily processing data themselves,” he suggests. “They’re actually just sort of collecting it from their users and then sharing it with third party APIs.

“If you look at a startup building something with Stripe — the credit card flows through their systems but it always ends up being passed on somewhere else. I think that’s generally the direction that most startups are going these days. So you can trust the execution — depending on the security of the silicon in an Amazon data center kind of makes the most sense.”

On the regulatory side, the data protection story is a little more nuanced than the typical security startup spin.

While Europe’s GDPR certainly bakes security requirements into law, the flagship data protection regime also provides citizens with a suite of access rights attached to their personal data — a key element that’s often overlooked in developer-first discussions of ‘data privacy’.

Evervault concedes that data access rights haven’t been front of mind yet, with the team’s initial focus being squarely on encryption. But Curran tells us it plans — “over time” — to roll out products that will “simplify access rights as well”.

“In the future, Evervault will provide the following functionality: Encrypted data tagging (to, for example, time-lock data usage); programmatic role-based access (to, for example, prevent an employee seeing data in plaintext in a UI); and, programmatic compliance (e.g. data localization),” he further notes on that.

 

#api, #aws, #cryptography, #developer, #dublin, #encryption, #europe, #evervault, #general-data-protection-regulation, #homomorphic-encryption, #nitro-enclaves, #okra, #privacy, #security, #sequoia, #shane-curran, #tim-berners-lee

InfoSum raises $65M Series B as organizations embrace secure data sharing

InfoSum, a London-based startup that provides a decentralized platform for secure data sharing between organizations, has secured a $65 million Series B funding round led by Chrysalis Investments.

The investment comes less than a year after InfoSum closed a $15.1 million Series A round co-led by Upfront Ventures and IA Ventures. Since then, the data privacy startup has tripled its revenue, doubled its employee base, and secured more than fifty new customers, including AT&T, Disney, Omnicom and Merkle.

Its growth was boosted by businesses that are increasingly focused on data privacy, largely as a result of the mass shift to remote working and cloud-based collaboration necessitated by the pandemic. InfoSum’s data collaboration platform uses patented technology to connect customer records between and amongst companies, without moving or sharing data. It helps organizations to alleviate security concerns, according to the startup, and is compliant with all current privacy laws, including GDPR.

The platform was bolstered earlier this year with the launch of InfoSum Bridge, a product which it claims significantly expands the customer identity linking capabilities of its platform. It is designed to connect advertising identifiers along with its own “bunkered” data sets to better facilitate ad targeting based on first-party data.

“The technology that enables companies to safely and securely compare customer data is thankfully entering a new phase, driven by privacy-conscious consumers and companies focused on value and control. InfoSum is proud to be leading the way,” said Brian Lesser, chairman and CEO of InfoSum. “Companies are looking for solutions to help resolve the existing friction and inefficiencies around data collaboration, and InfoSum is the company to drive this growth forward.”

The company, which says it is poised for “exponential growth” in 2021 as businesses continue to embrace privacy-focused tools and software, will use the newly raised investment to accelerate hiring across every aspect of its business, expand into new regions, and further the development of its platform.

Nick Halstead, who previously founded and led big data startup DataSift, founded InfoSum (then called CognitiveLogic) in 2015 with a vision to connect the world’s data without ever sharing it. The company currently has 80 employees spread across offices in the U.S., the U.K., and Germany.

#articles, #att, #chrysalis, #cloud-computing, #data-security, #datasift, #disney, #funding, #general-data-protection-regulation, #germany, #human-rights, #ia-ventures, #identity-management, #infosum, #london, #merkle, #nick-halstead, #omnicom, #open-data-institute, #privacy, #security, #social-issues, #united-kingdom, #united-states, #upfront-ventures

Stop using Zoom, Hamburg’s DPA warns state government

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellory’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR) since user data is transferred to the US for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the US (Privacy Shield), finding US surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However a number of European DPAs are now investigating the use of US-based digital services because of the data transfer issue, and in some instances publicly warning against the use of mainstream US tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from US giants Amazon and Microsoft over the same data transfer concern.

At the same time, negotiations between the European Commission and the Biden administration to seek a replacement data transfer deal remain ongoing. However EU lawmakers have repeatedly warned against any quick fix — saying reform of US surveillance law is likely required before there can be a revived Privacy Shield. And as the legal limbo continues a growing number of public bodies in Europe are facing pressure to ditch US-based services in favor of compliant local alternatives.

In the Hamburg case, the DPA says it took the step of issuing the Senate Chancellory with a public warning after the body did not provide an adequate response to concerns raised earlier.

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021 but says the Senate Chancellory failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA taking the step of a formal warning, under Article 58 (2) (a) of the GDPR.

In a statement, Ulrich Kühn, the acting Hamburg commissioner for data protection and freedom of information, dubbed it “incomprehensible” that the regional body was continuing to flout EU law in order to use Zoom — pointing out that a local alternative, provided by the German company Dataport (which supplies software to a number of state, regional and local government bodies) is readily available.

In the statement [translated with Google Translate], Kühn said: “Public bodies are particularly bound to comply with the law. It is therefore more than regrettable that such a formal step had to be taken. At the [Senate Chancellery of the Free and Hanseatic City of Hamburg], all employees have access to a tried and tested video conference tool that is unproblematic with regard to third-country transmission. As the central service provider, Dataport also provides additional video conference systems in its own data centers. These are used successfully in other regions such as Schleswig-Holstein. It is therefore incomprehensible why the Senate Chancellery insists on an additional and legally highly problematic system.”

We’ve reached out to the Hamburg DPA and Senate Chancellory with questions.

Zoom has also been contacted for comment.

#data-protection, #data-security, #dataport, #digital-rights, #eu-us-privacy-shield, #europe, #european-commission, #european-union, #general-data-protection-regulation, #government, #hamburg, #personal-data, #privacy, #schrems-ii, #surveillance-law, #united-states, #video-conferencing, #zoom

EU hits Amazon with record-breaking $887M GDPR fine over data misuse

Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887m) GDPR fine over the way it uses customer data for targeted advertising purposes.

Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”

“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.

“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”

The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, a group that claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.

La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”

“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.

The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.

The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.

#amazon, #apple, #big-tech, #companies, #computing, #data-protection, #data-security, #europe, #european-commission, #facebook, #general-data-protection-regulation, #google, #policy, #privacy, #spokesperson, #tc, #u-s-securities-and-exchange-commission

Controversial WhatsApp policy change hit with consumer law complaint in Europe

Facebook has been accused of multiple breaches of European Union consumer protection law as a result of its attempts to force WhatsApp users to accept controversial changes to the messaging platform’s terms of use — such as threatening users that the app would stop working if they did not accept the updated policies by May 15.

The consumer protection association umbrella group, the Beuc, said today that together with eight of its member organizations it’s filed a complaint with the European Commission and with the European network of consumer authorities.

“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.

“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”

After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.

However the app continues to bug users to accept the update — with no option not to do so (users can close the policy prompt but are unable to decline the new terms or stop the app continuing to pop-up a screen asking them to accept the update).

“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”

The organization pointed out that WhatsApp’s policy updates remain under scrutiny by privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.

And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”

The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:

“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”

Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:

“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”

The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.

The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.

Although, earlier this year, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.

German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).

Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.

In the meanwhile, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.

There seems to be a growing appetite on both sides of the Atlantic for a joined-up approach to regulating platform power and ensuring powerful platforms don’t simply get let off the hook.

 

#beuc, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #ireland, #policy, #privacy, #social, #social-media, #whatsapp

Kill the standard privacy notice

Privacy is a word on everyone’s mind nowadays — even Big Tech is getting in on it. Most recently, Apple joined the user privacy movement with its App Tracking Transparency feature, a cornerstone of the iOS 14.5 software update. Earlier this year, Tim Cook even mentioned privacy in the same breath as the climate crisis and labeled it one of the top issues of the 21st century.

Apple’s solution is a strong move in the right direction and sends a powerful message, but is it enough? Ostensibly, it relies on users to get informed about how apps track them and, if they wish to, regulate or turn off the tracking. In the words of Soviet satirists Ilf and Petrov, “The cause of helping the drowning is in the drowning’s own hands.” It’s a system that, historically speaking, has not produced great results.

Today’s online consumer is drowning indeed — in the deluge of privacy policies, cookie pop-ups, and various web and app tracking permissions. New regulations just pile more privacy disclosures on, and businesses are mostly happy to oblige. They pass the information burden to the end user, whose only rational move is to accept blindly because reading through the heaps of information does not make sense rationally, economically or subjectively. To save that overburdened consumer, we have only one option: We have to kill the standard privacy notice.

A notice that goes unnoticed

Studies show that online consumers often struggle with standard-form notices. A majority of online users expect that if a company has published a document with the title “privacy notice” or “privacy policy” on its website, then it will not collect, analyze or share their personal information with third parties. At the same time, a similar majority of consumers have serious concerns about being tracked and targeted for intrusive advertising.


It’s a privacy double whammy. To get on the platform, users have to accept the privacy notice. By accepting it, they allow tracking and intrusive ads. If they actually read the privacy notice before accepting, that costs them valuable time and can be challenging and frustrating. If Facebook’s privacy policy is as hard to comprehend as German philosopher Immanuel Kant’s “Critique of Pure Reason,” we have a problem. In the end, the option to decline is merely a formality; not accepting the privacy policy means not getting access to the platform.

So, what use is the privacy notice in its current form? For companies, on the one hand, it legitimizes their data-processing practices. It’s usually a document created by lawyers, for lawyers, without a second’s thought for the interests of real users. Safe in the knowledge that nobody reads such disclosures, some businesses not only deliberately fail to make the text understandable, they pack it with all kinds of silly or refreshingly honest content.

One company even claimed its users’ immortal souls and their right to eternal life. For consumers, on the other hand, the obligatory checkmark next to the privacy notice can be a nuisance — or it can lull them into a false sense of data security.

On the unlikely occasion that a privacy notice is so blatantly disagreeable that it pushes users away from one platform and toward an alternative, this is often not a real solution, either. Monetizing data has become the dominant business model online, and personal data ultimately flows toward the same Big Tech giants. Even if you’re not directly on their platforms, many of the platforms you are on work with Big Tech through plugins, buttons, cookies and the like. Resistance seems futile.

A regulatory framework from another time

If companies are deliberately producing opaque privacy notices that nobody reads, maybe lawmakers and regulators could intervene and help improve users’ data privacy? Historically, this has not been the case. In pre-digital times, lawmakers were responsible for a multitude of pre-contractual disclosure mandates that resulted in the heaps of paperwork that accompany leasing an apartment, buying a car, opening a bank account or taking out a mortgage.

When it comes to the digital realm, legislation has been reactive, not proactive, and it lags behind technological development considerably. It took the EU about two decades of Google and one decade of Facebook to come up with the General Data Protection Regulation, a comprehensive piece of legislation that still does not rein in rampant data collection practices. This is just a symptom of a larger problem: Today’s politicians and legislators do not understand the internet. How do you regulate something if you don’t know how it works?

Many lawmakers on both sides of the Atlantic often do not understand how tech companies operate and how they make their money with user data — or pretend not to understand for various reasons. Instead of tackling the issue themselves, legislators ask companies to inform the users directly, in whatever “clear and comprehensible” language they see fit. It’s part laissez-faire, part “I don’t care.”

Thanks to this attitude, we are fighting 21st-century challenges — such as online data privacy, profiling and digital identity theft — with the legal logic of Ancient Rome: consent. Not to knock Roman law, but Marcus Aurelius never had to read the iTunes Privacy Policy in full.

Online businesses and major platforms, therefore, gear their privacy notices and other relevant data disclosures toward obtaining consent, not toward educating and explaining. It keeps the data flowing and it makes for great PR when the opportunity for a token privacy gesture appears. Still, a growing number of users are waking up to the setup. It is time for a change.

A call to companies to do the right thing

We have seen that it’s difficult for users to understand all the “legalese,” and they have nowhere to go even if they did. We have also noted lawmakers’ inadequate knowledge and motivation to regulate tech properly. It is up to digital businesses themselves to act, now that growing numbers of online users are stating their discontent and frustration. If data privacy is one of our time’s greatest challenges, it requires concerted action. Just like countries around the world pledged to lower their carbon emissions, enterprises must also band together and commit to protecting their users’ privacy.

So, here’s a plea to tech companies large and small: Kill your standard privacy notices! Don’t write texts that almost no user understands to protect yourselves against potential legal claims so that you can continue collecting private user data. Instead, use privacy notices that are addressed to your users and that everybody can understand.

And don’t stop there — don’t only talk the talk but walk the walk: Develop products that do not rely on the collection and processing of personal data. Return to the internet’s open-source, protocol roots, and deliver value to your community, not to Big Tech and their advertisers. It is possible, it is profitable and it is rewarding.

#apple, #column, #data-protection, #data-security, #digital-rights, #european-union, #facebook, #general-data-protection-regulation, #google, #human-rights, #opinion, #privacy, #privacy-policy, #tc, #terms-of-service

Italy’s DPA fines Glovo-owned Foodinho $3M, orders changes to algorithmic management of riders

Algorithmic management of gig workers has landed Glovo-owned on-demand delivery firm Foodinho in trouble in Italy where the country’s data protection authority issued a €2.6 million penalty (~$3M) yesterday after an investigation found a laundry list of problems.

The delivery company has been ordered to make a number of changes to how it operates in the market, with the Garante’s order giving it two months to correct the most serious violations found, and a further month (so three months total) to amend how its algorithms function — to ensure compliance with privacy legislation, Italy’s workers’ statute and recent legislation protecting platform workers.

One of the issues of concern to the data watchdog is the risk of discrimination arising from a rider rating system operated by Foodinho — which had some 19,000 riders operating on its platform in Italy at the time of the Garante’s investigation.

Likely of relevance here is a long-running litigation brought by riders gigging for another food delivery brand in Italy, Foodora, which culminated in a ruling by the country’s Supreme Court last year asserting that riders should be treated as having workers’ rights, regardless of whether they are employed or self-employed — bolstering the case for challenges against delivery apps that apply algorithms to opaquely micromanage platform workers’ labor.

In the injunction against Foodinho, Italy’s DPA says it found numerous violations of privacy legislation, as well as a risk of discrimination against gig workers based on how Foodinho’s booking and assignments algorithms function, in addition to flagging concerns over how the system uses ratings and reputational mechanisms as further levers of labor control.

Article 22 of the European Union’s General Data Protection Regulation (GDPR) provides protections for individuals against being solely subject to automated decision-making including profiling where such decisions produce a legal or similarly substantial effect (and access to paid work would meet that bar) — giving them the right to get information on a specific decision and object to it and/or ask for human review.

But it does not appear that Foodinho provided riders with such rights, per the Garante’s assessment.

In a press release about the injunction (which we’ve translated from Italian with Google Translate), the watchdog writes:

“The Authority found a series of serious offences, in particular with regard to the algorithms used for the management of workers. The company, for example, had not adequately informed the workers on the functioning of the system and did not guarantee the accuracy and correctness of the results of the algorithmic systems used for the evaluation of the riders. Nor did it guarantee procedures to protect the right to obtain human intervention, express one’s opinion and contest the decisions adopted through the use of the algorithms in question, including the exclusion of a part of the riders from job opportunities.

“The Guarantor has therefore required the company to identify measures to protect the rights and freedoms of riders in the face of automated decisions, including profiling.”

The watchdog also says it has asked Foodinho to verify the “accuracy and relevance” of data that feeds the algorithmic management system — listing a wide variety of signals that are factored in (such as chats, emails and phone calls between riders and customer care; geolocation data captured every 15 seconds and displayed on the app map; estimated and actual delivery times; details of the management of the order in progress and those already made; customer and partner feedback; remaining battery level of device etc).

“This is also in order to minimize the risk of errors and distortions which could, for example, lead to the limitation of the deliveries assigned to each rider or to the exclusion itself from the platform. These risks also arise from the rating system,” it goes on, adding: “The company will also need to identify measures that prevent improper or discriminatory use of reputational mechanisms based on customer and business partner feedback.”

Glovo, Foodinho’s parent entity — which is named as the owner of the platform in the Garante’s injunction — was contacted for comment.

A company spokesperson told us they were discussing a response — so we’ll update this report if we get one.

Glovo acquired the Italian food delivery company Foodinho back in 2016, making its first foray into international expansion. The Barcelona-based business went on to try to build out a business in the Middle East and LatAm — before retrenching back to largely focus on Southern and Eastern Europe. (In 2018 Glovo also picked up the Foodora brand in Italy, which had been owned by German rival Delivery Hero.)

The Garante says it collaborated with Spain’s privacy watchdog, the AEPD — which is Glovo’s lead data protection supervisor under the GDPR — on the investigation into Foodinho and the platform tech provided to it by Glovo.

Its press release also notes that Glovo is the subject of “an independent procedure” carried out by the AEPD, which it says it’s also assisting with.

The Spanish watchdog confirmed to TechCrunch that joint working between the AEPD and the Garante had resulted in the resolution against the Glovo-owned company, Foodinho.

The AEPD also said it has undertaken its own procedures against Glovo — pointing to a 2019 sanction related to the latter not appointing a data protection officer, as is required by the GDPR. The watchdog later issued Glovo with a fine of €25,000 for that compliance failure.

However it’s not clear why the AEPD has — seemingly — not taken a deep dive look at Glovo’s own compliance with Article 22 of the GDPR. (We’ve asked it for more on this and will update if we get a response.)

It did point us to recently published guidance on data protection and labor relations, which it worked on with Spain’s Ministry of Labor and employers’ and trade union organizations, and which it said includes information on the right of a works council to be informed by a platform company of the parameters on which the algorithms or artificial intelligence systems are based — including “the elaboration of profiles, which may affect the conditions, access and maintenance of employment”.

Earlier this year the Spanish government agreed upon a labor reform to expand the protections available to platform workers by recognizing platform couriers as employees.

The amendments to the Spanish Workers Statute Law were approved by Royal Decree in May — but aren’t due to start being applied until the middle of next month, per El Pais.

Notably, the reform also contains a provision that requires workers’ legal representatives to be informed of the criteria powering any algorithms or AI systems that are used to manage them and which may affect their working conditions — such as those affecting access to employment or rating systems that monitor performance or profile workers. And that additional incoming algorithmic transparency provision has evidently been factored into the AEPD’s guidance.

So it may be that the watchdog is giving affected platforms like Glovo a few months’ grace to allow them to get their systems in order for the new rules.

Spanish labor law also, of course, remains distinct from Italian law, so there will be ongoing differences of application in elements that concern delivery apps, regardless of what appears to be a similar trajectory on the issue of expanding platform workers’ rights.

Back in January, for example, an Italian court found that a reputation-ranking algorithm that had been used by another on-demand delivery app, Deliveroo, had discriminated against riders because it had failed to distinguish between legally protected reasons for withholding labour (e.g., because a rider was sick; or exercising their protected right to strike) and other reasons for not being as productive as they’d indicated they would be.

In that case, Deliveroo said the judgement referred to a historic booking system that it said was no longer used in Italy or any other markets.

More recently, a tribunal in Bologna found a collective bargaining agreement signed by AssoDelivery — a trade association that represents a number of delivery platforms in the market (including Deliveroo and Glovo) — and the UGL trade union, a minority union with far-right affiliations, to be unlawful.

Deliveroo told us it planned to appeal that ruling.

The agreement attracted controversy because it derogates unfavorably from Italian law protecting workers, and because the signing union is not considered representative enough in the sector.

Zooming out, EU lawmakers are also looking at the issue of platform workers’ rights — kicking off a consultation in February on how to improve working conditions for gig workers, with the possibility that Brussels could propose legislation later this year.

However platform giants have seen the exercise as an opportunity to lobby for deregulation — pushing to reduce employment standards for gig workers across the EU. The strategy looks intended to circumvent, or at least limit momentum for, beefed-up rules coming at a national level, such as Spain’s labor reform.

#algorithmic-accountability, #artificial-intelligence, #barcelona, #deliveroo, #delivery-hero, #europe, #european-union, #food-delivery, #gdpr, #general-data-protection-regulation, #glovo, #italy, #labor, #online-food-ordering, #policy, #privacy, #spain

Dutch court will hear another Facebook privacy lawsuit

Privacy litigation that’s being brought against Facebook by two not-for-profits in the Netherlands can go ahead, an Amsterdam court has ruled. The case will be heard in October.

Since 2019, the Amsterdam-based Data Privacy Foundation (DPS) has been seeking to bring a case against Facebook over its rampant collection of Internet users’ data — arguing the company does not have a proper legal basis for the processing.

It has been joined in the action by the Dutch consumer protection not-for-profit, Consumentenbond.

The pair are seeking redress for Facebook users in the Netherlands for alleged violations of their privacy rights — both by suing for compensation for individuals and by calling for Facebook to end the privacy-hostile practices.

European Union law allows for collective redress across a number of areas, including data protection rights, enabling qualified entities to bring representative actions on behalf of rights holders. And the provision looks like an increasingly important tool for furthering privacy enforcement in the bloc, given how European data protection regulators have continued to lack uniform vigor in upholding rights set out in legislation such as the General Data Protection Regulation (which, despite coming into application in 2018, has yet to be seriously applied against platform giants like Facebook).

Returning to the Dutch litigation, Facebook denies any abuse and claims it respects user privacy and provides people with “meaningful control” over how their data gets exploited.

But it has fought the litigation by seeking to block it on procedural grounds — arguing for the suit to be tossed by claiming the DPS does not fit the criteria for bringing a privacy claim on behalf of others and that the Amsterdam court has no jurisdiction as its European business is subject to Irish, rather than Dutch, law.

However the Amsterdam District Court rejected its arguments, clearing the way for the litigation to proceed.

Contacted for comment on the ruling, a Facebook spokesperson told us:

“We are currently reviewing the Court’s decision. The ruling was about the procedural part of the case, not a finding on the merits of the action, and we will continue to defend our position in court. We care about our users in the Netherlands and protecting their privacy is important to us. We build products to help people connect with people and content they care about while honoring their privacy choices. Users have meaningful control over the data that they share on Facebook and we provide transparency around how their data is used. We also offer people tools to access, download, and delete their information and we are committed to the principles of GDPR.”

In a statement today, the Consumentenbond‘s director, Sandra Molenaar, described the ruling as “a big boost for the more than 10 million victims” of Facebook’s practices in the country.

“Facebook has tried to throw up all kinds of legal hurdles and to delay this case as much as possible but fortunately the company has not succeeded. Now we can really get to work and ensure that consumers get what they are entitled to,” she added in the written remarks (translated from Dutch with Google Translate).

In another supporting statement, Dick Bouma, chairman of DPS, added: “This is a nice and important first step for the court. The ruling shows that it pays to take a collective stand against tech giants that violate privacy rights.”

The two not-for-profits are urging Facebook users in the Netherlands to sign up to be part of the representative action (and potentially receive compensation) — saying more than 185,000 people have registered so far.

The suit argues that Facebook users are ‘paying’ for the ‘free’ service with their data — contending the tech giant does not have a valid legal basis to process people’s information because it has not provided users with comprehensive information about the data it is gathering from and on them, nor what it does with it.

So — in essence — the argument is that Facebook’s tracking and targeting is in breach of EU privacy law.

The legal challenge follows an earlier investigation (back in 2014) of Facebook’s business by the Dutch data protection authority which identified problems with its privacy policy and — in a 2017 report — found the company to be processing users’ data without their knowledge or consent.

However, since 2018, Europe’s GDPR has been in application and a ‘one-stop-shop’ mechanism baked into the regulation — to streamline the handling of cross-border cases — has meant complaints against Facebook have been funnelled through Ireland’s Data Protection Commission. The Irish DPC has yet to issue a single decision against Facebook despite receiving scores of complaints. (And it’s notable that ‘forced consent‘ complaints were filed against Facebook the day the GDPR began being applied — yet still remain undecided by Ireland.)

The GDPR’s enforcement bottleneck makes collective redress actions, such as this one in the Netherlands, a potentially important route for Europeans to get rights relief against powerful platforms which seek to shrink the risk of regulatory enforcement via forum shopping.

Although national rules — and courts’ interpretations of them — can vary. So the chance of litigation succeeding is not uniform.

In this case, the Amsterdam court allowed the suit to proceed on the grounds that the Facebook data subjects in question reside in the Netherlands.

It also took the view that a local Facebook corporate entity in the Netherlands is an establishment of Facebook Ireland, among other reasons for rejecting Facebook’s arguments.

How Facebook will seek to press a case against the substance of the Dutch privacy litigation remains to be seen. It may well have other procedural strategies up its sleeve.

The tech giant has used similar stalling tactics against far longer-running privacy litigation in Austria, for example.

In that case, brought by privacy campaigner Max Schrems and his not-for-profit noyb, Facebook has sought to claim that the GDPR’s consent requirements do not apply to its advertising business because it now includes “personalized advertising” in its T&Cs — and therefore has a ‘duty’ to provide privacy-hostile ads to users — seeking to bypass the GDPR by claiming it must process users’ data because it’s “necessary for the performance of a contract”, as noyb explains here.

A court in Vienna accepted this “GDPR consent bypass” sleight-of-hand, dealing a blow to European privacy campaigners.

But an appeal reached the Austrian Supreme Court in March — and a referral could be made to Europe’s top court.

If that happens it would then be up to the CJEU to weigh in on whether such a massive loophole in the EU’s flagship data protection framework should really be allowed to stand. But that process could still take a year or longer.

In the short term, the result is yet more delay for Europeans trying to exercise their rights against platform giants and their in-house armies of lawyers.

In a more positive development for privacy rights, a recent ruling by the CJEU bolstered the case for data protection agencies across the EU to bring actions against tech giants if they see an urgent threat to users — and believe a lead supervisor is failing to act.

That ruling could help unblock some GDPR enforcement against the most powerful tech companies at the regulatory level, potentially reducing the blockages created by bottlenecks such as Ireland.

Facebook’s EU-to-US data flows are also now facing the possibility of a suspension order in a matter of months — related to another piece of litigation brought by Schrems which hinges on the conflict between EU fundamental rights and US surveillance law.

The CJEU weighed in on that last summer with a judgement that requires regulators like Ireland to act when user data is at risk. (And Germany’s federal data protection commissioner, for instance, has warned government bodies to shut their official Facebook pages ahead of planned enforcement action at the start of next year.)

So while Facebook has been spectacularly successful at kicking Europe’s privacy rights claims down the road, for well over a decade, its strategy of legal delay tactics to shield a privacy-hostile business model could finally hit a geopolitical brick wall.

The tech giant has sought to lobby against this threat to its business by suggesting it might switch off its service in Europe if the regulator follows through on a preliminary suspension order issued last year.

But it has also publicly denied it would actually follow through and close service in Europe.

How might Facebook actually comply if ordered to cut off EU data flows? Schrems has argued it may need to federate its service and store European users’ data inside the EU in order to comply with the eponymous Schrems II CJEU ruling.

That said, Facebook has certainly shown itself adept at exploiting the gaps between Europeans’ on-paper rights, national case law and the various EU and Member State institutions involved in oversight and enforcement as a tactic to defend its commercial priorities — playing the different actors off against one another and pushing agendas that further its business interests. So whether any single piece of EU privacy litigation will prove to be the silver bullet that forces a reboot of its privacy-hostile business model very much remains to be seen.

A perhaps more likely scenario is that each of these cases further erodes user trust in Facebook’s services — reducing people’s appetite to use its apps and expanding opportunities for rights-respecting competitors to poach custom by offering something better. 

 

#amsterdam, #austria, #data-protection, #data-protection-commission, #digital-rights, #europe, #european-union, #facebook, #general-data-protection-regulation, #germany, #human-rights, #ireland, #lawsuit, #max-schrems, #netherlands, #noyb, #privacy, #surveillance-law, #vienna

German government bodies urged to remove their Facebook Pages before next year

Germany’s federal information commissioner has run out of patience with Facebook.

Last month, Ulrich Kelber wrote to government agencies “strongly recommend[ing]” they close down their official Facebook Pages because of ongoing data protection compliance problems and the tech giant’s failure to fix the issue.

In the letter, Kelber warns the government bodies that he intends to start taking enforcement action from January 2022 — essentially giving them a deadline of next year to pull their pages from Facebook.

So expect not to see official Facebook Pages of German government bodies in the coming months.

While Kelber’s own agency, the BfDI, does not appear to have a Facebook Page (although Facebook’s algorithms appear to generate an artificial stub page if you try searching for one), plenty of other German federal bodies do — such as the Ministry of Health, whose public page has more than 760,000 followers.

The only alternative to such pages vanishing from Facebook’s platform by Christmas — or else being ordered to be taken down early next year by Kelber — seems to be for the tech giant to make more substantial changes to how its platform operates than it has offered so far, allowing the Pages to be run in Germany in a way that complies with EU law.

However Facebook has a long history of ignoring privacy expectations and data protection laws.

It has also, very recently, shown itself more than willing to reduce the quality of information available to users if doing so furthers its business interests (such as to lobby against a media code law, as users in Australia can attest).

So it looks rather more likely that German government agencies will be the ones having to quietly bow off the platform soon…

Kelber says he’s avoided taking action over the ministries’ Facebook Pages until now on account of the public bodies arguing that their Facebook Pages are an important way for them to reach citizens.

However his letter points out that government bodies must be “role models” in matters of legal compliance — and therefore have “a particular duty” to comply with data protection law. (The EDPS is taking a similar tack by reviewing EU institutions’ use of US cloud services giants.)

Per his assessment, an “addendum” provided by Facebook in 2019 does not rectify the compliance problem and he concludes that Facebook has made no changes to its data processing operations to enable Page operators to comply with requirements set out in the EU’s General Data Protection Regulation.

A ruling by Europe’s top court, back in June 2018, is especially relevant here — as it held that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of the data of visitors to the page.

That means that the operators of such pages also face data protection compliance obligations, and cannot simply assume that Facebook’s T&Cs provide them with legal cover for the data processing the tech giant undertakes.

The problem, in a nutshell, is that Facebook does not provide Page operators with enough information or assurances about how it processes users’ data — meaning they’re unable to comply with GDPR principles of accountability and transparency because, for example, they’re unable to adequately inform followers of their Facebook Page what is being done with their data.

There is also no way for Facebook Page operators to switch off (or otherwise block) Facebook’s wider processing of their Page followers — even if they don’t make use of any of the analytics features Facebook provides to Page operators.

The processing still happens.

This is because Facebook operates a take-it-or-leave it ‘data maximizing’ model — to feed its ad-targeting engines.

But it’s an approach that could backfire if it ends up permanently reducing the quality of the information available on its network because of a mass migration of key services off its platform — if, for example, every government agency in the EU deleted its Facebook Page.

A related blog post on the BfDi’s website also holds out the hope that “data protection-compliant social networks” might develop in the Facebook compliance vacuum.

Certainly there could be a competitive opportunity for alternative platforms that seek to sell services based on respecting users’ rights.

The German Federal Ministry of Health’s verified Facebook Page (Screengrab: TechCrunch/Natasha Lomas)

Discussing the BfDI’s intervention, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, told TechCrunch: “This development is strictly connected to recent CJEU case law on joint controllership. In particular, it takes into account the Wirtschaftsakademie ruling, which found that the administrator of a Facebook page should be considered a joint controller with Facebook in respect of processing the personal data of the visitors of the page.

“This does not mean that the page administrator and Facebook share equal responsibility for all stages of the data processing activities linked to the use of the Facebook page. However, they must have an agreement in place with a clear allocation of roles and responsibilities. According to the German Federal Commissioner for Data Protection and Freedom of Information, Facebook’s current data protection ‘Addendum’ would not seem to be sufficient to meet the latter requirement.”

“It is worth noting that, in its Fashion ID ruling, the CJEU has taken the view that the GDPR’s obligations for joint controllers are commensurate with those data processing stages in which they actually exercise control,” Tosoni added. “This means that the data protection obligations of a Facebook page administrator would normally tend to be quite limited.”

Warnings for other social media services

This particular compliance issue affects Facebook in Germany — and potentially any other EU market. But other social media services may face similar problems too.

For example, Kelber’s letter flags an ongoing audit of Instagram, TikTok and Clubhouse — warning of “deficits” in the level of data protection they offer too.

He goes on to recommend that agencies avoid using the three apps on business devices.  

In an earlier, 2019 assessment of government bodies’ use of social media services, the BfDI suggested usage of Twitter could — by contrast — be compliant with data protection rules, at least if privacy settings were fully enabled and analytics disabled, for example.

At the time the BfDi also warned that Facebook-owned Instagram faced similar compliance problems to Facebook, being subject to the same “abusive” approach to consent he said was taken by the whole group.

Reached for comment on Kelber’s latest recommendations to government agencies, Facebook did not engage with our specific questions — sending us this generic statement instead:

“At the end of 2019, we updated the Page Insights addendum and clarified the responsibilities of Facebook and Page administrators, for which we took questions regarding transparency of data processing into account. It is important to us that also federal agencies can use Facebook Pages to communicate with people on our platform in a privacy-compliant manner.”

An additional complication for Facebook has arisen in the wake of the legal uncertainty following last summer’s Schrems II ruling by the CJEU.

Europe’s top court invalidated the EU-US Privacy Shield arrangement, which had allowed companies to self-certify an adequate level of data protection, removing the easiest route for transferring EU users’ personal data over to the US. And while the court did not outlaw international transfers of EU users’ personal data altogether, it made it clear that data protection agencies must intervene and suspend data flows if they suspect information is being moved to a place, and in such a way, that it’s put at risk.

Following Schrems II, transfers to the US are clearly problematic where the data is being processed by a US company that’s subject to FISA 702, as is the case with Facebook.

Indeed, Facebook’s EU-to-US data transfers were the original target of the complaint in the Schrems II case (brought by the eponymous Max Schrems). And a decision remains pending — due in the coming months — on whether the tech giant’s lead EU data supervisor will follow through on a preliminary order it issued last year that the company should suspend its EU data flows.

Even ahead of that long-anticipated reckoning in Ireland, other EU DPAs are now stepping in to take action — and Kelber’s letter references the Schrems II ruling as another issue of concern.

Tosoni agrees that GDPR enforcement is finally stepping up a gear. But he also suggested that compliance with the Schrems II ruling comes with plenty of nuance, given that each data flow must be assessed on a case by case basis — with a range of supplementary measures that controllers may be able to apply.

“This development also shows that European data protection authorities are getting serious about enforcing the GDPR data transfer requirements as interpreted by the CJEU in Schrems II, as the German Federal Commissioner for Data Protection and Freedom of Information flagged this as another pain point,” he said.

“However, the German Federal Commissioner sent out his letter on the use of Facebook pages a few days before the EDPB adopted the final version of its recommendations on supplementary measures for international data transfers following the CJEU Schrems II ruling. Therefore, it remains to be seen how German data protection authorities will take these new recommendations into account in the context of their future assessment of the GDPR compliance of the use of Facebook pages by German public authorities.

“Such recommendations do not establish a blanket ban on data transfers to the US but impose the adoption of stringent safeguards, which will need to be followed to keep on transferring the data of German visitors of Facebook pages to the US.”

Another recent judgment by the CJEU reaffirmed that EU data protection agencies can, in certain circumstances, take action when they are not the lead data supervisor for a specific company under the GDPR’s one-stop-shop mechanism — expanding the possibility for litigation by watchdogs in Member States if a local agency believes there’s an urgent need to act.

Although, in the case of the German government bodies’ use of Facebook Pages, the earlier CJEU ruling on joint controllership means the BfDI already has clear jurisdiction to target these agencies’ Facebook Pages itself.

 

#advertising-tech, #australia, #cjeu, #data-processing, #data-protection, #data-security, #digital-rights, #eu-us-privacy-shield, #europe, #european-union, #facebook, #facebook-pages, #general-data-protection-regulation, #germany, #instagram, #ireland, #law, #max-schrems, #policy, #privacy, #twitter, #united-states

UK gets data flows deal from EU — for now

The UK’s digital businesses can breathe a sigh of relief today as the European Commission has officially signed off on data adequacy for the (now) third country, post-Brexit.

It’s a big deal for UK businesses as it means the country will be treated by Brussels as having essentially equivalent data protection rules as markets within the bloc, despite no longer being a member itself — enabling personal data to continue to flow freely from the EU to the UK, and avoiding any new legal barriers.

The granting of adequacy status has been all but assured in recent weeks, after European Union Member States signed off on a draft adequacy arrangement. But the Commission’s adoption of the decision marks the final step in the process — at least for now.

It’s notable that the Commission’s PR includes a clear warning that if the UK seeks to weaken protections afforded to people’s data under the current regime it “will intervene”.

In a statement, Věra Jourová, Commission VP for values and transparency, said:

“The UK has left the EU but today its legal regime of protecting personal data is as it was. Because of this, we are adopting these adequacy decisions today. At the same time, we have listened very carefully to the concerns expressed by the Parliament, the Members States and the European Data Protection Board, in particular on the possibility of future divergence from our standards in the UK’s privacy framework. We are talking here about a fundamental right of EU citizens that we have a duty to protect. This is why we have significant safeguards and if anything changes on the UK side, we will intervene.”

The UK adequacy decision comes with a Sword of Damocles baked in: A sunset clause of four years. It’s a first — so, er, congratulations to the UK government for projecting a perception of itself as untrustworthy over the short run.

This clause means the UK’s regime will face full scrutiny again in 2025, with no automatic continuation if its standards are deemed to have slipped (as many fear they will).

The Commission also emphasizes that its decision does not mean the UK has four ‘guaranteed’ years in the clear. On the contrary, it says it will “continue to monitor the legal situation in the UK and could intervene at any point, if the UK deviates from the level of protection currently in place”.

Third countries without an adequacy agreement — such as the US, which has had its adequacy arrangements twice struck down by Europe’s top court (after it found US surveillance law incompatible with EU fundamental rights) — do not enjoy ‘seamless’ legal certainty around personal data flows; data exporters must instead assess each such transfer individually to determine whether (and how) they can move data legally.

Last week, the European Data Protection Board (EDPB) put out its final guidance on transferring personal data out of the bloc to third countries. And the advice makes it clear that some types of transfers are unlikely to be possible.

For other types of transfers, the advice discusses a number of supplementary measures (including technical steps like robust encryption) that a data controller may be able to use — through its own technical, contractual and organizational efforts — to raise the level of protection to the required standard.
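As a rough illustration of what a technical supplementary measure can look like in practice, the sketch below uses the third-party Python cryptography package (chosen here purely as an example) to encrypt personal data inside the EU before it is transferred, so the importer only ever handles ciphertext. Whether such a measure is actually sufficient depends on the specific transfer, and in particular on the encryption keys remaining under the exporter’s control in the EU.

```python
# Minimal sketch of one 'technical supplementary measure': encrypting personal
# data in the EU before transfer, so the non-EU importer only sees ciphertext.
# Uses the third-party 'cryptography' package; key management is omitted here
# and would be decisive in any real-world transfer assessment.
from cryptography.fernet import Fernet

# The key is generated and retained inside the EU (e.g. in an EU-based KMS).
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"user_id": 12345, "email": "example@example.eu"}'

ciphertext = fernet.encrypt(record)     # this is what actually crosses the border
# ... the importer stores or relays the ciphertext without ever seeing the key ...

plaintext = fernet.decrypt(ciphertext)  # only possible where the key is held
assert plaintext == record
```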

It is, in short, a lot of work. And without today’s adequacy decision UK businesses would have had to get intimately acquainted with the EDPB’s guidance. For now, though, they’ve dodged that bullet.

The qualifier is still very necessary, though, because the UK government has signalled that it intends to rethink data protection.

How exactly it goes about that — and to what extent it changes the current ‘essentially equivalent’ regime — may make all the difference. For example, Digital minister Oliver Dowden has talked about data being “a great opportunity” for the UK, post-Brexit.

And writing in the FT back in February he suggested there will be room for the UK to rewrite its national data protection rules without diverging so much that it puts adequacy at risk. “We fully intend to maintain those world-class standards. But to do so, we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation, word-for-word,” he suggested then, adding that: “Countries as diverse as Israel and Uruguay have successfully secured adequacy with Brussels despite having their own data regimes. Not all of those were identical to GDPR, but equal doesn’t have to mean the same. The EU doesn’t hold the monopoly on data protection.”

The devil will, as they say, be in the detail. But some early signals are concerning — and the UK’s startup ecosystem would be well advised to take an active role in impressing upon government the importance of staying aligned with European data standards.

Moreover, there’s also the prospect of a legal challenge to the adequacy decision — even as is, i.e. based on current UK standards (which find plenty of critics). Certainly it can’t be ruled out — and the CJEU hasn’t shied away from quashing other adequacy arrangements it judged to be invalid…

Today, though, the Department for Digital, Culture, Media and Sport (DCMS) has seized the chance to celebrate a PR win, writing that the Commission’s decision “rightly recognises the country’s high data protection standards”.

The department also reiterated the UK government’s intention to “promote the free flow of personal data globally and across borders”, including through what it bills as “ambitious new trade deals and through new data adequacy agreements with some of the fastest growing economies” — simultaneously claiming it would do so “while ensuring people’s data continues to be protected to a high standard”. Pinky promise.

“All future decisions will be based on what maximises innovation and keeps up with evolving tech,” the DCMS added in a press release. “As such, the government’s approach will seek to minimise burdens on organisations seeking to use data to tackle some of the most pressing global issues, including climate change and the prevention of disease.”

In a statement, Dowden also made a point of combining both streams, saying: “We will now focus on unlocking the power of data to drive innovation and boost the economy while making sure we protect people’s safety and privacy.”

UK business and tech associations were just as quick to welcome the Commission’s adequacy decision. The alternative would of course have been very costly disruption.

In a statement, John Foster, director of policy for the Confederation of British Industry, said: “This breakthrough in the EU-UK adequacy decision will be welcomed by businesses across the country. The free flow of data is the bedrock of the modern economy and essential for firms across all sectors — from automotive to logistics — playing an important role in everyday trade of goods and services. This positive step will help us move forward as we develop a new trading relationship with the EU.”

In another supporting statement, Julian David, CEO of techUK, added: “Securing an EU-UK adequacy decision has been a top priority for techUK and the wider tech industry since the day after the 2016 referendum. The decision that the UK’s data protection regime offers an equivale