InfoSum touts an identity linking tool that’s exciting marketing firms like Experian

InfoSum, a startup which takes a federated approach to third party data enrichment, has launched a new product (called InfoSum Bridge) that it says significantly expands the customer identity linking capabilities of its platform.

“InfoSum Bridge incorporates multiple identity providers across every identity type — both online and offline, in any technical framework — including deterministic, probabilistic, and cohort-level matches,” it writes in a press release.

It’s also disclosing some early adopters of the product — naming data-for-ads and data-aggregator giants Merkle, MMA and Experian as dipping in.

Idea being they can continue to enrich (first party) data by being able to make linkages, via InfoSum’s layer, with other ‘trusted partners’ who may have gleaned more tidbits of info on those self-same users.

InfoSum says it has 50 enterprise customers using InfoSum Bridge at this point. The three companies it’s named in the release all play in the digital marketing space.

The 2016-founded startup (then called CognitiveLogic) sells customers a promise of ‘privacy-safe’ data enrichment run via a technical architecture that allows queries to be run — and insights gleaned — across multiple databases yet maintains each pot as a separate silo. This means the raw data isn’t being passed around between interested entities. 
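InfoSum hasn’t published its internals, but the general pattern it describes — federated aggregation, where each silo answers a query locally and only aggregate results ever leave — can be sketched roughly like this (a minimal illustration; all the data and names here are hypothetical, not InfoSum’s actual implementation):

```python
# Hypothetical sketch of federated aggregation: each data holder keeps its
# raw records in its own silo and only an aggregate count leaves each one.
# This illustrates the general pattern, not InfoSum's actual system.

def local_count(silo, predicate):
    """Run the query inside one silo; only the count leaves the silo."""
    return sum(1 for record in silo if predicate(record))

def federated_count(silos, predicate):
    """Combine per-silo aggregates without ever pooling the raw data."""
    return sum(local_count(silo, predicate) for silo in silos)

# Two parties' customer databases stay separate...
retailer = [{"email_hash": "a1", "bought_shoes": True},
            {"email_hash": "b2", "bought_shoes": False}]
publisher = [{"email_hash": "a1", "bought_shoes": True},
             {"email_hash": "c3", "bought_shoes": True}]

# ...yet a shared insight can still be computed across both.
total = federated_count([retailer, publisher],
                        lambda r: r["bought_shoes"])
print(total)  # 3
```

Note the raw records never move between the two lists — only the counts do, which is the core of the “separate silos” pitch.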

Why is that important? Third party data collection is drying up, after one (thousand) too many privacy scandals in recent years — combined with the legal risk attached to background trading of people’s data as a result of data protection regimes like Europe’s General Data Protection Regulation.

That puts the spotlight squarely on first party data. However businesses whose models have been dependent on access to big data about people — i.e. being able to make scores of connections by joining up information on people from different databases/sources (aka profiling) — are unlikely to be content with relying purely on what they’ve been able to learn by themselves.

This is where InfoSum comes in, billing itself as a “neutral data collaboration platform”.

Companies that may have been accustomed to getting their hands on lashings of personal data in years past, as a result of rampant, industry-wide third party data collection (via technologies like tracking cookies) combined with (ahem) lax data governance — are having to cast around for alternatives. And that appears to be stoking InfoSum’s growth.

And on the marketing front, remember, third party cookies are in the process of going away as Google tightens that screw.

“We are growing faster than Slack (at equivalent stage e.g. Series A->B) because we are the one solution that is replacing the old way of doing things,” founder Nick Halstead tells TechCrunch. “Experian, Liveramp, Axciom, TransUnion, they all offer solutions to take your data. InfoSum is offering the equivalent of the ‘Cisco router for customer data’ — we don’t own the data we are just selling boxes to make it all connect.”

“The announcement today — ‘InfoSum Bridge’ — is the next generation of building the ultimate network to ‘Bridge the industry chasm’ it has right now of 100’s of competing ID’s, technical solutions and identity types, bringing a infrastructure approach,” he adds.

We took a deep dive into InfoSum’s first product back in 2018 — when it was just offering early adopters a glimpse of the “art of the possible”, as it put it then.

Three+ years on it’s touting a significant expansion of its pipeline, having baked in support for multiple ID vendors/types, as well as adding probabilistic capabilities (to do matching on users where there is no ID).

Per a spokesman: “InfoSum Bridge is an extension of our existing and previous infrastructure. It enables a significant expansion of both our customer identity linking, and the limits of what is possible for data collaboration in a secure and privacy-focused manner. This is a combination of new product enhancements and announcement of partnerships. We’ve built capabilities to support across all ID vendors and types but also probabilistic and support for those publishers with unauthenticated audiences.”

InfoSum bills its platform as “the future of identity connectivity”. Although, as Halstead notes, there is now growing competition for that concept, as the adtech industry scrambles to build out alternative tracking systems and ID services ahead of Google crushing their cookies for good.

But it’s essentially making a play to be the trusted, independent layer that can link them all.

Exactly what this technical wizardry means for Internet users’ privacy is difficult to say. If, for example, it continues to enable manipulative microtargeting that’s hardly going to sum to progress.

InfoSum has previously told us its approach is designed to avoid individuals being linked and identified via the matching — with, for example, limits placed on the bin sizes. Although its platform is also configurable (which puts privacy levers in its customers’ hands). Plus there could be edge cases where overlapped datasets result in a 100% match for an individual. So a lot is unclear.
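The “bin size” safeguard InfoSum describes resembles a k-anonymity threshold: any result bucket covering fewer than k individuals is withheld rather than returned. A minimal sketch of that idea (the threshold and segment names are illustrative assumptions, not InfoSum’s actual parameters):

```python
# Illustrative k-anonymity-style bin suppression: aggregate bins covering
# fewer than MIN_BIN_SIZE people are withheld, so a query result can't
# single out an individual. The threshold is an assumed value, not InfoSum's.
MIN_BIN_SIZE = 50

def release_bins(counts, k=MIN_BIN_SIZE):
    """Return only bins large enough to avoid identifying individuals."""
    return {label: n for label, n in counts.items() if n >= k}

overlap = {"segment_a": 1200, "segment_b": 49, "segment_c": 50}
released = release_bins(overlap)
print(released)  # segment_b (49 people) is suppressed as too small
```

The “edge case” concern in the paragraph above maps directly onto this: if an overlapped query ever produces a bin of size one, suppression is the only thing standing between aggregation and identification.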

The security story looks cleaner, though.

If the data is properly managed by InfoSum (and it touts “comprehensive independent audits”, as well as pointing to the decentralized architecture as an advantage) that’s a big improvement on — at least — one alternative scenario of whole databases being passed around between businesses which may be (to put it politely) uninterested in securing people’s data themselves.

InfoSum’s PR includes the three canned quotes (below) from the trio of marketing industry users it’s disclosing today.

All of whom sound very happy indeed that they’ve found a way to keep their “data-driven” marketing alive while simultaneously getting to claim it’s “privacy-safe”…

John Lee, Global Chief Strategy Officer, Merkle: “The conversation around identity is continuing to be top of mind for marketers across the industry, and as the landscape rapidly changes, it’s essential that brands have avenues to work together using first-party identity and data in a privacy-safe way. The InfoSum Bridge solution provides our clients and partners a way to collaborate using their first-party data, resolved to Merkury IDs and data, with even greater freedom and confidence than with traditional clean room or safe haven approaches.”

Lou Paskalis, Chairman, MMA Global Media and Data Board: “As marketers struggle to better leverage their first-party data in the transition from the cookie era to the consent era, I would have expected more innovative solutions to emerge.  One bright spot is InfoSum, which offers a proprietary technology to connect data, yet never share that data. This is the most customer-friendly and compliant technology that I’ve seen that enables marketers to fully realize the true potential of their first party data. What InfoSum has devised is an elegant way to respect consumers’ privacy choices while enabling marketers to realize the full benefit of their first party data.”

Colin Grieves, Managing Director Experian: “At Experian we are committed to a culture of customer-centric data innovation, helping develop more meaningful and seamless connections between brands and their audiences. InfoSum Bridge gives us a scalable environment for secure, data connectivity and collaboration. Bridge is at the core of the Experian Match offering, which allows brands and publishers alike the ability to understand and engage the right consumers in the digital arena at scale, whilst safeguarding consumer data and privacy.”

Thing is, clever technical architecture that enables big data fuelled modelling and profiling of people to continue, via pattern matching to identify ‘lookalike’ customers who can (for example) be bucketed and targeted with ads, doesn’t actually sum to privacy as most people would understand it… But, for sure, impressive tech architecture guys.

The same issue attaches to FLoC, Google’s proposed replacement for tracking cookies — which also relies on federation (and which the EFF has branded a “terrible idea”, warning that such an approach actually risks amplifying predatory targeting).

The tenacity with which the marketing industry seeks to cling to microtargeting does at least underline why rights-focused regulatory oversight of adtech is going to be essential if we’re to stamp out systematic societal horrors like ads that scale bias by discriminating against protected groups, or the anti-democratic manipulation of voters that’s enabled by opaque targeting and hyper-targeted messaging, circumventing the necessary public scrutiny.

Tl;dr: Privacy is not just important for the individual. It’s a collective good. And keeping that collective commons safe from those who would seek to exploit it — for a quick buck or worse — is going to require a whole other type of oversight architecture.

#advertising-tech, #big-data, #data-protection, #digital-marketing, #europe, #experian, #google, #infosum, #liveramp, #marketing, #merkle, #mma, #nick-halstead, #online-advertising, #privacy, #targeted-advertising


UK PM Boris Johnson’s Tories guilty of spamming voters

The governing party of the UK has been fined £10k by the national data protection watchdog for sending spam.

The Information Commissioner’s Office (ICO) has sanctioned the Conservative Party following an investigation triggered by complaints from 51 recipients of unwanted marketing emails sent in the name of prime minister Boris Johnson.

The emails in question were sent during eight days in July 2019 after Johnson had been elected as Party leader (and also therefore became UK PM) — urging the recipients to click on a link that directed them to a website for joining the Conservative Party.

Direct marketing is regulated in the UK by PECR (the Privacy and Electronic Communications Regulations) — which requires senders to obtain individual consent to distribute digital marketing missives.

But the ICO’s investigation found that the Conservative Party lacked written policies addressing PECR and appeared to be operating under the misguided assumption that their “legitimate interests” overrode the legal requirements related to sending this type of direct marketing.

The Party had also switched bulk email provider — during which unsubscribe records were apparently lost. But of course that’s not an excuse for breaking the law. (Indeed, record-keeping is a core requirement of UK data protection law, especially since the EU General Data Protection Regulation was transposed into national law back in 2018.) And the ICO found the Tories were unable to adequately explain what had gone wrong.

In another damning twist, the Conservative Party had been subject to what the ICO calls “detailed engagement” at the time it was spamming people.

This was a result of wider action by the regulator, looking into the ecosystem and ethics around online political ads in the wake of the Cambridge Analytica scandal — and the Party had already been warned of inadequate standards in its compliance with data protection and privacy law. But it went ahead and spammed people anyway. 

So while ‘only’ 51 complaints were received by the ICO from individual recipients of Boris Johnson’s spam, the ICO found the Tories could not fully demonstrate they had the proper consents for over a million (1,190,280) direct marketing emails sent between July 24 and 31 2019. (The ICO takes the view that at least 549,030 of those, which were sent to non-Party members, were “inherently likely” to have the same compliance issues as were identified with the emails sent to the 51 complainants.)

Moreover, the Party continued to have scant regard for the law as it spun up its spam engines ahead of the 2019 General Election — which saw Johnson gain a landslide majority of 80 seats in a winter ballot.

“During the course of the Commissioner’s investigation, the Party proceeded to engage in an industrial-scale direct marketing email exercise during the 2019 General Election campaign, sending nearly 23M emails,” the ICO notes. “This generated a further 95 complaints to the Commissioner, which are likely to have resulted from the Party’s failure to address the compliance issues identified in the Commissioner’s investigation into the July 2019 email campaign and the wider audit of the Party’s processing of personal data.”

Its report also chronicles “extensive delays” by the Conservative Party in responding to its requests for information and clarification — so while it was not found to have obstructed the investigation the regulator does write that its conduct “cannot be characterised as a mitigating factor”.

While the ICO penalty is an embarrassing slap for Boris Johnson’s Tories, a data audit of all the main UK political parties it put out last year spared no blushes — with all parties found wanting in how they handle and safeguard voter information.

However it’s the Conservatives whose fast and loose attitude toward people’s data and privacy online could have contributed to their ability to consolidate power at the last election.

#boris-johnson, #cambridge-analytica, #computing, #conservative-party, #data-protection, #data-protection-law, #data-security, #digital-marketing, #email, #european-union, #general-data-protection-regulation, #general-election, #leader, #marketing, #spamming, #tc, #united-kingdom


Europe’s cookie consent reckoning is coming

Cookie pop-ups getting you down? Complaints that the web is ‘unusable’ in Europe because of frustrating and confusing ‘data choices’ notifications that get in the way of what you’re trying to do online certainly aren’t hard to find.

What is hard to find is the ‘reject all’ button that lets you opt out of non-essential cookies which power unpopular stuff like creepy ads. Yet the law says there should be an opt-out clearly offered. So people who complain that EU ‘regulatory bureaucracy’ is the problem are taking aim at the wrong target.

EU law on cookie consent is clear: Web users should be offered a simple, free choice — to accept or reject.

The problem is that most websites simply aren’t compliant. They choose to make a mockery of the law by offering a skewed choice: Typically a super simple opt-in (to hand them all your data) vs a highly confusing, frustrating, tedious opt-out (and sometimes even no reject option at all).

Make no mistake: This is ignoring the law by design. Sites are choosing to try to wear people down so they can keep grabbing their data by only offering the most cynically asymmetrical ‘choice’ possible.

However, since that’s not how cookie consent is supposed to work under EU law, sites that are doing this are opening themselves up to large fines under the General Data Protection Regulation (GDPR) and/or the ePrivacy Directive for flouting the rules.

See, for example, these two whopping fines handed to Google and Amazon in France at the back end of last year for dropping tracking cookies without consent…

While those fines were certainly head-turning, we haven’t generally seen much EU enforcement on cookie consent — yet.

This is because data protection agencies have mostly taken a softly-softly approach to bringing sites into compliance. But there are signs enforcement is going to get a lot tougher. For one thing, DPAs have published detailed guidance on what proper cookie compliance looks like — so there are zero excuses for getting it wrong.

Some agencies had also been offering compliance grace periods to allow companies time to make the necessary changes to their cookie consent flows. But it’s now a full three years since the EU’s flagship data protection regime (GDPR) came into application. So, again, there’s no valid excuse to still have a horribly cynical cookie banner. It just means a site is trying its luck by breaking the law.

There is another reason to expect cookie consent enforcement to dial up soon, too: European privacy group noyb is today kicking off a major campaign to clean up the trashfire of non-compliance — with a plan to file up to 10,000 complaints against offenders over the course of this year. And as part of this action it’s offering freebie guidance for offenders to come into compliance.

Today it’s announcing the first batch of 560 complaints already filed against sites, large and small, located all over the EU (33 countries are covered). noyb said the complaints target companies that range from large players like Google and Twitter to local pages “that have relevant visitor numbers”.

“A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button,” said noyb chair and long-time EU privacy campaigner, Max Schrems, in a statement.

“Instead of giving a simple yes or no option, companies use every trick in the book to manipulate users. We have identified more than fifteen common abuses. The most common issue is that there is simply no ‘reject’ button on the initial page,” he added. “We focus on popular pages in Europe. We estimate that this project can easily reach 10,000 complaints. As we are funded by donations, we provide companies a free and easy settlement option — contrary to law firms. We hope most complaints will quickly be settled and we can soon see banners become more and more privacy friendly.”

To scale its action, noyb developed a tool which automatically parses cookie consent flows to identify compliance problems (such as no opt-out being offered at the top layer; or confusing button coloring; or bogus ‘legitimate interest’ opt-ins, to name a few of the many chronicled offences); and automatically creates a draft report which can be emailed to the offender after it’s been reviewed by a member of the not-for-profit’s legal staff.
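noyb hasn’t open-sourced the tool, but the kind of automated check it describes — flagging a banner with no reject option on the first layer, pre-ticked ‘legitimate interest’ boxes, or lopsided button styling — could be sketched along these lines (the banner model and rules here are simplified assumptions, not noyb’s actual detection logic):

```python
# Simplified sketch of automated cookie-banner compliance checks, in the
# spirit of noyb's tool. The banner dict and rules are assumptions; the
# real tool parses live consent-management-platform flows.

def check_banner(banner):
    """Return a list of detected (hypothetical) compliance issues."""
    issues = []
    buttons = [b.lower() for b in banner.get("first_layer_buttons", [])]
    if not any("reject" in b or "decline" in b for b in buttons):
        issues.append("no reject option on the first layer")
    if banner.get("preticked_legitimate_interest"):
        issues.append("pre-checked 'legitimate interest' opt-ins")
    if banner.get("accept_color") != banner.get("reject_color"):
        issues.append("deceptive button colors/contrast")
    return issues

banner = {"first_layer_buttons": ["Accept all", "Settings"],
          "preticked_legitimate_interest": True,
          "accept_color": "green", "reject_color": "grey"}
found = check_banner(banner)
print(found)  # all three issues trip on this (fairly typical) banner
```

Each flagged issue would then feed a templated draft report for a human lawyer to review — the automation handles the detection at scale, not the legal judgement.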

It’s an innovative, scalable approach to tackling systematically cynical cookie manipulation in a way that could really move the needle and clean up the trashfire of horrible cookie pop-ups.

noyb is even giving offenders a warning first — and a full month to clean up their ways — before it will file an official complaint with their relevant DPA (which could lead to an eye-watering fine).

Its first batch of complaints is focused on the OneTrust consent management platform (CMP), one of the most popular template tools used in the region — and which European privacy researchers have previously shown (cynically) provides its client base with ample options to set non-compliant choices like pre-checked boxes… Talk about taking the biscuit.

A noyb spokeswoman said it’s started with OneTrust because its tool is popular but confirmed the group will expand the action to cover other CMPs in the future.

The first batch of noyb’s cookie consent complaints reveal the rotten depth of dark patterns being deployed — with 81% of the 500+ pages not offering a reject option on the initial page (meaning users have to dig into sub-menus to try to find it); and 73% using “deceptive colors and contrasts” to try to trick users into clicking the ‘accept’ option.

noyb’s assessment of this batch also found that a full 90% did not provide a way to easily withdraw consent as the law requires.

Cookie compliance problems found in the first batch of sites facing complaints (Image credit: noyb)

It’s a snapshot of truly massive enforcement failure. But dodgy cookie consents are now operating on borrowed time.

Asked if it was able to work out how prevalent cookie abuse might be across the EU based on the sites it crawled, noyb’s spokeswoman said it was difficult to determine, owing to technical difficulties encountered through its process, but she said an initial intake of 5,000 websites was whittled down to 3,600 sites to focus on. And of those it was able to determine that 3,300 violated the GDPR.

That still left 300 — as either having technical issues or no violations — but, again, the vast majority (90%) were found to have violations. And with so much rule-breaking going on it really does require a systematic approach to fixing the ‘bogus consent’ problem — so noyb’s use of automation tech is very fitting.

More innovation is also on the way from the not-for-profit — which told us it’s working on an automated system that will allow Europeans to “signal their privacy choices in the background, without annoying cookie banners”.

At the time of writing it couldn’t provide us with more details on how that will work (presumably it will be some kind of browser plug-in) but said it will be publishing more details “in the next weeks” — so hopefully we’ll learn more soon.

A browser plug-in that can automatically detect and select the ‘reject all’ button (even if only from a subset of the most prevalent CMPs) sounds like it could revive the ‘do not track’ dream. At the very least, it would be a powerful weapon to fight back against the scourge of dark patterns in cookie banners and kick non-compliant cookies to digital dust.


#advertising-tech, #cookie-consent, #data-protection, #eprivacy, #europe, #european-union, #gdpr, #general-data-protection-regulation, #max-schrems, #noyb, #policy, #privacy, #tc


EU bodies’ use of US cloud services from AWS, Microsoft being probed by bloc’s privacy chief

Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants Amazon and Microsoft, under so-called Cloud II contracts inked earlier between European bodies, institutions and agencies and the two companies.

A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.

Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.

In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).

That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.

Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.

Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”

Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.

The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.

The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).

Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.

The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.

To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.

Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.

How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.

The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).

Fixing U.S. surveillance law, meanwhile — so that it gets independent oversight and accessible redress mechanisms for non-citizens in order to no longer be considered a threat to EU people’s data, as the CJEU judges have repeatedly found — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.

Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.

#amazon, #aws, #cloud, #cloud-services, #data-protection, #data-protection-law, #data-security, #eu-us-privacy-shield, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #lawyer, #max-schrems, #microsoft, #privacy, #surveillance-law, #united-states, #wojciech-wiewiorowski


European Parliament amps up pressure on EU-US data flows and GDPR enforcement

European Union lawmakers are facing further pressure to step in and do something about lackadaisical enforcement of the bloc’s flagship data protection regime after the European Parliament voted yesterday to back a call urging the Commission to start an infringement proceeding against Ireland’s Data Protection Commission (DPC) for not “properly enforcing” the regulation.

The Commission and the DPC have been contacted for comment on the parliament’s call.

Last summer the Commission’s own two-year review of the General Data Protection Regulation (GDPR) highlighted a lack of uniformly vigorous enforcement — but commissioners were keener to point out the positives, lauding the regulation as a “global reference point”.

But it’s now nearly three years since the regulation began being applied and criticism over weak enforcement is getting harder for the EU’s executive to ignore.

The parliament’s resolution — which, while non-legally binding, fires a strong political message across the Commission’s bow — singles out the DPC for specific criticism given its outsized role in enforcement of the General Data Protection Regulation (GDPR). It’s the lead supervisory authority for complaints brought against the many big tech companies which choose to site their regional headquarters in the country (on account of its corporate-friendly tax system).

The text of the resolution expresses “deep concern” over the DPC’s failure to reach a decision on a number of complaints against breaches of the GDPR filed the day it came into application, on May 25, 2018 — including against Facebook and Google — and criticises the Irish data watchdog for interpreting ‘without delay’ in Article 60(3) of the GDPR “contrary to the legislators’ intention – as longer than a matter of months”, as they put it.

To date the DPC has only reached a final decision on one cross-border GDPR case — against Twitter.

The parliament also says it’s “concerned about the lack of tech specialists working for the DPC and their use of outdated systems” (which Brave also flagged last year) — as well as criticizing the watchdog’s handling of a complaint originally brought by privacy campaigner Max Schrems years before the GDPR came into application, which relates to the clash between EU privacy rights and US surveillance laws, and which still hasn’t resulted in a decision.

The DPC’s approach to handling Schrems’ 2013 complaint led to a 2018 referral to the CJEU — which in turn led to the landmark Schrems II judgement last summer invalidating the flagship EU-US data transfer arrangement, Privacy Shield.

That ruling did not outlaw alternative data transfer mechanisms but made it clear that EU DPAs have an obligation to step in and suspend data transfers if Europeans’ information is being taken to a third country that does not have essentially equivalent protections to those they have under EU law — thereby putting the ball back in the DPC’s court on the Schrems complaint.

The Irish regulator then sent a preliminary order to Facebook to suspend its data transfers and the tech giant responded by filing for a judicial review of the DPC’s processes. However the Irish High Court rejected Facebook’s petition last week. And a stay on the DPC’s investigation was lifted yesterday — so the DPC’s process of reaching a decision on the Facebook data flows complaint has started moving again.

A final decision could still take several months more, though — as we’ve reported before — as the DPC’s draft decision will also need to be put to the other EU DPAs for review and the chance to object.

The parliament’s resolution states that it “is worried that supervisory authorities have not taken proactive steps under Article 61 and 66 of the GDPR to force the DPC to comply with its obligations under the GDPR”, and — in more general remarks on the enforcement of GDPR around international data transfers — it states that it:

Is concerned about the insufficient level of enforcement of the GDPR, particularly in the area of international transfers; expresses concerns at the lack of prioritisation and overall scrutiny by national supervisory authorities with regard to personal data transfers to third countries, despite the significant CJEU case law developments over the past five years; deplores the absence of meaningful decisions and corrective measures in this regard, and urges the EDPB [European Data Protection Board] and national supervisory authorities to include personal data transfers as part of their audit, compliance and enforcement strategies; points out that harmonised binding administrative procedures on the representation of data subjects and admissibility are needed to provide legal certainty and deal with crossborder complaints;

The knotty, multi-year saga of Schrems’ Facebook data-flows complaint, as played out via the procedural twists of the DPC and Facebook’s lawyers’ delaying tactics, illustrates the multi-layered legal, political and commercial complexities bound up with data flows out of the EU (post-Snowden’s 2013 revelations of US mass surveillance programs) — not to mention the staggering challenge for EU data subjects to actually exercise the rights they have on paper. But these intersecting issues around international data flows do seem to be finally coming to a head, in the wake of the Schrems II CJEU ruling.

The clock is now ticking for the issuing of major data suspension orders by EU data protection agencies, with Facebook’s business first in the firing line.

Other US-based services that are — similarly — subject to the US’ FISA regime (and also move EU users’ data over the pond for processing; and whose businesses are such they cannot shield user data via ‘zero access’ encryption architecture) are equally at risk of receiving an order to shut down their EU-US data-pipes. Or else having to shift data processing for these users inside the EU.

US-based services aren’t the only ones facing increasing legal uncertainty, either.

The UK, post-Brexit, is also classed as a third country (in EU law terms). And in a separate resolution today the parliament adopted a text on the UK adequacy agreement, granted earlier this year by the Commission, which raises objections to the arrangement — including by flagging a lack of GDPR enforcement in the UK as problematic.

On that front the parliament highlights how adtech complaints filed with the ICO have failed to yield a decision. (It writes that it’s concerned “non-enforcement is a structural problem” in the UK — which it suggests has left “a large number of data protection law breaches… [un]remedied”.)

It also calls out the UK’s surveillance regime, questioning its compatibility with the CJEU’s requirements for essential equivalence — while also raising concerns about the risk that the UK could undermine protections on EU citizens’ data via onward transfers to jurisdictions the EU does not have an adequacy agreement with, among other objections.

The Commission put a four-year lifespan on the UK’s adequacy deal — meaning there will be another major review ahead of any continuation of the arrangement in 2025.

It’s a far cry from the ‘hands-off’ fifteen years the EU-US ‘Safe Harbor’ agreement stood for, before a Schrems challenge finally led to the CJEU striking it down back in 2015. So the takeaway here is that data deals that allow for people’s information to leave Europe aren’t going to be allowed to stand unchecked for years; close scrutiny and legal accountability are now firmly up front — and will remain in the frame going forward.

The global nature of the Internet and the ease with which data can digitally flow across borders of course brings huge benefits for businesses — but the resulting interplay between different legal regimes is leading to increasing levels of legal uncertainty for companies seeking to take people’s data across borders.

In the EU’s case, the issue is that data protection is regulated within the bloc and these laws require that protection stays with people’s information, no matter where it goes. So if the data flows to countries that do not offer the same safeguards — be that the US or indeed China or India (or even the UK) — then the risk is that it can’t, legally, be taken there.

How to resolve this clash, between data protection laws based on individual privacy rights and data access mandates driven by national security priorities, has no easy answers.

For the US, and for the transatlantic data flows between the EU and the US, the Commission has warned there will be no quick fix this time — as happened when it slapped a sticking plaster atop the invalidated Safe Harbor, hailing a new ‘Privacy Shield’ regime; only for the CJEU to blast that out of the water for much the same reasons a few years later. (The parliament resolution is particularly withering in its assessment of the Commission’s historic missteps there.)

For a fix to stick, major reform of US surveillance law is going to be needed. And the Commission appears to have accepted that’s not going to come overnight, so it seems to be trying to brace businesses for turbulence…

The parliament’s resolution on Schrems II also makes it clear that it expects DPAs to step in and cut off risky data flows — with MEPs writing that “if no arrangement with the US is swiftly found which guarantees an essentially equivalent and therefore adequate level of protection to that provided by the GDPR and the Charter, that these transfers will be suspended until the situation is resolved”.

So if DPAs fail to do this — and if Ireland keeps dragging its feet on closing out the Schrems complaint — they should expect more resolutions to be blasted at them from the parliament.

MEPs emphasize the need for any future EU-US data transfer agreement “to address the problems identified by the Court ruling in a sustainable manner” — pointing out that “no contract between companies can provide protection from indiscriminate access by intelligence authorities to the content of electronic communications, nor can any contract between companies provide sufficient legal remedies against mass surveillance”.

“This requires a reform of US surveillance laws and practices with a view to ensuring that access of US security authorities to data transferred from the EU is limited to what is necessary and proportionate, and that European data subjects have access to effective judicial redress before US courts,” the parliament adds.

It’s still true that businesses may be able to legally move EU personal data out of the bloc. Even, potentially, to the US — depending on the type of business; the data itself; and additional safeguards that could be applied.

However for data-mining companies like Facebook — which are subject to FISA and whose businesses rely on accessing people’s data — then achieving essential equivalence with EU privacy protections looks, well, essentially impossible.

And while the parliament hasn’t made an explicit call in the resolution for Facebook’s EU data flows to be cut off, that is the clear implication of it urging infringement proceedings against the DPC (and deploring “the absence of meaningful decisions and corrective measures” in the area of international transfers).

The parliament says it wants to see “solid mechanisms compliant with the CJEU judgement” set out — for the benefit of businesses with the chance to legally move data out of the EU — saying, for example, that the Commission’s proposal for a template for Standard Contractual Clauses (SCCs) should “duly take into account all the relevant recommendations of the EDPB”.

It also says it supports the creation of a toolbox of supplementary measures for such businesses to choose from — in areas like security and data protection certification; encryption safeguards; and pseudonymisation — so long as the measures included are accepted by regulators.

It also wants to see publicly available resources on the relevant legislation of the EU’s main trading partners, so that businesses which still have the possibility of legally moving data out of the bloc can get guidance on doing so compliantly.

The overarching message here is that businesses should buckle up for disruption of cross-border data flows — and tool up for compliance, where possible.

In another segment of the resolution, for example, the parliament calls on the Commission to “analyse the situation of cloud providers falling under section 702 of the FISA who transfers data using SCCs” — going on to suggest that support for European alternatives to US cloud providers may be needed to plug “gaps in the protection of data of European citizens transferred to the United States” and “reduce the dependence of the Union in storage capacities vis-à-vis third countries and to strengthen the Union’s strategic autonomy in terms of data management and protection”.

#brexit, #china, #cloud, #data-mining, #data-protection, #data-protection-commission, #data-security, #encryption, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-parliament, #european-union, #facebook, #general-data-protection-regulation, #google, #india, #ireland, #lawsuit, #max-schrems, #noyb, #privacy, #safe-harbor, #surveillance-law, #twitter, #united-kingdom, #united-states

Facebook loses last-ditch attempt to derail DPC decision on its EU-US data flows

Facebook has failed in its bid to prevent its lead EU data protection regulator from pushing ahead with a decision on whether to order suspension of its EU-US data flows.

The Irish High Court has just issued a ruling dismissing the company’s challenge to the Irish Data Protection Commission’s (DPC) procedures.

The case has huge potential operational significance for Facebook, which may be forced to store European users’ data locally if it’s ordered to stop taking their information to the U.S. for processing.

Last September the Irish data watchdog made a preliminary order warning Facebook it may have to suspend EU-US data flows. Facebook responded by filing for a judicial review and obtaining a stay on the DPC’s procedure. That block is now being lifted.

We understand the involved parties have been given a few days to read the High Court judgement ahead of another hearing on Thursday — when the court is expected to formally lift Facebook’s stay on the DPC’s investigation (and settle the matter of case costs).

The DPC declined to comment on today’s ruling in any detail — or on the timeline for making a decision on Facebook’s EU-US data flows — but deputy commissioner Graham Doyle told us it “welcomes today’s judgment”.

Its preliminary suspension order last fall followed a landmark judgement by Europe’s top court in the summer — when the CJEU struck down a flagship transatlantic agreement on data flows, on the grounds that US mass surveillance is incompatible with the EU’s data protection regime.

The fall-out from the CJEU’s invalidation of Privacy Shield (as well as an earlier ruling striking down its predecessor Safe Harbor) has been ongoing for years — as companies that rely on shifting EU users’ data to the US for processing have had to scramble to find valid legal alternatives.

While the CJEU did not outright ban data transfers out of the EU, it made it crystal clear that data protection agencies must step in and suspend international data flows if they suspect EU data is at risk. And EU-US data flows were flagged as clearly at risk, given the court simultaneously struck down Privacy Shield.

The problem for some businesses is that there may simply not be a valid legal alternative. And that’s where things look particularly sticky for Facebook, since its service falls under NSA surveillance via Section 702 of FISA (which is used to authorize mass surveillance programs like Prism).

So what happens now for Facebook, following the Irish High Court ruling?

As ever in this complex legal saga — which has been going on in various forms since an original 2013 complaint made by European privacy campaigner Max Schrems — there’s still some track left to run.

After this unblocking the DPC will have two enquiries in train: both the original one, related to Schrems’ complaint, and an own volition enquiry it decided to open last year — when it said it was pausing investigation of Schrems’ original complaint.

Schrems, via his privacy not-for-profit noyb, filed for his own judicial review of the DPC’s proceedings. And the DPC quickly agreed to settle — agreeing in January that it would ‘swiftly’ finalize Schrems’ original complaint. So things were already moving.

The tl;dr of all that is this: The last of the bungs which have been used to delay regulatory action in Ireland over Facebook’s EU-US data flows are finally being extracted — and the DPC must decide on the complaint.

Or, to put it another way, the clock is ticking for Facebook’s EU-US data flows. So expect another wordy blog post from Nick Clegg very soon.

Schrems previously told TechCrunch he expects the DPC to issue a suspension order against Facebook within months — perhaps as soon as this summer (and failing that by fall).

In a statement reacting to the Court ruling today he reiterated that position, saying: “After eight years, the DPC is now required to stop Facebook’s EU-US data transfers, likely before summer. Now we simply have two procedures instead of one.”

When Ireland (finally) decides, that won’t mark the end of the regulatory procedures, though.

A decision by the DPC on Facebook’s transfers would need to go to the other EU DPAs for review — and if there’s disagreement there (as seems highly likely, given what’s happened with draft DPC GDPR decisions) it will trigger a further delay (weeks to months) as the European Data Protection Board seeks consensus.

If a majority of EU DPAs can’t agree the Board may itself have to cast a deciding vote. So that could extend the timeline around any suspension order. But an end to the process is, at long last, in sight.

And, well, if a critical mass of domestic pressure is ever going to build for pro-privacy reform of U.S. surveillance laws, now looks like a really good time…

“We now expect the DPC to issue a decision to stop Facebook’s data transfers before summer,” added Schrems. “This would require Facebook to store most data from Europe locally, to ensure that Facebook USA does not have access to European data. The other option would be for the US to change its surveillance laws.”

Facebook has been contacted for comment on the Irish High Court ruling.

Update: The company has now sent us this statement:

“Today’s ruling was about the process the IDPC followed. The larger issue of how data can move around the world remains of significant importance to thousands of European and American businesses that connect customers, friends, family and employees across the Atlantic. Like other companies, we have followed European rules and rely on Standard Contractual Clauses, and appropriate data safeguards, to provide a global service and connect people, businesses and charities. We look forward to defending our compliance to the IDPC, as their preliminary decision could be damaging not only to Facebook, but also to users and other businesses.”

#data-protection, #data-security, #digital-rights, #dpc, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-union, #facebook, #human-rights, #ireland, #lawsuit, #max-schrems, #nick-clegg, #noyb, #policy, #privacy, #safe-harbor, #united-states

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefiting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics charging commissioner Helen Dixon and her team of failing to investigate scores of complaints and, in the instances when it has opened probes, taking years to investigate — and opting for weak enforcements at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review — but a resolution has yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ‘emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third-parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal base for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent it’s not meeting the bar either because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

#data-protection, #data-protection-commission, #data-protection-law, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #helen-dixon, #ireland, #privacy, #privacy-policy, #social, #social-media, #terms-of-service, #whatsapp

Disqus facing $3M fine in Norway for tracking users without consent

Disqus, a commenting plugin that’s used by a number of news websites and which can share user data for ad targeting purposes, has got into hot water in Norway for tracking users without their consent.

The local data protection agency said today it has notified the U.S.-based company of an intent to fine it €2.5 million (~$3M) for failures to comply with requirements in Europe’s General Data Protection Regulation (GDPR) on accountability, lawfulness and transparency.

Disqus’ parent, Zeta Global, has been contacted for comment.

Datatilsynet said it acted following a 2019 investigation by Norway’s national press — which found that default settings buried in the Disqus plug-in opted sites into sharing data on millions of users in markets including the U.S.

And while in most of Europe the company was found to have applied an opt-in to gather consent from users to be tracked — likely in order to avoid trouble with the GDPR — it appears to have been unaware that the regulation applies in Norway.

Norway is not a member of the European Union but is in the European Economic Area — which adopted the GDPR in July 2018, slightly after it came into force elsewhere in the EU. (Norway transposed the regulation into national law also in July 2018.)

The Norwegian DPA writes that Disqus’ unlawful data-sharing has “predominantly been an issue in Norway” — and says that seven websites are affected: NRK.no/ytring, P3.no, tv.2.no/broom, khrono.no, adressa.no, rights.no and document.no.

“Disqus has argued that their practices could be based on the legitimate interest balancing test as a lawful basis, despite the company being unaware that the GDPR applied to data subjects in Norway,” the DPA’s director-general, Bjørn Erik Thon, goes on.

“Based on our investigation so far, we believe that Disqus could not rely on legitimate interest as a legal basis for tracking across websites, services or devices, profiling and disclosure of personal data for marketing purposes, and that this type of tracking would require consent.”

“Our preliminary conclusion is that Disqus has processed personal data unlawfully. However, our investigation also discovered serious issues regarding transparency and accountability,” Thon added.

The DPA said the infringements are serious and have affected “several hundred thousands of individuals”, adding that the affected personal data “are highly private and may relate to minors or reveal political opinions”.

“The tracking, profiling and disclosure of data was invasive and nontransparent,” it added.

The DPA has given Disqus until May 31 to comment on the findings ahead of issuing a fine decision.

Publishers reminded of their responsibility

Datatilsynet has also fired a warning shot at local publishers who were using the Disqus platform — pointing out that website owners “are also responsible under the GDPR for which third parties they allow on their websites”.

So, in other words, even if you didn’t know about a default data-sharing setting that’s not an excuse because it’s your legal responsibility to know what any code you put on your website is doing with user data.

The DPA adds that “in the present case” it has focused the investigation on Disqus — providing publishers with an opportunity to get their houses in order ahead of any future checks it might make.

Norway’s DPA also has some admirably plain language to explain the “serious” problem of profiling people without their consent. “Hidden tracking and profiling is very invasive,” says Thon. “Without information that someone is using our personal data, we lose the opportunity to exercise our rights to access, and to object to the use of our personal data for marketing purposes.

“An aggravating circumstance is that disclosure of personal data for programmatic advertising entails a high risk that individuals will lose control over who processes their personal data.”

Zooming out, the issue of adtech industry tracking and GDPR compliance has become a major headache for DPAs across Europe — which have been repeatedly slammed for failing to enforce the law in this area since GDPR came into application in May 2018.

In the UK, for example (which transposed the GDPR before Brexit so still has an equivalent data protection framework for now), the ICO has been investigating GDPR complaints against real-time bidding’s (RTB) use of personal data to run behavioral ads for years — yet hasn’t issued a single fine or order, despite repeatedly warning the industry that it’s acting unlawfully.

The regulator is now being sued by complainants over its inaction.

Ireland’s DPC, meanwhile — which is the lead DPA for a swathe of adtech giants which site their regional HQ in the country — has a number of open GDPR investigations into adtech (including RTB). But it has also failed to issue any decisions in this area almost three years after the regulation began being applied.

Its lack of action on adtech complaints has contributed significantly to rising domestic (and international) pressure on its GDPR enforcement record more generally, including from the European Commission. (And it’s notable that the latter’s most recent legislative proposals in the digital arena include provisions that seek to avoid the risk of similar enforcement bottlenecks.)

The story on adtech and the GDPR looks a little different in Belgium, though, where the DPA appears to be inching toward a major slap-down of current adtech practices.

A preliminary report last year by its investigatory division called into question the legal standard of the consents being gathered via a flagship industry framework, designed by the IAB Europe. This so-called ‘Transparency and Consent’ framework (TCF) was found not to comply with the GDPR’s principles of transparency, fairness and accountability, or the lawfulness of processing.

A final decision is expected on that case this year — but if the DPA upholds the division’s findings it could deal a massive blow to the behavioral ad industry’s ability to track and target Europeans.

Studies suggest Internet users in Europe would overwhelmingly choose not to be tracked if they were actually offered the GDPR standard of a specific, clear, informed and free choice, i.e. without any loopholes or manipulative dark patterns.

#advertising-tech, #belgium, #data-protection, #data-security, #disqus, #europe, #european-commission, #european-union, #gdpr, #general-data-protection-regulation, #ireland, #norway, #personal-data, #privacy, #programmatic-advertising, #united-kingdom, #zeta-global

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
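For those who would rather script the check than use the site, the same lookup can be done against haveibeenpwned’s v3 API. A minimal sketch (note: the v3 breached-account endpoint requires a paid API key, and the helper names and the `breach-checker` user-agent string here are illustrative assumptions, not part of the service):

```python
import urllib.parse
import urllib.request
from typing import Optional

# HIBP v3 breached-account endpoint (account is URL-encoded into the path)
HIBP_BREACH_API = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"

def build_request(account: str, api_key: str) -> urllib.request.Request:
    """Build an HIBP v3 lookup request for the given email address."""
    url = HIBP_BREACH_API.format(account=urllib.parse.quote(account))
    return urllib.request.Request(url, headers={
        "hibp-api-key": api_key,         # v3 requires an API key
        "user-agent": "breach-checker",  # HIBP rejects requests without a user agent
    })

def interpret_status(status: int) -> Optional[bool]:
    """Map an HIBP HTTP status to 'does this account appear in a breach?'."""
    if status == 200:
        return True   # account found in at least one known breach
    if status == 404:
        return False  # account not found in any known breach
    return None       # e.g. 429 rate limit: back off and retry
```

Separating request construction from response interpretation keeps the network call itself trivial (`urllib.request.urlopen(build_request(...))`) and the status handling easy to reason about.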

Information leaked via the breach includes Facebook IDs, location, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, Twitter publicly disclosed when it found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claims to have fixed by September 2019, which has now led to the leak of data on 533 million accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.

In DRI’s case, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

#data-protection, #data-protection-commission, #data-security, #digital-rights, #digital-rights-ireland, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #ireland, #lawsuit, #litigation, #personal-data, #privacy, #social, #social-media, #tc, #twitter


How startups can ensure CCPA and GDPR compliance in 2021

Data is the most valuable asset for any business in 2021. If your business is online and collecting customer personal information, your business is dealing in data — which means data privacy compliance regulations apply, no matter the company’s size.

Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.

Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.

For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of global annual revenue, whichever is higher. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed in a data breach.

If the data of 1,000 customers is compromised in a cybersecurity incident, that could add up to $7.5 million at the maximum rate. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business.
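A back-of-the-envelope sketch of that arithmetic (statutory figures only; the helper functions are hypothetical illustrations, not legal advice):

```python
def ccpa_exposure(records: int) -> tuple:
    """CCPA statutory damages range: $2,500 to $7,500 per affected person."""
    return records * 2_500, records * 7_500

def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """GDPR upper tier: the greater of EUR 20M or 4% of global annual revenue."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

low, high = ccpa_exposure(1_000)  # high is the $7.5 million figure cited above
```

Note that for small companies the flat €20 million GDPR cap dominates, while for large ones the 4%-of-revenue term does — which is why the same violation scales so differently across company sizes.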

It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.

Challenges of data compliance for startups

For example, Vodafone Spain was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show schools, associations, municipalities, homeowners associations and more are also receiving fines.

GDPR regulators have issued $332.4 million in fines since the law was enacted almost two years ago and are being more aggressive with enforcement. While California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information of residents in California, a major hub of U.S. startups.

That is why data privacy compliance is now key to a successful business. Unfortunately, many startups are at a disadvantage for many reasons, including:


Facebook’s tardy disclosure of breach timing raises GDPR compliance questions

The question of whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains open. But the timeline of the incident looks increasingly awkward for the tech giant.

While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.

That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.

Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.

The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.

Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear. It wrote that the data “seems to comprise the original 2018 (pre-GDPR) dataset” — referring to an earlier breach incident Facebook disclosed in 2018, related to a vulnerability in its phone lookup functionality that it said occurred between June 2017 and April 2018 — but added that the newly published dataset also looked to have been “combined with additional records, which may be from a later period”.

Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.

Another new detail that emerged in Facebook’s blog post yesterday was the fact users’ data was scraped not via the aforementioned phone lookup vulnerability — but via another method altogether: a contact importer tool vulnerability.

This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.

In this way a spammer, for example, could upload a database of potential phone numbers and link them not only to names but to other data like birth dates, email addresses and locations — all the better to phish you with.

In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely within the period in which the GDPR applied.

As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
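That 72-hour window is simple to express — a trivial, hypothetical helper for illustration:

```python
from datetime import datetime, timedelta

def notification_deadline(became_aware: datetime) -> datetime:
    """GDPR Art. 33: notify the supervisory authority without undue delay,
    where feasible no later than 72 hours after becoming aware of the breach."""
    return became_aware + timedelta(hours=72)

# e.g. a breach discovered at 09:00 on Sept 1 should be notified by 09:00 on Sept 4
deadline = notification_deadline(datetime(2019, 9, 1, 9, 0))
```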

Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.

Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.

Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.

And, indeed, why it’s sought to downplay the significance of the leaked information by dubbing people’s personal information “old data” — even though few people regularly change their mobile numbers, email addresses, full names or biographical information, and no one (legally) gets a new birth date.

Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.

The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.

This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.

Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and to apply privacy by design and default.

Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.

In short, it’s the Cambridge Analytica scandal all over again.

Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)

We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.

Then it told us it would not be commenting on its communications with regulators.

Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks of their data being breached, such as fraud and ID theft.

Yesterday Facebook also said it does not have plans to notify users either.

Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.

 

#data-controller, #data-protection, #dpc, #europe, #european-union, #facebook, #federal-trade-commission, #gdpr, #general-data-protection-regulation, #personal-data, #privacy, #security-breaches, #united-states


Facebook’s Kustomer buy could face EU probe after merger referral

The European Union may investigate Facebook’s $1BN acquisition of customer service platform Kustomer after concerns were referred to it under EU merger rules.

A spokeswoman for the Commission confirmed it received a request to refer the proposed acquisition from Austria under Article 22 of the EU’s Merger Regulation — a mechanism which allows Member States to flag a proposed transaction that’s not notifiable under national filing thresholds (e.g. because the turnover of one of the companies is too low for a formal notification).

The Commission spokeswoman said the case was notified in Austria on March 31.

“Following the receipt of an Article 22 request for referral, the Commission has to transmit the request for referral to other Member States without delay, who will have the right to join the original referral request within 15 working days of being informed by the Commission of the original request,” she told us, adding: “Following the expiry of the deadline for other Member States to join the referral, the Commission will have 10 working days to decide whether to accept or reject the referral.”

We’ll know in a few weeks whether or not the European Commission will take a look at the acquisition — an option that could see the transaction stalled for months, delaying Facebook’s plans for integrating Kustomer’s platform into its empire.

Facebook and Kustomer have been contacted for comment on the development.

The tech giant’s planned purchase of the customer relations management platform was announced last November and quickly raised concerns over what Facebook might do with any personal data held by Kustomer — which could include sensitive information, given sectors served by the platform include healthcare, government and financial services, among others.

Back in February, the Irish Council for Civil Liberties (ICCL) wrote to the Commission and national and EU data protection agencies to raise concerns about the proposed acquisition — urging scrutiny of the “data processing consequences”, and highlighting how Kustomer’s terms allow it to process user data for very wide-ranging purposes.

“Facebook is acquiring this company. The scope of ‘improving our Services’ [in Kustomer’s terms] is already broad, but is likely to grow broader after Kustomer is acquired,” the ICCL warned. “‘Our Services’ may, for example, be taken to mean any Facebook services or systems or projects.”

“The settled caselaw of the European Court of Justice, and the European data protection board, is that ‘improving our services’ and similarly vague statements do not qualify as a ‘processing purpose’,” it added.

The ICCL also said it had written to Facebook asking for confirmation of the post-acquisition processing purposes for which people’s data will be used.

Johnny Ryan, senior fellow at the ICCL, confirmed to TechCrunch it has not had any response from Facebook to those questions.

We’ve also asked Facebook to confirm what it will do with any personal data held on users by Kustomer once it owns the company — and will update this report with any response.

In a separate recent episode, Google’s acquisition of wearable maker Fitbit went through months of competition scrutiny in the EU and was only cleared by regional regulators after the tech giant made a number of concessions, including committing not to use Fitbit data for ads for ten years.

Until now Facebook’s acquisitions have generally flown under regulators’ radar, including, around a decade ago, when it was sewing up the social space by buying up rivals Instagram and WhatsApp.

Several years later it was forced to pay a fine in the EU over a ‘misleading’ filing — after it combined WhatsApp and Facebook data, despite having told regulators it could not do so.

With so many data scandals now inextricably attached to Facebook, the tech giant is saddled with customer mistrust by default and faces far greater scrutiny of how it operates — which is now threatening to inject friction into its plans to expand its b2b offering by acquiring a CRM player. So after ‘move fast and break things’ Facebook is having to move slower because of its reputation for breaking stuff.

 

#austria, #crm, #data-protection, #europe, #european-commission, #european-union, #facebook, #fitbit, #fundings-exits, #google, #healthcare, #johnny-ryan, #kustomer, #merger, #privacy, #social-media


Hack takes: A CISO and a hacker detail how they’d respond to the Exchange breach

The cyber world has entered a new era in which attacks are becoming more frequent and happening on a larger scale than ever before. Massive hacks affecting thousands of high-level American companies and agencies have dominated the news recently. Chief among these are the December SolarWinds/FireEye breach and the more recent Microsoft Exchange server breach. Everyone wants to know: If you’ve been hit with the Exchange breach, what should you do?

To answer this question, and compare security philosophies, we outlined what we’d do — side by side. One of us is a career attacker (David Wolpoff), and the other a CISO with experience securing companies in the healthcare and security spaces (Aaron Fosdick).

Don’t wait for your incident response team to take the brunt of a cyberattack on your organization.

CISO Aaron Fosdick

1. Back up your system.

A hacker’s likely going to throw some ransomware attacks at you after breaking into your mail server. So rely on your backups, configurations, etc. Back up everything you can. But back up to an instance before the breach. Design your backups with the assumption that an attacker will try to delete them. Don’t use your normal admin credentials to encrypt your backups, and make sure your admin accounts can’t delete or modify backups once they’ve been created. Your backup target should not be part of your domain.

2. Assume compromise and stop connectivity if necessary.

Identify if and where you have been compromised. Inspect your systems forensically to see whether attackers are using your Exchange server as a launch point and attempting to move laterally from there. If your Exchange server is indeed compromised, you want it off your network as soon as possible. Disable its external connectivity to the internet so attackers cannot exfiltrate any data or communicate with other systems on the network — which is how they move laterally.

3. Consider deploying default/deny.

#backup, #column, #computer-security, #data-protection, #data-security, #ec-column, #ec-cybersecurity, #ec-how-to, #security, #tc


Google isn’t testing FLoCs in Europe yet

Early this month Google quietly began trials of ‘Privacy Sandbox’: its planned replacement adtech for tracking cookies, as it works toward phasing out support for third party cookies in the Chrome browser. It is testing a system that would reconfigure the dominant web architecture by replacing individual ad targeting with ads that target groups of users (aka Federated Learning of Cohorts, or FLoCs) — and which, it loudly contends, will still generate a fat upside for advertisers.

There are a number of gigantic questions about this plan. Not least whether targeting groups of people who are non-transparently stuck into algorithmically computed interest-based buckets based on their browsing history is going to reduce the harms that have come to be widely associated with behavioral advertising.
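Google’s FLoC proposal computes these buckets with SimHash over browsing history, so users with similar histories land in the same cohort. A toy sketch of that idea (the 16-bit cohort size and SHA-256 feature hashing are illustrative assumptions, not Google’s actual implementation):

```python
import hashlib

def cohort_id(visited_domains, bits=16):
    """Toy SimHash: each visited domain votes on each output bit, so
    similar sets of domains tend to produce the same cohort ID."""
    votes = [0] * bits
    for domain in visited_domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # bits with positive vote totals become 1s in the cohort ID
    return sum(1 << i for i in range(bits) if votes[i] > 0)

cohort = cohort_id({"news.example", "shoes.example", "recipes.example"})
```

Ads are then targeted at the cohort ID rather than the individual — which is exactly why critics worry that sensitive interests could still leak through whichever bucket a user lands in.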

If your concern is online ads which discriminate against protected groups or seek to exploit vulnerable people (e.g. those with a gambling addiction), FLoCs may very well just serve up more of the abusive same. The EFF has, for example, called FLoCs a “terrible idea”, warning the system may amplify problems like discrimination and predatory targeting.

Advertisers also query whether FLoCs will really generate like-for-like revenue, as Google claims.

Competition concerns are also closely dogging Google’s Privacy Sandbox, which is under investigation by UK antitrust regulators — and has drawn scrutiny from the US Department of Justice too, as Reuters reported recently.

Adtech players complain the shift will merely increase Google’s gatekeeper power over them by blocking their access to web users’ data even as Google continues to track its own users — leveraging that first party data behind a new moat which, they claim, will keep rivals in the dark about what individuals are doing online. (Though whether it will actually do that is not at all clear.)

Antitrust is of course a convenient argument for the adtech industry to use to strategically counter the prospect of privacy protections for individuals. But competition regulators on both sides of the pond are concerned enough over the power dynamics of Google ending support for tracking cookies that they’re taking a closer look.

And then there’s the question of privacy itself — which obviously merits close scrutiny too.

Google’s sales pitch for the ‘Privacy Sandbox’ is evident in its choice of brand name — which suggests it’s keen to push the perception of a technology that protects privacy.

This is Google’s response to the rising store of value being placed on protecting personal data — after years of data breach and data misuse scandals.

A terrible reputation now dogs the tracking industry (or the “data industrial complex”, as Apple likes to denounce it) — as a result of high profile scandals like Kremlin-fuelled voter manipulation in the US, but also the demonstrable dislike web users have of being ad-stalked around the internet. (Very evident in the ever-increasing use of tracker- and ad-blockers; and in the response of other web browsers, which adopted a number of anti-tracking measures years ahead of Google-owned Chrome.)

Given Google’s hunger for its Privacy Sandbox to be perceived as pro-privacy it’s perhaps no small irony, then, that it’s not actually running these origin tests of FLoCs in Europe — where the world’s most stringent and comprehensive online privacy laws apply.

AdExchanger reported yesterday on comments made by a Google engineer during a meeting of the Improving Web Advertising Business Group at the World Wide Web Consortium on Tuesday. “For countries in Europe, we will not be turning on origin trials [of FLoC] for users in EEA [European Economic Area] countries,” Michael Kleber is reported to have said.

TechCrunch had confirmation from Google in early March that this is the case. “Initially, we plan to begin origin trials in the US and plan to carry this out internationally (including in the UK / EEA) at a later date,” a spokesman told us earlier this month.

“As we’ve shared, we are in active discussions with independent authorities — including privacy regulators and the UK’s Competition and Markets Authority — as with other matters they are critical to identifying and shaping the best approach for us, for online privacy, for the industry and world as a whole,” he added then.

At issue here is the fact that Google has chosen to auto-enrol sites in the FLoC origin trials — rather than getting manual sign ups which would have offered a path for it to implement a consent flow.

And lack of consent to process personal data seems to be the legal area of concern for conducting such online tests in Europe where legislation like the ePrivacy Directive (which covers tracking cookies) and the more recent General Data Protection Regulation (GDPR), which further strengthens requirements for consent as a legal basis, both apply.

Asked how consent is being handled for the trials Google’s spokesman told us that some controls will be coming in April: “With the Chrome 90 release in April, we’ll be releasing the first controls for the Privacy Sandbox (first, a simple on/off), and we plan to expand on these controls in future Chrome releases, as more proposals reach the origin trial stage, and we receive more feedback from end users and industry.”

It’s not clear why Google is auto-enrolling sites into the trial rather than asking for opt-ins — beyond the obvious: such a step would add friction and introduce another layer of complexity by limiting the size of the test pool to only those who consent. Google presumably doesn’t want to be so straitjacketed during product development.

“During the origin trial, we are defaulting to supporting all sites that already contain ads to determine what FLoC a profile is assigned to,” its spokesman told us when we asked why it’s auto-enrolling sites. “Once FLoC’s final proposal is implemented, we expect the FLoC calculation will only draw on sites that opt into participating.”

He also specified that any user who has blocked third-party cookies won’t be included in the Origin Trial — so the trial is not a full ‘free-for-all’, even in the US.

There are reasons for Google to tread carefully. Its Privacy Sandbox tests were quickly shown to be leaking data about incognito browsing mode — revealing a piece of information that could be used to aid user fingerprinting. Which obviously isn’t good for privacy.

“If FloC is unavailable in incognito mode by design then this allows the detection of users browsing in private browsing mode,” wrote security and privacy researcher, Dr Lukasz Olejnik, in an initial privacy analysis of the Sandbox this month in which he discussed the implications of the bug.

“While indeed, the private data about the FloC ID is not provided (and for a good reason), this is still an information leak,” he went on. “Apparently it is a design bug because the behavior seems to be foreseen to the feature authors. It allows differentiating between incognito and normal web browsing modes. Such behavior should be avoided.”

Google’s Privacy Sandbox tests automating a new form of browser fingerprinting is not ‘on message’ with the claimed boost for user privacy. But Google is presumably hoping to iron out such problems via testing and as development of the system continues.

(Indeed, Google’s spokesman also told us that “countering fingerprinting is an important goal of the Privacy Sandbox”, adding: “The group is developing technology to protect people from opaque or hidden techniques that share data about individual users and allow individuals to be tracked in a covert manner. One of these techniques, for example, involves using a device’s IP address to try and identify someone without their knowledge or ability to opt out.”)

At the same time it’s not clear whether or not Google needs to obtain user consent to run the tests legally in Europe. Other legal bases do exist — although it would take careful legal analysis to ascertain whether they could be used. But it’s certainly interesting that Google has decided not to risk finding out whether it can legally trial this tech in Europe without consent.

Likely relevant is the fact that the ePrivacy Directive is not like the harmonized GDPR — which funnels cross border complaints via a lead data supervisor, shrinking regulatory exposure at least in the first instance.

Any EU DPA may have competence to investigate matters related to ePrivacy in their national markets. To wit: At the end of last year France’s CNIL skewered Google with a $120M fine related to dropping tracking cookies without consent — underlining the risks of getting EU law on consent wrong. And a privacy-related fine for Privacy Sandbox would be terrible PR. So Google may have calculated it’s simply less risky to wait.

Under EU law, certain types of personal data are also considered highly sensitive (aka ‘special category data’) and require an even higher bar of explicit consent to process. Such data couldn’t be bundled into a site-level consent — but would require specific consent for each instance. So, in other words, there would be even more friction involved in testing with such data.

That may explain why Google plans to do regional testing later — if it can figure out how to avoid processing such sensitive data. (Relevant: Analysis of Google’s proposal suggests the final version intends to avoid processing sensitive data in the computation of the FLoC ID — to avoid exactly that scenario.)

If/when Google does implement Privacy Sandbox tests in Europe “later”, as it has said it will (having also professed itself “100% committed to the Privacy Sandbox in Europe”), it will presumably do so when it has added the aforementioned controls to Chrome — meaning it would be in a position to offer some kind of prompt asking users if they wish to turn the tech off (or, better still, on).

Though, again, it’s not clear how exactly this will be implemented — and whether a consent flow will be part of the tests.

Google has also not provided a timeline for when tests will start in Europe. Nor would it specify the other countries it’s running tests in besides the US when we asked about that.

At the time of writing it had not responded to a number of follow up questions either but we’ll update this report if we get more detail.

The (current) lack of regional tests raises questions about the suitability of Privacy Sandbox for European users — as the New York Times’ Robin Berjon has pointed out, noting via Twitter that “the market works differently”.

“Not doing origin tests is already a problem… but not even knowing if it could eventually have a legal basis on which to run seems like a strange position to take?” he also wrote.

Google is surely going to need to test FLoCs in Europe at some point. Because the alternative — implementing regionally untested adtech — is unlikely to be a strong sell to advertisers who are already crying foul over Privacy Sandbox on competition and revenue risk grounds.

Ireland’s Data Protection Commission (DPC), meanwhile — which, under GDPR, is Google’s lead data supervisor in the region — confirmed to us that Google has been consulting with it about the Privacy Sandbox plan.

“Google has been consulting the DPC on this matter and we were aware of the roll-out of the trial,” deputy commissioner Graham Doyle told us today. “As you are aware, this has not yet been rolled-out in the EU/EEA. If, and when, Google present us with detail plans, outlining their intention to start using this technology within the EU/EEA, we will examine all of the issues further at that point.”

The DPC has a number of investigations into Google’s business triggered by GDPR complaints — including a May 2019 probe into its adtech and a February 2020 investigation into its processing of users’ location data — all of which are ongoing.

But — in one legacy example of the risks of getting EU data protection compliance wrong — Google was fined $57M by France’s CNIL back in January 2019 (under GDPR as its EU users hadn’t yet come under the jurisdiction of Ireland’s DPC) for, in that case, not making it clear enough to Android users how it processes their personal information.


France’s privacy watchdog probes Clubhouse after complaint and petition

Clubhouse, the buzzy but still invite-only social audio app that’s popular with the Silicon Valley technorati, is being investigated by France’s privacy watchdog.

The CNIL announced today it’s opened an investigation into Clubhouse following a complaint and after it got some initial responses back from Alpha Exploration Co., the U.S.-based company behind the app.

It also points to a petition that’s circulating in France with over 10,000 signatures — calling for regulatory intervention.

The regulator says it’s confirmed that Clubhouse’s owner is not established anywhere in the European Union — which means the app can be investigated by any EU DPA that receives a complaint or has its own concerns about EU citizens’ data.

Last month the Hamburg privacy regulator also raised concerns over Clubhouse, saying it had asked the app for more information on how it protects the privacy of European users and their contacts.

In the EU, cross border data protection cases involving tech giants typically avoid this scenario as the General Data Protection Regulation (GDPR) includes a mechanism that funnels complaints via a lead data supervisor — aka the national agency where the business is established in the EU.

This ‘one-stop-shop’ (OSS) has already had the effect of slowing down GDPR enforcement against giants like Facebook, which have established their regional HQ in Ireland. But there is a further risk of a regulatory moat effect that benefits ‘big tech’ if the OSS is combined with swifter unilateral privacy enforcement against newcomers like Clubhouse (which currently fall outside the OSS).

France’s watchdog has certainly demonstrated a willingness to move fast and enforce the rules against tech giants like Google and Amazon when unencumbered by the OSS — recently issuing fines over cookie consent issues in excess of $160M, for example. It also hit Google with a GDPR fine of $57M in 2019 before the tech giant moved the jurisdiction of regional users to Ireland.

So there’s no reason to think the CNIL won’t show similar alacrity in its probe of Clubhouse. (Although in its press note today it does write that European DPAs are “communicating with each other on this matter, in order to exchange information and ensure consistent application of the GDPR”.)

Privacy concerns attached to Clubhouse include the fact that it uploads users’ phone book contacts — using the harvested phone numbers to build a usage graph that lets it display how many ‘friends’ a non-user already has on the service when a user is choosing which of their contacts to invite.

The petition to CNIL also claims Clubhouse’s “secret database” of users’ contacts may be sold to third parties.

“For years, lawmakers have not dared to attack Facebook for sucking up our data. Our democracies are paying a heavy price today,” the authors of the petition also write. “Clubhouse hopes we haven’t learned anything from Facebook’s methods and that its questionable practices will go unnoticed. But the German privacy agency has already accused the company of violating EU law. Now we need regulators in other countries to follow suit and put pressure on Clubhouse.

If thousands of you ask the CNIL to enforce the law, we can put an end to this blatant violation of our private lives. It is also an opportunity to send a strong message to the tech giants: our data is ours and no one else’s.”

In its privacy policy, Clubhouse’s owner writes that the “Company does not sell your Personal Data” — but it does list a wide range of reasons why it may “share” user data with third parties, including for “advertising and marketing services”.

Clubhouse has been contacted for comment.


Facebook fined again in Italy for misleading users over what it does with their data

Facebook has been fined again by Italy’s competition authority — this time the penalty is €7 million (~$8.4M) — for failing to comply with an earlier order related to how it informs users about the commercial uses it makes of their data.

The AGCM began investigating certain commercial practices by Facebook back in 2018, including the information it provided to users at sign up and the lack of an opt out for advertising. Later the same year it went on to fine Facebook €10M for two violations of the country’s Consumer Code.

But the watchdog’s action did not stop there. It went on to launch further proceedings against Facebook in 2020 — saying the tech giant was still failing to inform users “with clarity and immediacy” about how it monetizes their data.

“Facebook Ireland Ltd. and Facebook Inc. have not complied with the warning to remove the incorrect practice on the use of user data and have not published the corrective declaration requested by the Authority,” the AGCM writes in a press release today (issued in Italian; which we’ve translated with Google Translate).

The authority said Facebook is still misleading users who register on its platform by not informing them — “immediately and adequately” — at the point of sign up that it will collect and monetize their personal data. Instead it found Facebook emphasizes its service’s ‘gratuitousness’.

“The information provided by Facebook was generic and incomplete and did not provide an adequate distinction between the use of data necessary for the personalization of the service (with the aim of facilitating socialization with other users) and the use of data to carry out targeted advertising campaigns,” the AGCM goes on.

It had already fined Facebook €5M over the same issue of failing to provide adequate information about its use of people’s data. But it also ordered it to correct the practice — and publish an “amendment” notice on its website and apps for users in Italy. Neither of which Facebook has done, per the regulator.

Facebook, meanwhile, has been fighting the AGCM’s order via the Italian legal system — making a petition to the Council of State.

A hearing of Facebook’s appeal against the non-compliance proceedings took place in September last year and a decision is still pending.

Reached for comment on AGCM’s action, a Facebook spokesperson told us: “We note the Italian Competition Authority’s announcement today, but we await the Council of State decision on our appeal against the Authority’s initial findings.”

“Facebook takes privacy extremely seriously and we have already made changes, including to our Terms of Service, to further clarify how Facebook uses data to provide its service and to provide tailored advertising,” it added.

Last year, at the time the AGCM instigated further proceedings against it, Facebook told us it had amended the language of its terms of service back in 2019 — to “further clarify” how it makes money, as it put it.

However, while the tech giant appears to have removed a direct “claim of gratuity” it had previously been presenting to users at the point of registration, the Italian watchdog is still not happy with how far it’s gone in its presentation to new users — saying it’s still not being “immediate and clear” enough in how it provides information on the collection and use of their data for commercial purposes.

The authority points out that this is key information for people to weigh up in deciding whether or not to join Facebook — given the economic value Facebook gains via the transfer of their personal data.

For its part, Facebook argues that it’s fair to describe a service as ‘free’ if there’s no monetary charge for use. Although it has also made changes to how it describes this value exchange to users — including dropping its former slogan that “Facebook is free and always will be” in favor of some fuzzier phrasing.

On the arguably more salient legal point that Facebook is also appealing — related to the lack of a direct opt out for Facebook users to prevent their data being used for targeted ads — Facebook denies there’s any lack of consent to see here, claiming it does not give any user information to third parties unless the person has chosen to share their information and give consent.

Rather, it says this consent process happens off its own site, on a case-by-case basis — i.e. when people decide whether or not to install third-party apps or use Facebook Login to log into third-party websites, etc. — and where, it argues, those third parties will ask them whether they want Facebook to share their data.

(Facebook’s lead data supervisor in Europe, Ireland’s DPC, has an open investigation into Facebook on exactly this issue of so-called ‘forced consent’ — with complaints filed the moment Europe’s General Data Protection Regulation began being applied in May 2018.)

The tech giant also flags on-site tools and settings it does offer its own users — such as ‘Why Am I Seeing This Ad’, ‘Ads Preferences’ and ‘Manage Activity’ — which it claims increase transparency and control for Facebook users.

It also points to the ‘Off Facebook Activity‘ setting it launched last year — which shows users some information about which third party services are sending their data to Facebook and lets them disconnect that information from their account. Though there’s no way for users to request the third party delete their data via Facebook. (That requires going to each third party service individually to make a request.)

Last year a German court ruled against a consumer rights challenge to Facebook’s use of the self-promotional slogan that its service is “free and always will be” — on the grounds that the company does not require users to literally hand over monetary payments in exchange for using the service. Although the court found against Facebook on a number of other issues bundled into the challenge related to how it handles user data.

In another interesting development last year, Germany’s federal court also unblocked a separate legal challenge to Facebook’s use of user data which has been brought by the country’s competition watchdog. If that landmark challenge prevails Facebook could be forced to stop combining user data across different services and from the social plug-ins and tracking pixels it embeds in third parties’ digital services.

The company is also now facing rising challenges to its unfettered use of people’s data via the private sector, with Apple set to switch on an opt-in consent mechanism for app tracking on iOS this spring. Browser makers have also been long stepping up action against consentless tracking — including Google, which is working on phasing out support for third party cookies on Chrome.


TikTok hit with consumer, child safety and privacy complaints in Europe

TikTok is facing a fresh round of regulatory scrutiny in Europe, where consumer protection groups have filed a series of coordinated complaints alleging multiple breaches of EU law.

The European Consumer Organisation (BEUC) has lodged a complaint against the video sharing site with the European Commission and the bloc’s network of consumer protection authorities, while consumer organisations in 15 countries have alerted their national authorities and urged them to investigate the social media giant’s conduct, BEUC said today.

The complaints include claims of unfair terms, including in relation to copyright and TikTok’s virtual currency; concerns around the type of content children are being exposed to on the platform; and accusations of misleading data processing and privacy practices.

Details of the alleged breaches are set out in two reports associated with the complaints: One covering issues with TikTok’s approach to consumer protection, and another focused on data protection and privacy.

Child safety

On child safety, the report accuses TikTok of failing to protect children and teenagers from hidden advertising and “potentially harmful” content on its platform.

“TikTok’s marketing offers to companies who want to advertise on the app contributes to the proliferation of hidden marketing. Users are for instance triggered to participate in branded hashtag challenges where they are encouraged to create content of specific products. As popular influencers are often the starting point of such challenges the commercial intent is usually masked for users. TikTok is also potentially failing to conduct due diligence when it comes to protecting children from inappropriate content such as videos showing suggestive content which are just a few scrolls away,” the BEUC writes in a press release.

TikTok has already faced a regulatory intervention in Italy this year in response to child safety concerns — in that instance after the death of a ten-year-old girl in the country. Local media had reported that the child died of asphyxiation after participating in a ‘black out’ challenge on TikTok — triggering the emergency intervention by the DPA.

Soon afterwards TikTok agreed to implement an age gate to verify the age of every user in Italy, although the check merely asks the user to input a date to confirm their age and so seems trivially easy to circumvent.

In the BEUC’s report, the consumer rights group draws attention to TikTok’s flimsy age gate, writing that: “In practice, it is very easy for underage users to register on the platform as the age verification process is very loose and only self-declaratory.”

And while it notes TikTok’s privacy policy claims the service is “not directed at children under the age of 13” the report cites a number of studies that found heavy use of TikTok by children under 13 — with BEUC suggesting that children in fact make up “a very big part” of TikTok’s user base.

From the report:

In France, 45% of children below 13 have indicated using the app. In the United Kingdom, a 2020 study from the Office for Telecommunications (OFCOM) revealed that 50% of children between eight and 15 upload videos on TikTok at least weekly. In Czech Republic, a 2019 study found out that TikTok is very popular among children aged 11-12. In Norway, a news article reported that 32% of children aged 10-11 used TikTok in 2019. In the United States, The New York Times revealed that more than one-third of daily TikTok users are 14 or younger, and many videos seem to come from children who are below 13. The fact that many underage users are active on the platform does not come as a surprise as recent studies have shown that, on average, a majority of children owns mobile phones earlier and earlier (for example, by the age of seven in the UK).

A recent EU-backed study also found that age checks on popular social media platforms are “basically ineffective” as they can be circumvented by children of all ages simply by lying about their age.

Terms of use

Another issue raised by the complaints centers on a claim of unfair terms of use — including in relation to copyright, with BEUC noting that TikTok’s T&Cs give it an “irrevocable right to use, distribute and reproduce the videos published by users, without remuneration”.

A virtual currency feature it offers is also highlighted as problematic in consumer rights terms.

TikTok lets users purchase digital coins which they can use to buy virtual gifts for other users (which can in turn be converted by the user back to fiat). But BEUC says its ‘Virtual Item Policy’ contains “unfair terms and misleading practices” — pointing to how it claims an “absolute right” to modify the exchange rate between the coins and the gifts, thereby “potentially skewing the financial transaction in its own favour”.

While TikTok displays the price to buy packs of its virtual coins, there is no clarity over the process it applies for the conversion of these gifts into in-app diamonds (which the gift-receiving user can choose to redeem for actual money, remitted to them via PayPal or another third-party payment processing tool).

“The amount of the final monetary compensation that is ultimately earned by the content provider remains obscure,” BEUC writes in the report, adding: “According to TikTok, the compensation is calculated ‘based on various factors including the number of diamonds that the user has accrued’… TikTok does not indicate how much the app retains when content providers decide to convert their diamonds into cash.”

“Playful at a first glance, TikTok’s Virtual Item Policy is highly problematic from the point of view of consumer rights,” it adds.

Privacy

On data protection and privacy, the social media platform is also accused of a whole litany of “misleading” practices — including (again) in relation to children. Here the complaint accuses TikTok of failing to clearly inform users about what personal data is collected, for what purpose, and for what legal reason — as is required under Europe’s General Data Protection Regulation (GDPR).

Other issues flagged in the report include the lack of any opt-out from personal data being processed for advertising (aka ‘forced consent’ — something tech giants like Facebook and Google have also been accused of); the lack of explicit consent for processing sensitive personal data (which has special protections under GDPR); and an absence of security and data protection by design, among other issues.

We’ve reached out to the Irish Data Protection Commission (DPC), which is TikTok’s lead supervisor for data protection issues in the EU, about the complaint and will update this report with any response.

France’s data watchdog, the CNIL, already opened an investigation into TikTok last year — prior to the company shifting its regional legal base to Ireland (meaning data protection complaints must now be funnelled through the Irish DPC via the GDPR’s one-stop-shop mechanism — adding to the regulatory backlog).

Jef Ausloos, a postdoc researcher who worked on the legal analysis of TikTok’s privacy policy for the data protection complaints, told TechCrunch that researchers had been ready to file data protection complaints a year ago — at a time when the platform had no age check at all — but it suddenly made major changes to how it operates.

Ausloos suggests such sudden massive shifts are a deliberate tactic to evade regulatory scrutiny of data-exploiting practices — as “constant flux” can have the effect of derailing and/or resetting research work being undertaken to build a case for enforcement — also pointing out that resource-strapped regulators may be reluctant to bring cases against companies ‘after the fact’ (i.e. if they’ve since changed a practice).

The upshot of breaches that keep iterating is that repeat violations of the law may never face enforcement.

It’s also true that a frequent refrain of platforms at the point of being called out (or called up) on specific business practices is to claim they’ve since changed how they operate — seeking to use that as a defence to limit the impact of regulatory enforcement or indeed a legal ruling. (Aka: ‘Move fast and break regulatory accountability’.)

Nonetheless, Ausloos says the complainants’ hope now is that the two years of documentation undertaken on the TikTok case will help DPAs build cases.

Commenting on the complaints in a statement, Monique Goyens, DG of BEUC, said: “In just a few years, TikTok has become one of the most popular social media apps with millions of users across Europe. But TikTok is letting its users down by breaching their rights on a massive scale. We have discovered a whole series of consumer rights infringements and therefore filed a complaint against TikTok.

“Children love TikTok but the company fails to keep them protected. We do not want our youngest ones to be exposed to pervasive hidden advertising and unknowingly turned into billboards when they are just trying to have fun.

“Together with our members — consumer groups from across Europe — we urge authorities to take swift action. They must act now to make sure TikTok is a place where consumers, especially children, can enjoy themselves without being deprived of their rights.”

Reached for comment on the complaints, a TikTok spokesperson told us:

Keeping our community safe, especially our younger users, and complying with the laws where we operate are responsibilities we take incredibly seriously. Every day we work hard to protect our community which is why we have taken a range of major steps, including making all accounts belonging to users under 16 private by default. We’ve also developed an in-app summary of our Privacy Policy with vocabulary and a tone of voice that makes it easier for teens to understand our approach to privacy. We’re always open to hearing how we can improve, and we have contacted BEUC as we would welcome a meeting to listen to their concerns.


EU’s top privacy regulator urges ban on surveillance-based ad targeting

The European Union’s lead data protection supervisor has recommended that a ban on targeted advertising based on tracking Internet users’ digital activity be included in a major reform of digital services rules which aims to increase operators’ accountability, among other key goals.

The European Data Protection Supervisor (EDPS), Wojciech Wiewiorówski, made the call for a ban on surveillance-based targeted ads in reference to the Commission’s Digital Services Act (DSA) — following a request for consultation from EU lawmakers.

The DSA legislative proposal was introduced in December, alongside the Digital Markets Act (DMA) — kicking off the EU’s (often lengthy) co-legislative process which involves debate and negotiations in the European Parliament and Council on amendments before any final text can be agreed for approval. This means battle lines are being drawn to try to influence the final shape of the biggest overhaul to pan-EU digital rules for decades — with everything to play for.

The intervention by Europe’s lead data protection supervisor calling for a ban on targeted ads is a powerful pre-emptive push against attempts to water down legislative protections for consumer interests.

The Commission had not gone so far in its proposal — but big tech lobbyists are certainly pushing in the opposite direction so the EDPS taking a strong line here looks important.

In his opinion on the DSA the EDPS writes that “additional safeguards” are needed to supplement risk mitigation measures proposed by the Commission — arguing that “certain activities in the context of online platforms present increasing risks not only for the rights of individuals, but for society as a whole”.

Online advertising, recommender systems and content moderation are the areas the EDPS is particularly concerned about.

“Given the multitude of risks associated with online targeted advertising, the EDPS urges the co-legislators to consider additional rules going beyond transparency,” he goes on. “Such measures should include a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking, as well as restrictions in relation to the categories of data that can be processed for targeting purposes and the categories of data that may be disclosed to advertisers or third parties to enable or facilitate targeted advertising.”

It’s the latest regional salvo aimed at mass-surveillance-based targeted ads after the European Parliament called for tighter rules back in October — when it suggested EU lawmakers should consider a phased in ban.

Again, though, the EDPS is going a bit further here in actually calling for one. (Facebook’s Nick Clegg will be clutching his pearls.)

More recently, the CEO of European publishing giant Axel Springer, a long time co-conspirator of adtech interests, went public with a (rather protectionist-flavored) rant about US-based data-mining tech platforms turning citizens into “the marionettes of capitalist monopolies” — calling for EU lawmakers to extend regional privacy rules by prohibiting platforms from storing personal data and using it for commercial gain at all.

Apple CEO, Tim Cook, also took to the virtual stage of a (usually) Brussels based conference last month to urge Europe to double down on enforcement of its flagship General Data Protection Regulation (GDPR).

In the speech Cook warned that the adtech ‘data complex’ is fuelling a social catastrophe by driving the spread of disinformation as it works to profit off of mass manipulation. He went on to urge lawmakers on both sides of the pond to “send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated”. So it’s not just European companies (and institutions) calling for pro-privacy reform of adtech.

The iPhone maker is preparing to introduce stricter limits on tracking on its smartphones by making apps ask users for permission to track, instead of just grabbing their data — a move that’s naturally raised the hackles of the adtech sector, which relies on mass surveillance to power ‘relevant’ ads.

Hence the adtech industry has resorted to crying ‘antitrust’ as a tactic to push competition regulators to block platform-level moves against its consentless surveillance. And on that front it’s notable that the EDPS’ opinion on the DMA, which proposes extra rules for intermediating platforms with the most market power, reiterates the vital links between competition, consumer protection and data protection law — saying these three are “inextricably linked policy areas in the context of the online platform economy”; and that there “should be a relationship of complementarity, not a relationship where one area replaces or enters into friction with another”.

Wiewiorówski also takes aim at recommender systems in his DSA opinion — saying these should not be based on profiling by default to ensure compliance with regional data protection rules (where privacy by design and default is supposed to be the legal default).

Here too he calls for additional measures to beef up the Commission’s legislative proposal — with the aim of “further promot[ing] transparency and user control”.

This is necessary because such systems have “significant impact”, the EDPS argues.

The role of content recommendation engines in driving Internet users towards hateful and extremist points of view has long been a subject of public scrutiny. Back in 2017, for example, UK parliamentarians grilled a number of tech companies on the topic — raising concerns that AI-driven tools, engineered to maximize platform profit by increasing user engagement, risked automating radicalization, causing damage not just to the individuals who become hooked on the hateful views the algorithms feed them but cascading knock-on harms for all of us as societal cohesion is eaten away in the name of keeping the eyeballs busy.

Yet years on little information is available on how such algorithmic recommender systems work because the private companies that operate and profit off these AIs shield the workings as proprietary business secrets.

The Commission’s DSA proposal takes aim at this sort of secrecy as a bar to accountability — with its push for transparency obligations. The proposed obligations (in the initial draft) include requirements for platforms to provide “meaningful” criteria used to target ads; and explain the “main parameters” of their recommender algorithms; as well as requirements to foreground user controls (including at least one “nonprofiling” option).

However the EDPS wants regional lawmakers to go further in the service of protecting individuals from exploitation (and society as a whole from the toxic byproducts that flow from an industry based on harvesting personal data to manipulate people).

On content moderation, Wiewiorówski’s opinion stresses that this should “take place in accordance with the rule of law”. Though the Commission draft has favored leaving it with platforms to interpret the law.

“Given the already endemic monitoring of individuals’ behaviour, particularly in the context of online platforms, the DSA should delineate when efforts to combat ‘illegal content’ legitimise the use of automated means to detect, identify and address illegal content,” he writes, in what looks like a tacit recognition of recent CJEU jurisprudence in this area.

“Profiling for purposes of content moderation should be prohibited unless the provider can demonstrate that such measures are strictly necessary to address the systemic risks explicitly identified by the DSA,” he adds.

The EDPS has also suggested minimum interoperability requirements for very large platforms, and for those designated as ‘gatekeepers’ (under the DMA), and urges lawmakers to work to promote the development of technical standards to help with this at the European level.

On the DMA, he also urges amendments to ensure the proposal “complements the GDPR effectively”, as he puts it, calling for “increasing protection for the fundamental rights and freedoms of the persons concerned, and avoiding frictions with current data protection rules”.

Among the EDPS’ specific recommendations are: That the DMA makes it clear that gatekeeper platforms must provide users with easier and more accessible consent management; clarification to the scope of data portability envisaged in the draft; and rewording of a provision that requires gatekeepers to provide other businesses with access to aggregated user data — again with an eye on ensuring “full consistency with the GDPR”.

The opinion also raises the issue of the need for “effective anonymisation” — with the EDPS calling for “re-identification tests when sharing query, click and view data in relation to free and paid search generated by end users on online search engines of the gatekeeper”.

ePrivacy reform emerges from stasis

Wiewiorówski’s contributions to shaping incoming platform regulations come on the same day that the European Council has finally reached agreement on its negotiating position for a long-delayed EU reform effort around existing ePrivacy rules.

In a press release announcing the development, the Commission writes that Member States agreed on a negotiating mandate for revised rules on the protection of privacy and confidentiality in the use of electronic communications services.

“These updated ‘ePrivacy’ rules will define cases in which service providers are allowed to process electronic communications data or have access to data stored on end-users’ devices,” it writes, adding: “Today’s agreement allows the Portuguese presidency to start talks with the European Parliament on the final text.”

Reform of the ePrivacy directive has been stalled for years as conflicting interests locked horns — putting paid to the (prior) Commission’s hopes that the whole effort could be done and dusted in 2018. (The original ePrivacy reform proposal came out in January 2017; four years later the Council has finally settled on its negotiating mandate.)

The fact that the GDPR was passed first appears to have upped the stakes for data-hungry ePrivacy lobbyists — in both the adtech and telco space (the latter having a keen interest in removing existing regulatory barriers on comms data in order to exploit the vast troves of user data which Internet giants running rival messaging and VoIP services have long been able to mine).

There’s a concerted effort to try to use ePrivacy to undo consumer protections baked into GDPR — including attempts to water down protections provided for sensitive personal data. So the stage is set for an ugly rights battle as negotiations kick off with the European Parliament.

Metadata and cookie consent rules are also bound up with ePrivacy so there’s all sorts of messy and contested issues on the table here.

Digital rights advocacy group Access Now summed up the ePrivacy development by slamming the Council for “hugely” missing the mark.

“The reform is supposed to strengthen privacy rights in the EU [but] States poked so many holes into the proposal that it now looks like French Gruyère,” said Estelle Massé, senior policy analyst at Access Now, in a statement. “The text adopted today is below par when compared to the Parliament’s text and previous versions of government positions. We lost forward-looking provisions for the protection of privacy while several surveillance measures have been added.”

The group said it will be pushing to restore requirements for service providers to protect online users’ privacy by default and for the establishment of clear rules against online tracking beyond cookies, among other policy preferences.

The Council, meanwhile, appears to be advocating for a highly diluted (and so probably useless) flavor of ‘do not track’ — by suggesting users should be able to give consent to the use of “certain types of cookies by whitelisting one or several providers in their browser settings”, per the Commission.

“Software providers will be encouraged to make it easy for users to set up and amend whitelists on their browsers and withdraw consent at any moment,” it adds in its press release.

Clearly the devil will be in the detail of the Council’s position there. (The European Parliament has, by contrast, previously clearly endorsed a “legally binding and enforceable” Do Not Track mechanism for ePrivacy so, again, the stage is set for clashes.)

Encryption is another likely bone of ePrivacy contention.

As security and privacy researcher Dr Lukasz Olejnik noted back in mid-2017, the parliament strongly backed end-to-end encryption as a means of protecting the confidentiality of comms data — saying then that Member States should not impose any obligations on service providers to weaken strong encryption.

So it’s notable that the Council does not have much to say about e2e encryption — at least in the PR version of its public position. (A line in the release that runs: “As a main rule, electronic communications data will be confidential. Any interference, including listening to, monitoring and processing of data by anyone other than the end-user will be prohibited, except when permitted by the ePrivacy regulation” is hardly reassuring, either.)

It certainly looks like a worrying omission given recent efforts at the Council level to advocate for ‘lawful’ access to encrypted data. Digital and human rights groups will be buckling up for a fight.
