Europe’s cookie consent reckoning is coming

Cookie pop-ups getting you down? Complaints that the web is ‘unusable’ in Europe because of frustrating and confusing ‘data choices’ notifications that get in the way of what you’re trying to do online certainly aren’t hard to find.

What is hard to find is the ‘reject all’ button that lets you opt out of non-essential cookies which power unpopular stuff like creepy ads. Yet the law says an opt-out should be clearly offered. So people who complain that EU ‘regulatory bureaucracy’ is the problem are taking aim at the wrong target.

EU law on cookie consent is clear: Web users should be offered a simple, free choice — to accept or reject.

The problem is that most websites simply aren’t compliant. They choose to make a mockery of the law by offering a skewed choice: Typically a super simple opt-in (to hand them all your data) vs a highly confusing, frustrating, tedious opt-out (and sometimes even no reject option at all).

Make no mistake: This is ignoring the law by design. Sites are choosing to try to wear people down so they can keep grabbing their data by only offering the most cynically asymmetrical ‘choice’ possible.

However, since that’s not how cookie consent is supposed to work under EU law, sites that do this are opening themselves up to large fines under the General Data Protection Regulation (GDPR) and/or the ePrivacy Directive for flouting the rules.

See, for example, these two whopping fines handed to Google and Amazon in France at the back end of last year for dropping tracking cookies without consent…

While those fines were certainly head-turning, we haven’t generally seen much EU enforcement on cookie consent — yet.

This is because data protection agencies have mostly taken a softly-softly approach to bringing sites into compliance. But there are signs enforcement is going to get a lot tougher. For one thing, DPAs have published detailed guidance on what proper cookie compliance looks like — so there are zero excuses for getting it wrong.

Some agencies had also been offering compliance grace periods to allow companies time to make the necessary changes to their cookie consent flows. But it’s now a full three years since the EU’s flagship data protection regime (GDPR) came into application. So, again, there’s no valid excuse to still have a horribly cynical cookie banner. It just means a site is trying its luck by breaking the law.

There is another reason to expect cookie consent enforcement to dial up soon, too: European privacy group noyb is today kicking off a major campaign to clean up the trashfire of non-compliance — with a plan to file up to 10,000 complaints against offenders over the course of this year. And as part of this action it’s offering freebie guidance for offenders to come into compliance.

Today it’s announcing the first batch of 560 complaints already filed against sites, large and small, located across Europe (33 countries are covered). noyb said the complaints target companies that range from large players like Google and Twitter to local pages “that have relevant visitor numbers”.

“A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button,” said noyb chair and long-time EU privacy campaigner, Max Schrems, in a statement.

“Instead of giving a simple yes or no option, companies use every trick in the book to manipulate users. We have identified more than fifteen common abuses. The most common issue is that there is simply no ‘reject’ button on the initial page,” he added. “We focus on popular pages in Europe. We estimate that this project can easily reach 10,000 complaints. As we are funded by donations, we provide companies a free and easy settlement option — contrary to law firms. We hope most complaints will quickly be settled and we can soon see banners become more and more privacy friendly.”

To scale its action, noyb developed a tool which automatically parses cookie consent flows to identify compliance problems (such as no opt-out being offered at the top layer; or confusing button coloring; or bogus ‘legitimate interest’ opt-ins, to name a few of the many chronicled offences); and which automatically generates a draft report that can be emailed to the offender once it’s been reviewed by a member of the not-for-profit’s legal staff.
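noyb hasn’t published the tool itself, but the shape of such a check is easy to sketch. The snippet below is a minimal illustration, assuming Node.js with the puppeteer package (the label heuristics and URL are invented for the example), of one test the group describes: whether the first layer of a consent banner offers a reject option at all.

```typescript
// Minimal sketch (not noyb's actual tool) of a top-layer consent check:
// load a page, read the labels of clickable elements on the initial view,
// and flag pages that show an 'accept' control with no 'reject' counterpart.
import puppeteer from "puppeteer";

async function checkTopLayerReject(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle2" });

  // Gather visible button/link text. Real CMPs often render inside iframes
  // or shadow DOM, which a production crawler would also need to handle.
  const labels = await page.$$eval(
    "button, a[role='button'], input[type='button']",
    (els) => els.map((el) => (el.textContent ?? "").trim().toLowerCase())
  );

  const hasAccept = labels.some((t) => /accept|agree|allow all/.test(t));
  const hasReject = labels.some((t) => /reject|decline|refuse/.test(t));

  if (hasAccept && !hasReject) {
    console.log(`${url}: possible violation, no reject option on the top layer`);
  } else {
    console.log(`${url}: reject option ${hasReject ? "present" : "not applicable"}`);
  }
  await browser.close();
}

checkTopLayerReject("https://example.com").catch(console.error);
```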

It’s an innovative, scalable approach to tackling systematically cynical cookie manipulation in a way that could really move the needle and clean up the mess of horrible cookie pop-ups.

noyb is even giving offenders a warning first — and a full month to mend their ways — before it will file an official complaint with their relevant DPA (which could lead to an eye-watering fine).

Its first batch of complaints is focused on the OneTrust consent management platform (CMP), one of the most popular template tools used in the region — and one which European privacy researchers have previously shown (cynically) provides its client base with ample options to set non-compliant choices like pre-checked boxes… Talk about taking the biscuit.

A noyb spokeswoman said it’s started with OneTrust because its tool is popular but confirmed the group will expand the action to cover other CMPs in the future.

The first batch of noyb’s cookie consent complaints reveals the rotten depth of dark patterns being deployed — with 81% of the 500+ pages not offering a reject option on the initial page (meaning users have to dig into sub-menus to try to find it); and 73% using “deceptive colors and contrasts” to try to trick users into clicking the ‘accept’ option.

noyb’s assessment of this batch also found that a full 90% did not provide a way to easily withdraw consent as the law requires.

[Image: Cookie compliance problems found in the first batch of sites facing complaints (Image credit: noyb)]

It’s a snapshot of truly massive enforcement failure. But dodgy cookie consents are now operating on borrowed time.

Asked if it was able to work out how prevalent cookie abuse might be across the EU based on the sites it crawled, noyb’s spokeswoman said it was difficult to determine, owing to technical difficulties encountered through its process, but she said an initial intake of 5,000 websites was whittled down to 3,600 sites to focus on. And of those it was able to determine that 3,300 violated the GDPR.

That still left 300 as either having technical issues or no violations — but, again, the vast majority (over 90%) were found to be in violation. And with so much rule-breaking going on, fixing the ‘bogus consent’ problem really does require a systematic approach — so noyb’s use of automation tech is very fitting.

More innovation is also on the way from the not-for-profit — which told us it’s working on an automated system that will allow Europeans to “signal their privacy choices in the background, without annoying cookie banners”.

At the time of writing it couldn’t provide us with more details on how that will work (presumably it will be some kind of browser plug-in) but said it will be publishing more details “in the next weeks” — so hopefully we’ll learn more soon.

A browser plug-in that can automatically detect and select the ‘reject all’ button (even if only from a subset of the most prevalent CMPs) sounds like it could revive the ‘do not track’ dream. At the very least, it would be a powerful weapon to fight back against the scourge of dark patterns in cookie banners and kick non-compliant cookies to digital dust.
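To make the idea concrete, here is a hedged sketch of what such a plug-in’s content script might look like. noyb hasn’t published details, so the button-label patterns and structure below are pure assumption: the script watches the page for a consent banner and clicks the first control that looks like a ‘reject all’ option.

```typescript
// Hypothetical browser-extension content script: find and click a 'reject
// all' control in a cookie banner. Real CMPs vary widely (iframes, shadow
// DOM, localized labels), so these patterns are illustrative only.
const REJECT_PATTERNS = [/reject all/i, /refuse all/i, /only necessary/i, /decline/i];

function tryAutoReject(): boolean {
  const candidates = document.querySelectorAll<HTMLElement>("button, a[role='button']");
  for (const el of candidates) {
    const label = (el.textContent ?? "").trim();
    if (REJECT_PATTERNS.some((p) => p.test(label))) {
      el.click(); // make the opt-out choice on the user's behalf
      return true;
    }
  }
  return false;
}

// Banners often render after initial load, so watch for DOM changes too.
const observer = new MutationObserver(() => {
  if (tryAutoReject()) observer.disconnect();
});
observer.observe(document.documentElement, { childList: true, subtree: true });
tryAutoReject();
```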

 


Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefitting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics accusing commissioner Helen Dixon and her team of failing to investigate scores of complaints and, in the instances when it has opened probes, of taking years to conclude them — and then opting for weak enforcement in the end.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs for review in January — but a resolution has still yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ‘emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal basis for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent, it’s not meeting that bar either — because the changes are not clearly explained, nor are users offered a free choice to consent or not (which is the required standard under the GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

 


Disqus facing $3M fine in Norway for tracking users without consent

Disqus, a commenting plugin that’s used by a number of news websites and which can share user data for ad targeting purposes, has got into hot water in Norway for tracking users without their consent.

The local data protection agency said today it has notified the U.S.-based company of an intent to fine it €2.5 million (~$3M) for failures to comply with requirements in Europe’s General Data Protection Regulation (GDPR) on accountability, lawfulness and transparency.

Disqus’ parent, Zeta Global, has been contacted for comment.

Datatilsynet said it acted following a 2019 investigation by Norway’s national press — which found that default settings buried in the Disqus plug-in opted sites into sharing data on millions of users in markets including the U.S.

And while in most of Europe the company was found to have applied an opt-in to gather consent from users to be tracked — likely in order to avoid trouble with the GDPR — it appears to have been unaware that the regulation applies in Norway.

Norway is not a member of the European Union but is in the European Economic Area — which adopted the GDPR in July 2018, slightly after it came into force elsewhere in the EU. (Norway transposed the regulation into national law also in July 2018.)

The Norwegian DPA writes that Disqus’ unlawful data-sharing has “predominantly been an issue in Norway” — and says that seven websites are affected: NRK.no/ytring, P3.no, tv.2.no/broom, khrono.no, adressa.no, rights.no and document.no.

“Disqus has argued that their practices could be based on the legitimate interest balancing test as a lawful basis, despite the company being unaware that the GDPR applied to data subjects in Norway,” the DPA’s director-general, Bjørn Erik Thon, goes on.

“Based on our investigation so far, we believe that Disqus could not rely on legitimate interest as a legal basis for tracking across websites, services or devices, profiling and disclosure of personal data for marketing purposes, and that this type of tracking would require consent.”

“Our preliminary conclusion is that Disqus has processed personal data unlawfully. However, our investigation also discovered serious issues regarding transparency and accountability,” Thon added.

The DPA said the infringements are serious and have affected “several hundred thousands of individuals”, adding that the affected personal data “are highly private and may relate to minors or reveal political opinions”.

“The tracking, profiling and disclosure of data was invasive and nontransparent,” it added.

The DPA has given Disqus until May 31 to comment on the findings ahead of issuing a fine decision.

Publishers reminded of their responsibility

Datatilsynet has also fired a warning shot at local publishers who were using the Disqus platform — pointing out that website owners “are also responsible under the GDPR for which third parties they allow on their websites”.

So, in other words, even if you didn’t know about a default data-sharing setting, that’s not an excuse — because it’s your legal responsibility to know what any code you put on your website is doing with user data.
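A practical starting point for publishers is to crawl their own pages and list the third-party hosts those pages contact. The sketch below (assuming Node.js with the puppeteer package; the URL is a placeholder) logs every non-first-party domain a page requests, each of which is a vendor potentially receiving visitor data.

```typescript
// Rough audit sketch: load a page and record which third-party hosts it
// contacts. Assumes Node.js with puppeteer installed; URL is a placeholder.
import puppeteer from "puppeteer";

async function listThirdPartyHosts(pageUrl: string): Promise<void> {
  const firstParty = new URL(pageUrl).hostname;
  const isFirstParty = (host: string) =>
    host === firstParty || host.endsWith("." + firstParty);
  const thirdParties = new Set<string>();

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  page.on("request", (req) => {
    try {
      const host = new URL(req.url()).hostname;
      if (host && !isFirstParty(host)) thirdParties.add(host);
    } catch {
      // ignore non-HTTP requests (data:, about:, etc.)
    }
  });
  await page.goto(pageUrl, { waitUntil: "networkidle2" });
  await browser.close();

  console.log(`Third-party hosts contacted by ${pageUrl}:`);
  for (const host of thirdParties) console.log(`  ${host}`);
}

listThirdPartyHosts("https://example-publisher.no").catch(console.error);
```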

The DPA adds that “in the present case” it has focused the investigation on Disqus — providing publishers with an opportunity to get their houses in order ahead of any future checks it might make.

Norway’s DPA also has some admirably plain language to explain the “serious” problem of profiling people without their consent. “Hidden tracking and profiling is very invasive,” says Thon. “Without information that someone is using our personal data, we lose the opportunity to exercise our rights to access, and to object to the use of our personal data for marketing purposes.

“An aggravating circumstance is that disclosure of personal data for programmatic advertising entails a high risk that individuals will lose control over who processes their personal data.”

Zooming out, the issue of adtech industry tracking and GDPR compliance has become a major headache for DPAs across Europe — which have been repeatedly slammed for failing to enforce the law in this area since GDPR came into application in May 2018.

In the UK, for example (which transposed the GDPR before Brexit so still has an equivalent data protection framework for now), the ICO has for years been investigating GDPR complaints against real-time bidding (RTB) and its use of personal data to run behavioral ads — yet hasn’t issued a single fine or order, despite repeatedly warning the industry that it’s acting unlawfully.

The regulator is now being sued by complainants over its inaction.

Ireland’s DPC, meanwhile — which is the lead DPA for a swathe of adtech giants which site their regional HQ in the country — has a number of open GDPR investigations into adtech (including RTB), but it has also failed to issue any decisions in this area almost three years after the regulation began being applied.

Its lack of action on adtech complaints has contributed significantly to rising domestic (and international) pressure on its GDPR enforcement record more generally, including from the European Commission. (And it’s notable that the latter’s most recent legislative proposals in the digital arena include provisions that seek to avoid the risk of similar enforcement bottlenecks.)

The story on adtech and the GDPR looks a little different in Belgium, though, where the DPA appears to be inching toward a major slap-down of current adtech practices.

A preliminary report last year by its investigatory division called into question the legal standard of the consents being gathered via a flagship industry framework designed by IAB Europe. This so-called ‘Transparency and Consent’ framework (TCF) was found not to comply with the GDPR’s principles of transparency, fairness and accountability, or the lawfulness of processing.

A final decision is expected on that case this year — but if the DPA upholds the division’s findings it could deal a massive blow to the behavioral ad industry’s ability to track and target Europeans.

Studies suggest Internet users in Europe would overwhelmingly choose not to be tracked if they were actually offered the GDPR standard of a specific, clear, informed and free choice, i.e. without any loopholes or manipulative dark patterns.


Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into application in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
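For anyone who would rather script the check, haveibeenpwned also exposes an API. The sketch below assumes Node.js 18+ (for the global fetch) and a haveibeenpwned v3 API key; it queries by account identifier (typically an email address), and phone-number lookups for this particular leak may behave differently from email lookups.

```typescript
// Minimal sketch: query the haveibeenpwned v3 API for breaches tied to an
// account. Requires an API key from the service; a 404 means 'not found'.
async function checkBreached(account: string, apiKey: string): Promise<void> {
  const res = await fetch(
    `https://haveibeenpwned.com/api/v3/breachedaccount/${encodeURIComponent(account)}`,
    { headers: { "hibp-api-key": apiKey, "user-agent": "breach-check-sketch" } }
  );
  if (res.status === 404) {
    console.log(`${account}: no known breaches`);
  } else if (res.ok) {
    const breaches: Array<{ Name: string }> = await res.json();
    console.log(`${account}: found in ${breaches.map((b) => b.Name).join(", ")}`);
  } else {
    throw new Error(`HIBP request failed with status ${res.status}`);
  }
}

checkBreached("user@example.com", "YOUR-HIBP-API-KEY").catch(console.error);
```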

Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered and claims to have fixed by September 2019 — the flaw that led to the present leak of 533M+ accounts — suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related damages — with a number of other mass actions announced last year.

In the case of DRI, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes that making tech giants pay compensation to users whose privacy rights have been violated is the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.


Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means.”

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protection for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However the judgement, which was handed down on February 24, was issued by default — and Uber says it was not aware of the case until last week, claiming that was why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed.”

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE denied that correct procedure was not followed but welcomed the opportunity for Uber to respond to questions over how its driver ID systems operate in court, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human “anti-fraud” teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default judgement “robo-firing” challenge specifically targets Uber’s Hybrid Real-Time ID System — a system that incorporates facial recognition checks and which labor activists recently found misidentifying drivers in a number of instances.

It also pointed to a separate development this week in the U.K., where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of one of the drivers — a licence revoked after Uber routinely notified TfL of a dismissal (also triggered by Uber’s real-time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the U.K. capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms.”

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent U.K. Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will mount the same defence against the “robo-firing” charge as it used in the earlier case — when it argued its anti-fraud systems do not equate to automated decision making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 


Ireland opens GDPR investigation into Facebook leak

Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported earlier this month.

Here’s the Irish Data Protection Commission’s statement:

“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.

The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.

Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”

Facebook has been contacted for comment.

The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner Didier Reynders tweeted Monday that he had spoken with Helen Dixon about the Facebook data leak.

“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.

A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.

“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”

“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.

The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.

Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failure — Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.

Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.

Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC.

Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.

The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.

On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.

He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.

“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.

Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by direct dealing only with its lead EU data supervisor in Ireland.

A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. A lack of progress on unblocking GDPR bottlenecks is thus a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one, as EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.

It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants — though getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.

And in the meanwhile the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform, which gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has had fewer than 100 likes).

The publication said this data appears to be separate to the 533M+ scraped dataset — after it ran checks against the larger dataset via the breach advice site, haveibeenpwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot and he did not find any matches.

We contacted Facebook about the source of this leaked data and will update this report with any response.

In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body comprised of representatives from Member State data protection agencies which works to ensure a consistent application of the GDPR.

However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.

“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.

But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.

 


Facebook’s tardy disclosure of breach timing raises GDPR compliance questions

Whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.

While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.

That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.

Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.

The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.

Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear, writing that it “seems to comprise the original 2018 (pre-GDPR) dataset” — referring to an earlier breach incident Facebook disclosed in 2018, related to a vulnerability in its phone lookup functionality that it said occurred between June 2017 and April 2018 — but adding that the dataset looked to have been “combined with additional records, which may be from a later period”.

Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.

Another new detail that emerged in Facebook’s blog post yesterday was the fact users’ data was scraped not via the aforementioned phone lookup vulnerability — but via another method altogether: A contact importer tool vulnerability.

This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.

In this way a spammer, for example, could upload a database of potential phone numbers and link them not only to names but to other data like birth date, email address and location — all the better to phish you with.
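Mechanically, a contact importer is a bulk set-membership lookup: the client uploads contact identifiers and the service returns the matching accounts. The hypothetical sketch below (invented names, not Facebook’s actual API) shows why leaving such an endpoint unthrottled invites abuse: the server cannot distinguish a genuine address book from a generated range of phone numbers without per-client volume limits and other abuse checks.

```typescript
// Hypothetical contact-importer lookup (names invented for illustration,
// not Facebook's actual API). The core operation is a set-membership check,
// which is why rate limiting and abuse detection are essential: without
// them, the "contacts" can be an enumerated range of phone numbers.
type Profile = { id: string; name: string };

const accountsByPhone = new Map<string, Profile>([
  ["+15551230001", { id: "u1", name: "Alice Example" }],
]);

function matchContacts(uploadedPhones: string[]): Profile[] {
  const matches: Profile[] = [];
  for (const phone of uploadedPhones) {
    const profile = accountsByPhone.get(phone);
    if (profile) matches.push(profile); // reveals which numbers map to accounts
  }
  return matches;
}

// A legitimate client uploads a real address book; an abuser uploads a
// generated range. The lookup itself looks identical either way.
console.log(matchContacts(["+15551230000", "+15551230001", "+15551230002"]));
```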

In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely in the period of GDPR being active.

As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).

Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.

Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.

Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.

And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even though few people regularly change their mobile numbers, email addresses, full names and biographical information — and no one (legally) gets a new birth date…)

Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.

The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.

This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject — or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.

Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.

Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.

In short, it’s the Cambridge Analytica scandal all over again.

Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)

We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.

Then it told us it would not be commenting on its communications with regulators.

Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks of their data being breached, such as fraud and ID theft.

Yesterday Facebook also said it has no plans to notify users.

Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.

 


Answers being sought from Facebook over latest data breach

Facebook’s lead data protection regulator in the European Union is seeking answers from the tech giant over a major data breach reported on over the weekend.

The breach was reported by Business Insider on Saturday, which said personal data (including email addresses and mobile phone numbers) of more than 500M Facebook accounts had been posted to a low-level hacking forum — making the personal information on hundreds of millions of Facebook users’ accounts freely available.

“The exposed data includes the personal information of over 533M Facebook users from 106 countries, including over 32M records on users in the US, 11M on users in the UK, and 6M on users in India,” Business Insider said, noting that the dump includes phone numbers, Facebook IDs, full names, locations, birthdates, bios, and some email addresses.

Facebook responded to the report of the data dump by saying it related to a vulnerability in its platform it had “found and fixed” in August 2019 — dubbing the info “old data” which it also claimed had been reported on in 2019. However, as security experts were quick to point out, most people don’t change their mobile phone number often — so Facebook’s knee-jerk reaction to downplay the breach looks like an ill-considered attempt to deflect blame.

It’s also not clear whether the data is all ‘old’, as Facebook’s initial response suggests.

There’s plenty of reasons for Facebook to try to downplay yet another data scandal. Not least because, under European Union data protection rules, there are stiff penalties for companies that fail to promptly report significant breaches to relevant authorities. And indeed for breaches themselves — as the bloc’s General Data Protection Regulation (GDPR) bakes in an expectation of security by design and default.

By pushing the claim that the leaked data is “old” Facebook may be hoping to peddle the idea that it predates the GDPR coming into application (in May 2018).

However the Irish Data Protection Commission (DPC), Facebook’s lead data supervisor in the EU, told TechCrunch that it’s not abundantly clear whether that’s the case at this point.

“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset and combined with additional records, which may be from a later period,” the DPC’s deputy commissioner, Graham Doyle said in a statement.

“A significant number of the users are EU users. Much of the data appears to [have] been data scraped some time ago from Facebook public profiles,” he also said.

“Previous datasets were published in 2019 and 2018 relating to a large-scale scraping of the Facebook website which at the time Facebook advised occurred between June 2017 and April 2018 when Facebook closed off a vulnerability in its phone lookup functionality. Because the scraping took place prior to GDPR, Facebook chose not to notify this as a personal data breach under GDPR.”

Doyle said the regulator sought to establish “the full facts” about the breach from Facebook over the weekend and is “continuing to do so” — making it clear that there’s an ongoing lack of clarity on the issue, despite the breach itself being claimed as “old” by Facebook.

The DPC also made it clear that it did not receive any proactive communication from Facebook on the issue — despite the GDPR putting the onus on companies to proactively inform regulators about significant data protection issues. Rather the regulator had to approach Facebook — using a number of channels to try to obtain answers from the tech giant.

Through this approach the DPC said it learnt Facebook believes the information was scraped prior to the changes it made to its platform in 2018 and 2019 in light of vulnerabilities identified in the wake of the Cambridge Analytica data misuse scandal.

A huge database of Facebook phone numbers was found unprotected online back in September 2019.

Facebook had also earlier admitted to a vulnerability with a search tool it offered — revealing in April 2018 that somewhere between 1BN and 2BN users had had their public Facebook information scraped via a feature which allowed people to look up users by inputting a phone number or email — which is one potential source for the cache of personal data.

Last year Facebook also filed a lawsuit against two companies it accused of engaging in an international data scraping operation.

But the fallout from its poor security design choices continues to dog Facebook years after its ‘fix’.

More importantly, the fallout from the massive personal data spill continues to affect Facebook users whose information is now being openly offered for download on the Internet — opening them up to the risk of spam and phishing attacks and other forms of social engineering (such as attempted identity theft).

There are still more questions than answers about how this “old” cache of Facebook data came to be published online for free on a hacker forum.

The DPC said it was told by Facebook that “the data at issue appears to have been collated by third parties and potentially stems from multiple sources”.

The company also claimed the matter “requires extensive investigation to establish its provenance with a level of confidence sufficient to provide your Office and our users with additional information” — which is a long way of suggesting that Facebook has no idea either.

“Facebook assures the DPC it is giving highest priority to providing firm answers to the DPC,” Doyle also said. “A percentage of the records released on the hacker website contain phone numbers and email address of users.

“Risks arise for users who may be spammed for marketing purposes but equally users need to be vigilant in relation to any services they use that require authentication using a person’s phone number or email address in case third parties are attempting to gain access.”

“The DPC will communicate further facts as it receives information from Facebook,” he added.

At the time of writing Facebook had not responded to a request for comment about the breach.

Facebook users who are concerned whether their information is in the dump can run a search for their phone number or email address via the data breach advice site, haveibeenpwned.

According to haveibeenpwned’s Troy Hunt, this latest Facebook data dump contains far more mobile phone numbers than email addresses.

He writes that he was sent the data a few weeks ago — initially getting 370M records and later “the larger corpus which is now in very broad circulation”.

“A lot of it is the same, but a lot of it is also different,” Hunt also notes, adding: “There is not one clear source of this data.”


#computer-security, #data-breach, #data-security, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #social-media, #tc, #troy-hunt, #united-kingdom


Competition challenge to Facebook’s ‘superprofiling’ of users sparks referral to Europe’s top court

A German court that’s considering Facebook’s appeal against a pioneering pro-privacy order by the country’s competition authority to stop combining user data without consent has said it will refer questions to Europe’s top court.

In a press release today the Düsseldorf court writes [translated by Google]: “…the Senate has come to the conclusion that a decision on the Facebook complaints can only be made after referring to the Court of Justice of the European Union (ECJ).

“The question of whether Facebook is abusing its dominant position as a provider on the German market for social networks because it collects and uses the data of its users in violation of the GDPR cannot be decided without referring to the ECJ. Because the ECJ is responsible for the interpretation of European law.”

The ‘exploitative abuse’ case brought by the Bundeskartellamt (Federal Cartel Office, or FCO) links Facebook’s ability to gather data on users of its products from across the web, via third party sites (where it deploys plug-ins and tracking pixels), and across its own suite of products (Facebook, Instagram, WhatsApp, Oculus), to its market power — asserting this data-gathering is not legal under EU privacy law as users are not offered a choice.

The associated competition contention, therefore, is that inappropriate contractual terms allow Facebook to build a unique database for each individual user and unfairly gain market power over rivals who don’t have such broad and deep reach into users’ personal data.

The FCO’s case against Facebook is seen as highly innovative as it combines the (usually) separate (and even conflicting) tracks of competition and privacy law — offering the tantalizing prospect, were the order to actually get enforced, of a structural separation of Facebook’s business empire without having to order a break-up of its various business units.

However enforcement at this point — some five years after the FCO started investigating Facebook’s data practices in March 2016 — is still a big if.

Soon after the FCO’s February 2019 order to stop combining user data, Facebook succeeded in blocking the order via a court appeal in August 2019.

But then last summer Germany’s federal court unblocked the ‘superprofiling’ case — reviving the FCO’s challenge to the tech giant’s data-harvesting-by-default.

The latest development means another long wait to see whether competition law innovation can achieve what the EU’s privacy regulators have so far failed to do — with multiple GDPR challenges against Facebook still sitting undecided on the desk of the Irish Data Protection Commission.

Albeit, it’s fair to say that neither route looks capable of ‘moving fast and breaking’ platform power at this point.

In its opinion the Düsseldorf court does appear to raise questions over the level of Facebook’s data collection, suggesting the company could avoid antitrust concerns by offering users a choice to base profiling on only the data they upload themselves rather than on a wider range of data sources, and querying its use of Instagram and Oculus data.

But it also found fault with the FCO’s approach — saying Facebook’s US and Irish business entities were not granted a fair hearing before the order against its German sister company was issued, among other procedural quibbles.

Referrals to the EU’s Court of Justice can take years to return a final interpretation.

In this case the ECJ will likely be asked to consider whether the FCO has exceeded its remit, although the exact questions being referred by the court have not been confirmed — with a written reference set to be issued in the next few weeks, per its press release.

In a statement responding to the court’s announcement today, a Facebook spokesperson said:

“Today, the Düsseldorf Court has expressed doubts as to the legality of the Bundeskartellamt’s order and decided to refer questions to the Court of Justice of the European Union. We believe that the Bundeskartellamt’s order also violates European law.”

#competition-law, #europe, #facebook, #gdpr, #lawsuit, #privacy


Google isn’t testing FLoCs in Europe yet

Early this month Google quietly began trials of ‘Privacy Sandbox’: Its planned replacement adtech for tracking cookies, as it works toward phasing out support for third party cookies in the Chrome browser — testing a system to reconfigure the dominant web architecture by replacing individual ad targeting with ads that target groups of users (aka Federated Learning of Cohorts, or FLoCs), and which — it loudly contended — will still generate a fat upside for advertisers.

There are a number of gigantic questions about this plan. Not least whether targeting groups of people who are non-transparently stuck into algorithmically computed interest-based buckets based on their browsing history is going to reduce the harms that have come to be widely associated with behavioral advertising.

If your concern is online ads which discriminate against protected groups or seek to exploit vulnerable people (e.g. those with a gambling addiction), FLoCs may very well just serve up more of the abusive same. The EFF has, for example, called FLoCs a “terrible idea”, warning the system may amplify problems like discrimination and predatory targeting.

Advertisers also query whether FLoCs will really generate like-for-like revenue, as Google claims.

Competition concerns are also closely dogging Google’s Privacy Sandbox, which is under investigation by UK antitrust regulators — and has drawn scrutiny from the US Department of Justice too, as Reuters reported recently.

Adtech players complain the shift will merely increase Google’s gatekeeper power over them: blocking their access to web users’ data even as Google continues to track its own users — leveraging that first-party data alongside a new moat that, they claim, will keep them in the dark about what individuals are doing online. (Though whether it will actually do that is not at all clear.)

Antitrust is of course a convenient argument for the adtech industry to use to strategically counter the prospect of privacy protections for individuals. But competition regulators on both sides of the pond are concerned enough over the power dynamics of Google ending support for tracking cookies that they’re taking a closer look.

And then there’s the question of privacy itself — which obviously merits close scrutiny too.

Google’s sales pitch for the ‘Privacy Sandbox’ is evident in its choice of brand name — which suggests it’s keen to push the perception of a technology that protects privacy.

This is Google’s response to the rising store of value being placed on protecting personal data — after years of data breach and data misuse scandals.

A terrible reputation now dogs the tracking industry (or the “data industrial complex”, as Apple likes to denounce it) — as a result of high-profile scandals like Kremlin-fuelled voter manipulation in the US but also just the demonstrable dislike web users have of being ad-stalked around the Internet. (Very evident in the ever-increasing use of tracker- and ad-blockers; and in the response of other web browsers, which have adopted a number of anti-tracking measures years ahead of Google-owned Chrome.)

Given Google’s hunger for its Privacy Sandbox to be perceived as pro-privacy it’s perhaps no small irony, then, that it’s not actually running these origin tests of FLoCs in Europe — where the world’s most stringent and comprehensive online privacy laws apply.

AdExchanger reported yesterday on comments made by a Google engineer during a meeting of the Improving Web Advertising Business Group at the World Wide Web Consortium on Tuesday. “For countries in Europe, we will not be turning on origin trials [of FLoC] for users in EEA [European Economic Area] countries,” Michael Kleber is reported to have said.

TechCrunch had confirmation from Google in early March that this is the case. “Initially, we plan to begin origin trials in the US and plan to carry this out internationally (including in the UK / EEA) at a later date,” a spokesman told us earlier this month.

“As we’ve shared, we are in active discussions with independent authorities — including privacy regulators and the UK’s Competition and Markets Authority — as with other matters they are critical to identifying and shaping the best approach for us, for online privacy, for the industry and world as a whole,” he added then.

At issue here is the fact that Google has chosen to auto-enrol sites in the FLoC origin trials — rather than getting manual sign ups which would have offered a path for it to implement a consent flow.

And lack of consent to process personal data seems to be the legal area of concern for conducting such online tests in Europe where legislation like the ePrivacy Directive (which covers tracking cookies) and the more recent General Data Protection Regulation (GDPR), which further strengthens requirements for consent as a legal basis, both apply.

Asked how consent is being handled for the trials Google’s spokesman told us that some controls will be coming in April: “With the Chrome 90 release in April, we’ll be releasing the first controls for the Privacy Sandbox (first, a simple on/off), and we plan to expand on these controls in future Chrome releases, as more proposals reach the origin trial stage, and we receive more feedback from end users and industry.”

It’s not clear why Google is auto-enrolling sites into the trial rather than asking for opt-ins — beyond the obvious point that such a step would add friction and introduce another layer of complexity by limiting the size of the test pool to only those who would consent. Google presumably doesn’t want to be so straitjacketed during product development.

“During the origin trial, we are defaulting to supporting all sites that already contain ads to determine what FLoC a profile is assigned to,” its spokesman told us when we asked why it’s auto-enrolling sites. “Once FLoC’s final proposal is implemented, we expect the FLoC calculation will only draw on sites that opt into participating.”

He also specified that any user who has blocked third-party cookies won’t be included in the Origin Trial — so the trial is not a full ‘free-for-all’, even in the US.

There are reasons for Google to tread carefully. Its Privacy Sandbox tests were quickly shown to be leaking data about incognito browsing mode — revealing a piece of information that could be used to aid user fingerprinting. Which obviously isn’t good for privacy.

“If FloC is unavailable in incognito mode by design then this allows the detection of users browsing in private browsing mode,” wrote security and privacy researcher, Dr Lukasz Olejnik, in an initial privacy analysis of the Sandbox this month in which he discussed the implications of the bug.

“While indeed, the private data about the FloC ID is not provided (and for a good reason), this is still an information leak,” he went on. “Apparently it is a design bug because the behavior seems to be foreseen to the feature authors. It allows differentiating between incognito and normal web browsing modes. Such behavior should be avoided.”

A Privacy Sandbox test that enables a new form of browser fingerprinting is not exactly ‘on message’ with the claimed boost for user privacy. But Google is presumably hoping to iron out such problems via testing and as development of the system continues.

(Indeed, Google’s spokesman also told us that “countering fingerprinting is an important goal of the Privacy Sandbox”, adding: “The group is developing technology to protect people from opaque or hidden techniques that share data about individual users and allow individuals to be tracked in a covert manner. One of these techniques, for example, involves using a device’s IP address to try and identify someone without their knowledge or ability to opt out.”)

At the same time it’s not clear whether or not Google needs to obtain user consent to run the tests legally in Europe. Other legal bases do exist — although it would take careful legal analysis to ascertain whether they could be used. But it’s certainly interesting that Google has decided it doesn’t want to risk testing whether it can legally trial this tech in Europe without consent.

Likely relevant is the fact that the ePrivacy Directive is not like the harmonized GDPR — which funnels cross border complaints via a lead data supervisor, shrinking regulatory exposure at least in the first instance.

Any EU DPA may have competence to investigate matters related to ePrivacy in their national markets. To wit: At the end of last year France’s CNIL skewered Google with a $120M fine related to dropping tracking cookies without consent — underlining the risks of getting EU law on consent wrong. And a privacy-related fine for Privacy Sandbox would be terrible PR. So Google may have calculated it’s simply less risky to wait.

Under EU law, certain types of personal data are also considered highly sensitive (aka ‘special category data’) and require an even higher bar of explicit consent to process. Such data couldn’t be bundled into a site-level consent — but would require specific consent for each instance. So, in other words, there would be even more friction involved in testing with such data.

That may explain why Google plans to do regional testing later — if it can figure out how to avoid processing such sensitive data. (Relevant: Analysis of Google’s proposal suggests the final version intends to avoid processing sensitive data in the computation of the FLoC ID — to avoid exactly that scenario.)

If/when Google does implement Privacy Sandbox tests in Europe “later”, as it has said it will (having also professed itself “100% committed to the Privacy Sandbox in Europe”), it will presumably do so when it has added the aforementioned controls to Chrome — meaning it would be in a position to offer some kind of prompt asking users if they wish to turn the tech off (or, better still, on).

Though, again, it’s not clear how exactly this will be implemented — and whether a consent flow will be part of the tests.

Google has also not provided a timeline for when tests will start in Europe. Nor would it specify the other countries it’s running tests in besides the US when we asked about that.

At the time of writing it had not responded to a number of follow up questions either but we’ll update this report if we get more detail.

The (current) lack of regional tests raises questions about the suitability of Privacy Sandbox for European users — as the New York Times’ Robin Berjon has pointed out, noting via Twitter that “the market works differently”.

“Not doing origin tests is already a problem… but not even knowing if it could eventually have a legal basis on which to run seems like a strange position to take?” he also wrote.

Google is surely going to need to test FLoCs in Europe at some point. Because the alternative — implementing regionally untested adtech — is unlikely to be a strong sell to advertisers who are already crying foul over Privacy Sandbox on competition and revenue risk grounds.

Ireland’s Data Protection Commission (DPC), meanwhile — which, under GDPR, is Google’s lead data supervisor in the region — confirmed to us that Google has been consulting with it about the Privacy Sandbox plan.

“Google has been consulting the DPC on this matter and we were aware of the roll-out of the trial,” deputy commissioner Graham Doyle told us today. “As you are aware, this has not yet been rolled-out in the EU/EEA. If, and when, Google present us with detail plans, outlining their intention to start using this technology within the EU/EEA, we will examine all of the issues further at that point.”

The DPC has a number of investigations into Google’s business triggered by GDPR complaints — including a May 2019 probe into its adtech and a February 2020 investigation into its processing of users’ location data — all of which are ongoing.

But — in one legacy example of the risks of getting EU data protection compliance wrong — Google was fined $57M by France’s CNIL back in January 2019 (under GDPR as its EU users hadn’t yet come under the jurisdiction of Ireland’s DPC) for, in that case, not making it clear enough to Android users how it processes their personal information.

#advertising-tech, #data-protection, #eprivacy-directive, #eu, #europe, #flocs, #gdpr, #google, #privacy, #privacy-sandbox, #tc


France’s privacy watchdog probes Clubhouse after complaint and petition

Clubhouse, the buzzy but still invite-only social audio app that’s popular with the Silicon Valley technorati, is being investigated by France’s privacy watchdog.

The CNIL announced today it’s opened an investigation into Clubhouse following a complaint and after it got some initial responses back from Alpha Exploration Co., the U.S.-based company behind the app.

It also points to a petition that’s circulating in France with over 10,000 signatures — calling for regulatory intervention.

The regulator says it’s confirmed that Clubhouse’s owner is not established anywhere in the European Union — which means the app can be investigated by any EU DPA that receives a complaint or has its own concerns about EU citizens’ data.

Last month the Hamburg privacy regulator also raised concerns over Clubhouse, saying they’d asked the app for more information on how it protects the privacy of European users and their contacts.

In the EU, cross border data protection cases involving tech giants typically avoid this scenario as the General Data Protection Regulation (GDPR) includes a mechanism that funnels complaints via a lead data supervisor — aka the national agency where the business is established in the EU.

This ‘one-stop-shop’ (OSS) has already had the effect of slowing down GDPR enforcement against giants like Facebook, which have established their regional HQ in Ireland. But there is a further risk of a regulatory moat effect that benefits ‘big tech’ if the OSS is combined with swifter unilateral privacy enforcement against newcomers like Clubhouse (which currently fall outside the OSS).

France’s watchdog has certainly demonstrated a willingness to move fast and enforce the rules against tech giants like Google and Amazon when unencumbered by the OSS — recently issuing fines over cookie consent issues in excess of $160M, for example. It also hit Google with a GDPR fine of $57M in 2019 before the tech giant moved the jurisdiction of regional users to Ireland.

So there’s no reason why the CNIL won’t show similar alacrity in its probe of Clubhouse. (Although in its press note today it does write that European DPAs are “communicating with each other on this matter, in order to exchange information and ensure consistent application of the GDPR”.)

Privacy concerns that have been attached to Clubhouse include that it uploads users’ phone book contacts — using the harvested phone numbers to build a usage graph so it can display how many ‘friends’ a non-user has on the service at the point when the user is being asked to select which of their contacts to invite to the service.

The petition to CNIL also claims Clubhouse’s “secret database” of users’ contacts may be sold to third parties.

“For years, lawmakers have not dared to attack Facebook for sucking up our data. Our democracies are paying a heavy price today,” the authors of the petition also write. “Clubhouse hopes we haven’t learned anything from Facebook’s methods and that its questionable practices will go unnoticed. But the German privacy agency has already accused the company of violating EU law. Now we need regulators in other countries to follow suit and put pressure on Clubhouse.

“If thousands of you ask the CNIL to enforce the law, we can put an end to this blatant violation of our private lives. It is also an opportunity to send a strong message to the tech giants: our data is ours and no one else’s.”

In its privacy policy, Clubhouse‘s owner writes that the “Company does not sell your Personal Data” — but it does list a wide range of reasons why it may “share” user data with third parties, including for “advertising and marketing services”.

Clubhouse has been contacted for comment.

#clubhouse, #cnil, #data-protection, #eu, #europe, #gdpr, #privacy, #social, #social-audio


Dutch court rejects Uber drivers’ ‘robo-firing’ charge but tells Ola to explain algo-deductions

Uber has had a good result in the Netherlands, where its European business is headquartered: A court has rejected litigation alleging that it uses algorithms to terminate drivers.

The ride-hailing giant has also been largely successful in fending off wide-ranging requests for data from drivers wanting to obtain more of the personal data it holds on them.

A number of Uber drivers filed the suits last year with the support of the App Drivers & Couriers Union (ADCU) in part because they are seeking to port data held on them in Uber’s platform to a data trust (called Worker Info Exchange) that they want to set up, administered by a union, to further their ability to collectively bargain against the platform giant.

The court did not object to them seeking data, saying such a purpose does not stand in the way of exercising their personal data access rights, but it rejected most of their specific requests — at times saying they were too general or had not been sufficiently explained or must be balanced against other rights (such as passenger privacy).

The ruling hasn’t gone entirely Uber’s way, though, as the court ordered the tech giant to hand over a little more data to the litigating drivers than it has so far. While it rejected driver access to information including manual notes about them, tags and reports, Uber has been ordered to provide drivers with individual ratings given by riders on an anonymized basis — with the court giving it two months to comply.

In another win for Uber, the court did not find that its (automated) dispatch system results in a “legal or similarly significant effect” for drivers under EU law — and therefore has allowed that it be applied without additional human oversight.

The court also rejected a request by the applicants that data Uber does provide to them must be provided via a CSV file or API, finding that the PDF format Uber has provided is sufficient to comply with legal requirements.

In response to the judgements, an Uber spokesman sent us this statement:

“This is a crucial decision. The Court has confirmed Uber’s dispatch system does not equate to automated decision making, and that we provided drivers with the data they are entitled to. The Court also confirmed that Uber’s processes have meaningful human involvement. Safety is the number one priority on the Uber platform, so any account deactivation decision is taken extremely seriously with manual reviews by our specialist team.”

The ADCU said the litigation has established that drivers taking collective action to seek access to their data is not an abuse of data protection rights — and lauded the aspects of the judgement where Uber has been ordered to hand over more data.

It also said it sees potential grounds for appeal, saying it’s concerned that some aspects of the judgments unduly restrict the rights of drivers, which it said could interfere with the right of workers to access employment rights — “to the extent they are frustrated in their ability to validate the fare basis and compare earnings and operating costs”.

“We also feel the court has unduly put the burden of proof on workers to show they have been subject to automated decision making before they can demand transparency of such decision making,” it added in a press release. “Similarly, the court has required drivers to provide greater specificity on the personal data sought rather than placing the burden on firms like Uber and Ola to clearly explain what personal data is held and how it is processed.”

The two Court of Amsterdam judgements can be found here and here (both are in Dutch; we’ve used Google Translate for the sections quoted below).

Our earlier reports on the legal challenges can be found here and here.

The Amsterdam court has also ruled on similar litigation filed against Ola last year — ordering the India-based ride-hailing company to hand over a wider array of data than it currently does; and also saying it must explain the main criteria of a ‘penalties and deductions’ algorithm that can be applied to drivers’ earnings.

The judgement is available here (in Dutch). See below for more details on the Ola judgement.

Commenting in a statement, James Farrar, a former Uber driver who is now director of the aforementioned Worker Info Exchange, said: “This judgment is a giant leap forward in the struggle for workers to hold platform employers like Uber and Ola Cabs accountable for opaque and unfair automated management practices. Uber and Ola Cabs have been ordered to make transparent the basis for unfair dismissals, wage deductions and the use of surveillance systems such as Ola’s Guardian system and Uber’s Real Time ID system. The court completely rejected Uber & Ola’s arguments against the right of workers to collectively organize their data and establish a data trust with Worker Info Exchange as an abuse of data access rights.”

In an interesting (related) development in Spain, which we reported on yesterday, the government there has said it will legislate in a reform of the labor law aimed at delivery platforms that will require them to provide workers’ legal representatives with information on the rules of any algorithms that manage and assess them.

Court did not find Uber does ‘robo firings’

In one of the lawsuits, the applicants had argued that Uber had infringed their right not to be subject to automated decision-making when it terminated their driver accounts and also that it has not complied with its transparency obligations (within the meaning of GDPR Articles 13, 14 and 15).

Article 22 GDPR gives EU citizens the right not to be subject to a decision based solely on automated processing (including profiling) where the decision has legal or otherwise significant consequences for them. There must be meaningful human interaction in the decision-making process for it to not be considered solely automated processing.

Uber argued that it does not carry out automated terminations of drivers in the region and therefore that the law does not apply — telling the court that potential fraudulent activities are investigated by a specialized team of Uber employees (aka the ‘EMEA Operational Risk team’).

And while it said that the team makes use of software with which potential fraudulent activities can be detected, investigations are carried out by employees following internal protocols which require them to analyze potential fraud signals and the “facts and circumstances” to confirm or rule out the existence of fraud.

Uber said that if a consistent pattern of fraud is detected, a decision to terminate requires a unanimous decision from two employees of the Risk team. When the two employees do not agree, Uber says a third conducts an investigation — presumably to cast a deciding vote.

It provided the court with explanations for each of the terminations of the litigating applicants — and the court writes that Uber’s explanations of its decision-making process for terminations were not disputed. “In the absence of evidence to the contrary, the court will assume that the explanation provided by Uber is correct,” it wrote.

Interestingly, in the case of one of the applicants, Uber told the court they had been using (unidentified) software to manipulate the Uber Driver app in order to identify more expensive journeys by being able to view the passenger’s destination before accepting the ride — enabling them to cherry pick jobs, a practice that’s against Uber’s terms. Uber said the driver was warned that if they used the software again they would be terminated. But a few days later they did so — leading to another investigation and a termination.

However it’s worth noting that the activity in question dates back to 2018. And Uber has since changed how its service operates to provide drivers with information about the destination before they accept a ride — a change it flagged in response to a recent UK Supreme Court ruling that confirmed drivers who brought the challenge are workers, not self employed.

Some transparency issues were found

On the associated question of whether Uber had violated its transparency obligations to terminated drivers, the court found that in the cases of two of the four applicants Uber had done so (but not for the other two).

“Uber did not clarify which specific fraudulent acts resulted in their accounts being deactivated,” the court writes in the case of the two applicants who it found had not been provided with sufficient information related to their terminations. “Based on the information provided by Uber, they cannot check which personal data Uber used in the decision-making process that led to this decision. As a result, the decision to deactivate their accounts is insufficiently transparent and verifiable. As a result, Uber must provide [applicant 2] and [applicant 4] with access to their personal data pursuant to Article 15 of the GDPR insofar as they were the basis for the decision to deactivate their accounts, in such a way that they are able to verify the correctness and lawfulness of the processing of their personal data.”

The court dismissed Uber’s attempt to evade disclosure on the grounds that providing more information would give the drivers insight into its anti-fraud detection systems which it suggested could then be used to circumvent them, writing: “In this state of affairs, Uber’s interest in refusing access to the processed personal data of [applicant 2] and [applicant 4] cannot outweigh the right of [applicant 2] and [applicant 4] to access their personal data.”

Compensation claims related to the charges were rejected — including in the case of the two applicants who were not provided with sufficient data on their terminations, with the court saying that they had not provided “reasons for damage to their humanity or good name or damage to their person in any other way”.

The court has given Uber two months to provide the two applicants with personal data pertaining to their terminations. No penalty has been ordered.

“For the time being, the trust is justified that Uber will voluntarily comply with the order for inspection [of personal data] and will endeavor to provide the relevant personal data,” it adds.

No legal/significant effect from Uber’s algo-dispatch

The litigants’ data access case also sought to challenge Uber’s algorithmic management of drivers — through its use of an algorithmic batch matching system to allocate rides — arguing that, under EU law, the drivers had a right to information about automated decision making and profiling used by Uber to run the service in order to be able to assess impacts of that automated processing.

However the court did not find that automated decision-making “within the meaning of Article 22 GDPR” takes place in this instance, accepting Uber’s argument that “the automated allocation of available rides has no legal consequences and does not significantly affect the data subject”.

Again, the court found that the applicants had “insufficiently explained” their request.

From the judgement:

It has been established between the parties that Uber uses personal data to make automated decisions. This also follows from section 9 ‘Automated decision-making’ included in its privacy statement. However, this does not mean that there is an automated decision-making process as referred to in Article 22 GDPR. After all, this requires that there are also legal consequences or that the data subject is otherwise significantly affected. The request is only briefly explained on this point. The Applicants argue that Uber has not provided sufficient concrete information about its anti-fraud processes and has not demonstrated any meaningful human intervention. Unlike in the case with application number C / 13/692003 / HA RK 20/302 in which an order is also given today, the applicants did not explain that Uber concluded that they were guilty of fraud. The extent to which Uber has taken decisions about them based on automated decision-making is therefore insufficiently explained. Although it is obvious that the batched matching system and the upfront pricing system will have a certain influence on the performance of the agreement between Uber and the driver, it has not been found that there is a legal consequence or a significant effect, as referred to in the Guidelines. Since Article 15 paragraph 1 under h GDPR only applies to such decisions, the request under I (iv) is rejected.

Ola must hand over data and algo criteria

In this case the court ruled that Ola must provide applicants with a wider range of data than it is currently doing — including a ‘fraud probability profile’ it maintains on drivers and data within a ‘Guardian’ surveillance system it operates.

The court also found that algorithmic decisions Ola uses to make deductions from driver earnings do fall under Article 22 of the GDPR, as there is no significant human intervention while the discounts/fines themselves may have a significant effect on drivers.

On this it ordered Ola to provide applicants with information on how these algorithmic choices are made by communicating “the main assessment criteria and their role in the automated decision… so that [applicants] can understand the criteria on the basis of which the decisions were taken and they are able to check the correctness and lawfulness of the data processing”.

Ola has been contacted for comment.

#adcu, #algorithmic-accountability, #artificial-intelligence, #data-access, #europe, #gdpr, #lawsuit, #ola, #privacy, #tc, #uber


DataGrail snares $30M Series B to help deal with privacy regulations

DataGrail, a startup that helps customers understand where their data lives in order to help comply with a growing body of privacy regulations, announced a $30 million Series B today.

Felicis Ventures led the round with help from Basis Set Ventures, Operator Collective and previous investors. One of the interesting aspects of this round was the participation from several strategic investors including HubSpot, Okta and Next47, the venture firm backed by Siemens. The company has now raised over $39 million, according to Crunchbase data.

That investor interest could stem from the fact that DataGrail helps organizations find data by building connectors to popular applications and then helps ensure that they are in compliance with customer privacy regulations such as GDPR, CCPA and similar laws.

“DataGrail [is really] the first integrated solution with over 900 integrations (up from 180 in 2019) to different apps and infrastructure platforms that allow the product to detect when new apps or new infrastructure platforms are added, and then also perform automated data discovery across those applications,” company CEO and co-founder Daniel Barber explained to me. This helps users find customer data wherever it lives and enables them to comply with legal requirements to manage and protect that data.

Victoria Treyger, general partner at lead investor Felicis Ventures, says that one of the things that attracted her to DataGrail was that she had to help implement GDPR compliance at a previous venture and felt the pain first hand. She said that her firm tends to look for startups in large markets where the product or service being offered is a critical need, rather than an option, and she believes that DataGrail is an example of that.

“I really liked the fact that privacy management is such a hard problem, and it is not optional. As a business, you have to manage privacy requests, which you may do manually or you may do it with a solution like DataGrail,” Treyger told me.

HubSpot’s Andrew Lindsay, who is SVP of corporate and business development, says his company is both a customer and an investor because DataGrail is helping HubSpot customers navigate the complexity of privacy regulation. “DataGrail’s unique ecosystem approach, where they are integrating with key SaaS and business applications is an easy way for many of our joint customers to protect their customers’ privacy,” Lindsay said.

The company has 40 employees today with plans to grow to 90 or 100 by the end of this year. It’s worth noting that Treyger is joining the board, which already has three other women. That shows a commitment to gender diversity at the board level that is not typical for startups.

#data-privacy, #datagrail, #enterprise, #felicis-ventures, #funding, #gdpr, #privacy, #recent-funding, #startups, #tc


Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.

As part of the enforcement the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.

The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.

Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.

Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.

The IMY’s full decision can be found here (in Swedish).

“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.

The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place are not a reason to reduce a penalty fee so it’s not entirely clear why the police avoided a bigger fine.)

The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.

The IMY said it investigated the police’s use of the controversial technology following reports in local media.

Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.

European Union data protection law puts a high bar on the processing of special category data, such as biometrics.

Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.

Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.

The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.

However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).

It did not order deletion of the photos themselves. It also did not issue a pan-EU order banning the collection of any European resident’s photos as it could have done and as European privacy campaign group, noyb, had been pushing for.

noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and ask it to delete any data it has on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.

European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence — with draft legislation expected to be put forward this year although the Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).

Earlier this month Canadian privacy authorities ruled the controversial facial recognition company’s practices illegal — warning they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.

Clearview said it had stopped providing its tech to Canadian customers last summer.

It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.

Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.


#artificial-intelligence, #clearview-ai, #eu-data-protection-law, #europe, #facial-recognition, #gdpr, #privacy, #sweden, #tc


EU’s lead data supervisor for most of big tech is still using Lotus Notes

The lead data supervisor for a slew of tech giants in the European Union, including Apple, Facebook, Google, LinkedIn, TikTok and Twitter, is still relying on Lotus Notes to manage complaints and investigations lodged under the bloc’s flagship General Data Protection Regulation (GDPR), per freedom of information requests made by the Irish Council for Civil Liberties (ICCL).

Back in its 2016 annual report Ireland’s Data Protection Commission (DPC) stated that one of its main goals for GDPR (and ePrivacy) readiness included “implementation of a new website and case-management system” in time for the regulation coming into force in May 2018. However some five years later this ICT upgrade project is still a work in progress, responses to the ICCL’s FOIs show.

Project deadlines were repeatedly missed, per internal documents now in the public domain, while by October 2020 the cost of the DPC’s ICT upgrade had more than doubled vs an initial projection — ballooning to at least €615,121 (a figure that excludes staff time spent on the project since 2016; and also does not include the cost of maintaining the antiquated Lotus Notes system which is borne by the Irish government’s Department of Justice).

The revelation that the lead data supervisor for much of big tech in Europe is handling complaints using such ‘last-gen’ software not only looks highly embarrassing for the DPC but raises questions over the effectiveness of its senior management.

The DPC continues to face criticism over the slow pace of regulatory enforcement vis-a-vis big tech which, combined with the GDPR’s one-stop-shop mechanism, has led to a huge backlog of cases that the European Commission has conceded is a weakness of the regulation. So the revelation that it’s taking so long to get its own ITC in order will only fuel criticism that the regulator is not fit for purpose.

The wider issue here is the vast gulf in resources and technical expertise between tech giants — many of which rack up vast profits off of people’s data and can put those profits toward armies of in-house lawyers to shield them from the risk of regulatory intervention — and the tiny, under-resourced public sector agencies tasked with defending users’ rights without appropriately modern tools to help them do the job.

In Ireland’s case, though, the length of time involved in overhauling its internal ICT does throw the spotlight on management of resources. Not least because the DPC’s budget and headcount have been growing since around 2015, as more resources have been allocated to it to reflect the GDPR coming into application.

The ICCL is calling for the Irish government to consider hiring two additional commissioners — to supplement the current (sole) commissioner, Helen Dixon, who was appointed to the role back in 2014.

It notes that Irish law allows for the possibility of having three commissioners.

“The people who are supposed to make sure that Facebook and Google do not misuse the information that they have about each of us, are using a system so antiquated that one former staff member told me it is ‘like attempting to use an abacus to do payroll’,” Dr Johnny Ryan, an ICCL senior fellow, told TechCrunch.

“The DPC is not configured for its digital mission,” he added in a statement. “What we have discovered indicates that it cannot run critically important internal technology projects. How can it be expected to monitor what the world’s biggest tech firms do with our data? This raises serious questions not only for the DPC, but for the Irish Government. We have alerted the Irish Government of the strategic economic risk from failing to enforce the GDPR.”

Reached for comment, the DPC told us it has a “functional and fit-for-purpose” Case Management System which it said has been “optimised with new features over the last number of years (including with capability for the generation of statistics and management reports)”.

But it conceded the system is “dated” and “limited” in terms of how much it can be adapted for integration with a new DPC website and web forms and the IMI [information systems management] shared platform used between EU data protection authorities — given that it’s based on Lotus Notes technology. 

“Significant work in specifying the system and building its core modules has been completed,” deputy commissioner Graham Doyle said. “Some delays in delivery have occurred because of updates to specification of security and infrastructure elements. Some other elements have on demand from the DPC been slowed in order to allow for the resolution between EU DPAs of final intended processes such as those involved in the Article 60 cooperation and consistency mechanism under the GDPR.

“The EDPB [European Data Protection Board] is only now preparing internal guidance on the operationalisation of Article 60 and further on the dispute resolution mechanism under Article 65. These are key features of work between EU DPAs that require hand-offs between systems. In addition, the EU almost 3 years after it intended to has not yet adopted its new e-Privacy legislation. Further, the DPC alongside all other EU DPAs is learning how the procedural and operational aspects of the GDPR are to operate in fine detail and some of them remain to be settled.”

Doyle added that “progress continues” on the new Case Management System investment — saying it’s the DPC’s intention that “initial core modules” of the new system will be rolled out in Q2 2021.

To date, Ireland’s regulator has only issued one decision pertaining to a cross-border GDPR complaint: In December when it fined Twitter $550k over a security breach the company had publicly disclosed in January 2019.

Disagreement between Ireland and other EU DPAs over its initial enforcement proposal added months more to the decision process — and the DPC was finally forced to increase its suggested penalty by up to a few thousand euros following a majority vote.

The Twitter case was hardly smooth sailing but it actually represents a relatively rapid turnaround compared to the seven+ years involved in a separate (2013) complaint (aka Schrems II) — related to Facebook’s international data transfers which predates the GDPR.

With that complaint the DPC chose to go to court to raise concerns about the legality of the data transfer mechanism itself rather than acting on a specific complaint over Facebook’s use of Standard Contractual Clauses. A referral to the European Court of Justice followed and the EU’s highest court ended up torpedoing a flagship data transfer arrangement between the EU and the US.

Despite its legal challenge resulting in the EU-US Privacy Shield being struck down, the DPC still hasn’t pulled the plug on Facebook’s EU transfers. Although last September it did issue a preliminary suspension order — which Facebook immediately challenged (and blocked, temporarily) via judicial review.

Last year the DPC settled a counter judicial review of its processes, brought by the original complainant, agreeing to swiftly finalize the complaint — although a decision is still likely months out. But it should finally come this year.

The DPC defends itself against accusations of enforcement foot-dragging by saying it must follow due process to ensure its decisions stand up to legal challenge.

But as criticism of the unit continues to mount, revelations that its own flagship internal ICT upgrade is dragging on — some five years after it was stated as a DPC priority — will do nothing to silence critics.

Last week the EU parliament’s civil liberties committee issued a draft motion calling on the Commission to begin infringement proceedings against Ireland “for not properly enforcing the GDPR”.

In the statement it wrote of “deep concern” that several complaints against breaches of the GDPR have not yet been decided by the Irish DPC despite GDPR coming into application in May 2018.

The LIBE committee also flagged the Schrems II Facebook transfers case — writing that it is concerned this case “was started by the Irish Data Protection Commissioner, instead of taking a decision within its powers pursuant to Article 58 GDPR”.

It’s also notable that the Commission’s latest plans for updating pan-EU platform regulations — the Digital Services Act and Digital Markets Act — propose to side-step the risk of enforcement bottlenecks by suggesting that key enforcement against the largest platforms should be brought in-house to avoid the risk of any single Member State agency standing in the way of cross-border enforcement of European citizens’ data rights, as continues to happen with the GDPR.

Another quirk in relation to the Irish DPC is that the unit is not subject to the full range of freedom of information law. Instead the law only applies in respect of records concerning “the general administration of the Commission”. This means that its “supervisory, regulatory, consultation, complaint-handling or investigatory functions (including case files) are not releasable under the Act”, as it notes on its website.

Freedom of information requests filed by TechCrunch last year — asking the DPC how many times it has used GDPR powers to impose a temporary or absolute ban on data processing — were refused by the regulator on these grounds.

Its refusal to disclose whether or not it has ever asked an infringing entity to stop processing personal data cited the partial coverage of FOI law, saying that ‘general administration’ only refers to “records which have to do with the management of an FOI body such as records referring to personnel, pay matters, recruitment, accounts, information technology, accommodation, internal organization, office procedures and the like”.

While Ireland’s FOI law prevents closer scrutiny of the DPC’s activities, the agency’s enforcement record speaks for itself.


#data-protection, #dpc, #gdpr, #ireland, #platform-regulation, #tc


TikTok will recheck the age of every user in Italy after DPA order

TikTok has agreed to re-verify the age of every user in Italy and block access to users who state they are under 13, the country’s data protection agency said today.

The video sharing social network confirmed that from February 9 every user in Italy will be required to go through its age gate process again — and only those users aged 13 and above will be allowed to continue using the app.

Accounts of those who say they are under 13 will be deleted.

The move to require all users in Italy to go through TikTok’s age-verification process again follows a January 22 emergency order by Italy’s GPDP, issued after a 10-year-old girl from Palermo died of asphyxiation after participating in a “blackout challenge” on the social network, according to reports in local media.

TikTok was given a deadline of January 29 to respond to the GPDP’s order, as we reported earlier. Today the agency confirmed the measures TikTok has agreed to take.

As well as asking all users in the country to re-enter their date of birth to continue using the app, the GPDP said TikTok will “consider deploying AI-based systems for age verification purposes”.

The Italian watchdog added that it will be monitoring the effectiveness of TikTok’s age verification process.

The basic age check TikTok conducts when users sign up — which it will be pushing out again to all users in Italy in a few days — simply requires users to enter a date of birth, so it is very easy to circumvent. But it’s also clear that age verification online remains a hard problem.

Robust identity checks to determine age beyond doubt threaten a ‘sledgehammer to crack a nut’ scenario — potentially limiting service access in a way that’s unfair and risking harm to online anonymity and privacy, with potential knock on impacts on other considerations like freedom of expression and data security.

On the flip side are growing public concerns that underage users are being exposed to inappropriate and even harmful content online.

While TikTok’s lead data supervisor in the European Union is Ireland’s Data Protection Commission (DPC), the EU’s General Data Protection Regulation (GDPR) includes a provision that allows national DPAs to take emergency interventions to protect users — which is the route the GPDP has used here.

“In order to identify users below 13 years with reasonable certainty following this initial check, the company undertook to further consider the deployment of AI-based systems,” the agency said today.

“Since the implementation of such systems requires balancing the need for accurate verification against the children’s right to data protection, the company committed to starting a dialogue with the Irish Data Protection Commission (DPC) on the use of AI for age verification purposes. Ireland is where TikTok set its main establishment in the EU,” it added.

Reached for comment, the Irish Data Protection Commission told TechCrunch: “The DPC is engaging with TikTok to review, in the context of the processing of personal data, the measures implemented by the company to ensure it has effective means of identifying child users on the platform and, more generally, the measures and protections to protect the most vulnerable of users in terms of risks arising from the processing of their personal data.” So it remains to be seen whether the regulator will push for more robust age checks.

In another change triggered by the GPDP’s intervention, TikTok has implemented an in-app button to enable users to “quickly and easily” report other users who appear to be under 13 years of age.

Per TikTok, these reports are reviewed by moderators and “removed as necessary”.

“All the above measures supplement those already in place,” the GPDP said, adding: “TikTok undertook to also double the number of Italian moderators of platform contents.”

Commenting in a statement, Alexandra Evans, TikTok’s head of child safety, added: “Keeping people on TikTok safe is our top priority. We’ve reached an agreement with the Garante and today, we’re taking additional steps to support our community in Italy. From February 9, we’ll be sending every user in Italy through our age gate process again and only users aged 13 and over will be able to continue using the app after going through this process. We’re also rolling out a new, dedicated in-app reporting button to allow users to flag an account they believe may be under the age of 13, which will then be reviewed by our team and removed as necessary.

“There is no finish line when it comes to protecting our users, especially our younger ones, and our work in this important area doesn’t stop. That’s why we’re continuing to invest in the people, processes and technologies that help to keep our community a safe space for positive, creative expression.”

TikTok’s reissued age check in Italy will also be accompanied by a local information campaign in which TikTok will aim to raise parents’ and children’s awareness of the age checks and other child-safety-related features — both via its app and in the media.

“An information campaign will be launched by TikTok starting on February 4 both via the app and through other channels. The company will send push alerts to users on the app before blocking their access and will inform them on the need to enter their age. Banners will also be published containing links to information on security tools and on how to change profile settings from ‘public’ to ‘private’. The information campaign both via the web and through the press will be addressed to parents and the age threshold for registration will be specifically highlighted, among other things,” the GPDP said.

The agency also noted that TikTok has agreed to improve the wording of the short privacy notice intended for users aged under 18 years — “to explain what data are collected and how those data are processed in an easily understandable and user-oriented manner”.

In addition to TikTok’s impending information campaign, the GPDP is launching a child safety awareness-raising campaign of its own on national TV channels, in cooperation with a child protection charity called Telefono Azzurro. It said this will be targeted at parents to encourage them to supervise their kids’ use of the app.

“The campaign is aimed at calling upon parents to actively supervise and pay special attention to the situations where their children are requested to enter their age in order to access TikTok,” it said.

#apps, #child-safety, #data-protection, #europe, #gdpr, #italy, #privacy, #social, #tiktok


Apple’s Tim Cook warns of adtech fuelling a “social catastrophe” as he defends app tracker opt-in

Apple’s CEO Tim Cook has urged Europe to step up privacy enforcement in a keynote speech to the CPDP conference today — echoing many of the points he made in Brussels in person two years ago when he hit out at the ‘data industrial complex’ underpinning the adtech industry’s mass surveillance of Internet users.

Reforming current-gen adtech is now a humanitarian imperative, he argued in a speech that took a bunch of thinly-veiled swipes at Facebook.

“As I said in Brussels two years ago, it is certainly time, not only for a comprehensive privacy law here in the United States, but also for worldwide laws and new international agreements that enshrine the principles of data minimization, user knowledge, user access and data security across the globe,” said Cook.

“Together, we must send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated,” he added.

The message comes at a critical time for Apple as it prepares to flip a switch that will, for the first time, require developers to gain opt-in user consent to tracking.

Earlier today Apple confirmed it would be enabling the App Tracking Transparency (ATT) feature in the next beta release of iOS 14, which it said would roll out in early spring.

The tech giant had intended to debut the feature last year but delayed to give developers more time to adapt.

Adtech giant Facebook has also been aggressively briefing against the shift, warning of a major impact on publishers who use its ad network once Apple gives its users the ability to refuse third party tracking.

Reporting its Q4 earnings yesterday, Facebook also sounded a warning over “more significant advertising headwinds” impacting its own bottom line this year — naming Apple’s ATT as a risk (as well as what it couched as “the evolving regulatory landscape”).

In the speech to a data protection and privacy conference which is usually held in Brussels (but has been streamed online because of the pandemic), Cook made an aggressive defence of ATT and Apple’s pro-privacy stance in general, saying the forthcoming tracking opt-in is about “returning control to users” and linking adtech-fuelled surveillance of Internet users to a range of harms, including the spread of conspiracy theories, extremism and real-world violence.

“Users have asked for this feature for a long time,” he said of ATT. “We have worked closely with developers to give them the time and resources to implement it and we’re passionate about it because we think it has great potential to make things better for everybody.”
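On the developer side, the change surfaces as a single system prompt: Apple’s AppTrackingTransparency framework (iOS 14 and up) exposes a request method that returns the user’s decision, roughly as in this minimal sketch:

```swift
import AppTrackingTransparency

// Minimal sketch of the ATT opt-in flow (iOS 14+). The app must also declare
// an NSUserTrackingUsageDescription string in its Info.plist.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in: the advertising identifier (IDFA) is available.
            break
        case .denied, .restricted, .notDetermined:
            // No opt-in: the IDFA reads as all zeroes and tracking must not occur.
            break
        @unknown default:
            break
        }
    }
}
```

Notably, the system shows the prompt only once; a user who declines can only reverse that decision via the device’s Settings, so apps get no second bite at consent.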

The move has attracted a competition challenge in France, where four online advertising lobbies filed an antitrust complaint last October — arguing that Apple requiring developers to ask app users for permission to track them is an abuse of market power. (A similar complaint has been lodged in the UK over Google’s move to deprecate third party tracking cookies in Chrome — and there the regulator has opened an investigation.)

The Information also reported today that Facebook is preparing to lodge an antitrust lawsuit against Apple — so the legal stakes are rising. (Though the social media giant is itself being sued by the FTC which alleges it has maintained a social networking monopoly via years of anti-competitive conduct… )

In the speech Cook highlighted another recent pro-privacy move made by Apple to require iOS developers to display “privacy nutrition” labels within the App Store — providing users with an overview of each app’s data collection practices. Both the labels and the incoming ATT apply in the case of Apple’s own apps (not just third parties), as we reported earlier.

Cook said these moves align with Apple’s overarching philosophy: To make technology that “serves people and has their well-being in mind” — contrasting its approach with a rapacious ‘data industrial complex’ that wants to aggregate information about everything people do online to use against them, as a tool of mass manipulation.

“It seems no piece of information is too private or personal to be surveilled, monetized and aggregated into a 360 degree view of your life,” Cook warned. “The end result of all of this is that you are no longer the customer; you are the product.

“When ATT is in full effect users will have a say over this kind of tracking. Some may well think that sharing this degree of information is worth it for more targeted ads. Many others, I suspect, will not. Just as most appreciated it when we built this similar functionality into Safari limiting web trackers several years ago,” he went on, adding that: “We see developing these kinds of privacy-centric features and innovations as a core responsibility of our work. We always have, we always will.”

Apple’s CEO pointed out that advertising has flourished in the past without the need for privacy-hostile mass surveillance, arguing: “Technology does not need vast troves of personal data stitched together across dozens of websites and apps in order to succeed. Advertising existed and thrived for decades without it. And we’re here today because the path of least resistance is rarely the path of wisdom.”

He also made some veiled sideswipes at Facebook — avoiding literally naming the adtech giant but hitting out at the notion of a business that’s built on “surveilling users”, on “data exploitation” and on “choices that are no choices at all”.

Such an entity “does not deserve our praise, it deserves reform”, he went on, having earlier heaped praise on Europe’s General Data Protection Regulation (GDPR) for its role in furthering privacy rights — telling conference delegates that enforcement “must continue”. (The GDPR’s weak spot to date has been exactly that; but 2.5 years in there are signs the regime is getting into a groove.)

In further sideswipes at Facebook, Cook attacked the role of data-gobbling, engagement-obsessed adtech in fuelling disinformation and conspiracy theories — arguing that the consequences of such an approach are simply too high for democratic societies to accept.

“We should not look away from the bigger picture,” he argued. “At a moment of rampant disinformation and conspiracy theories juiced by algorithms we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement, the longer the better. And all with the goal of collecting as much data as possible.

“Too many are still asking the question how much can we get away with? When they need to be asking what are the consequences? What are the consequences of prioritizing conspiracy theories and violent incitement simply because of the high rates of engagement? What are the consequences of not just tolerating but rewarding content that undermines public trust in lifesaving vaccinations? What are the consequences of seeing thousands of users join extremist groups and then perpetuating an algorithm that recommends even more?” he went on — sketching a number of scenarios of which Facebook’s business stands directly accused.

“It is long past time to stop pretending that this approach doesn’t come with a cost. Of polarization. Of lost trust. And — yes — of violence. A social dilemma cannot be allowed to become a social catastrophe,” he added, rebranding ‘The Social Network’ at a stroke.

Apple has reason to appeal to a European audience of data protection experts to further its fight with adtech objectors to ATT, as EU regulators have the power to take enforcement decisions that would align with and support its approach — although they have been shy to do so so far.

Facebook’s lead data protection supervisor in Europe, Ireland’s Data Protection Commission (DPC), has a backlog of investigations into a number of aspects of its business — including its use of so-called ‘forced consent’ (as users are not given any choice over being tracked for ad targeting if they wish to use its services).

That lack of choice stands in stark contrast to the change Apple is driving on its App Store, where all entities will be required to ask users if they want to be tracked. So Apple’s move aligns with the principles of European data protection law (which, for example, requires that consent for processing people’s data be freely given in order to be legally valid).

Equally, Facebook’s continued refusal to give users a choice stands in direct conflict with EU law and risks GDPR enforcement. (The kind Cook was urging in his speech.)

2021 looks like it could be a critical year on that front. A long-running DPC investigation into the transparency of data-sharing between WhatsApp and Facebook is headed for enforcement this year — after Ireland sent a draft decision to the other EU data protection agencies at the back end of last year.

Last week Politico reported WhatsApp could be on the hook for a fine of between €30M and €50M in that single case. More pertinently for the tech giant — which paid a $5BN fine to the FTC in 2019 to settle charges related to privacy failings (but was not required to make any material changes to how it operates its ad business) — WhatsApp could be ordered to change how it handles user data.

A regulatory order to stop processing certain types of user data — or mandating it ask users for consent before it can do so — could clearly have a far greater impact on Facebook’s business empire.

The tech giant is also facing a final verdict later this year on whether it can continue to legally transfer European users’ data out of the bloc.

If Facebook is ordered to suspend such data flows that would mean massive disruption to a sizeable chunk of its business (it reported 286M daily active users in the region in Q1 2019).

So — in short — the regulatory conditions around Facebook’s business are certainly ‘evolving’.

The data industrial complex’s fight back against the looming privacy enforcement at Apple’s platform level involves ploughing legal resources into trying to claim such moves are anti-competitive. However EU lawmakers seem alive to this self-interested push to appropriate ‘antitrust’ as a tool to stymie privacy enforcement.

(And it’s notable that Cook referred to privacy “innovation” in the speech. Including this ask: “Will the future belong to the innovations that make our lives better, more fulfilled and more human?” — which is really the key question in the privacy vs competition regulation ‘debate’.)

Last month, Commission EVP and competition chief Margrethe Vestager told the OECD Global Competition Forum that antitrust enforcers should be “vigilant so that privacy is not used as a shield against competition”. However her remarks had a sting in the tail for the data industrial complex — as she expressed support for a ‘superprofiling’ case against Facebook in Germany.

That case (which is continuing to be litigated by the German FCO) combines privacy and competition in new and interesting ways. If the regulator prevails it could result in a structural separation of Facebook’s social empire at the data level — in a sort of regulatory equivalent of moving fast and breaking things.

So it’s notable Vestager dubbed that piece of regulatory innovation “inspiring and interesting”. Which sounds more of a vote of confidence than condemnation from Europe’s digital policy and competition chief.

#antitrust, #apple, #data-protection, #facebook, #gdpr, #privacy, #tc, #tim-cook


Grindr on the hook for €10M over GDPR consent violations

Grindr, a gay, bi, trans and queer hook-up app, is on the hook for a penalty of NOK100,000,000 (aka €10M or ~$12.1M) in Europe.

Norway’s data protection agency has announced it’s notified the US-based company of its intention to issue the fine in relation to consent violations under the region’s General Data Protection Regulation (GDPR) which sets out strict conditions for processing people’s data.

The size of the fine is notable. GDPR allows for fines to scale up to 4% of global annual turnover or up to €20M, whichever is higher. In this case Grindr is on the hook for around 10% of its annual revenue, per the DPA. (Although the sanction is not yet final; Grindr has until February 15 to submit a response before the Datatilsynet issues a final decision.)

“We have notified Grindr that we intend to impose a fine of high magnitude as our findings suggest grave violations of the GDPR,” said Bjørn Erik Thon, DG of the agency, in a statement. “Grindr has 13.7 million active users, of which thousands reside in Norway. Our view is that these people have had their personal data shared unlawfully. An important objective of the GDPR is precisely to prevent take-it-or-leave-it ‘consents’. It is imperative that such practices cease.”

Grindr has been contacted for comment.

Last year a report by Norway’s Consumer Council (NCC) delved into the data sharing practices of a number of popular apps in categories such as dating and fertility. It found the majority of apps transmitted data to “unexpected third parties”, with users not clearly informed how their information was being used.

Grindr was one of the apps featured in the NCC report. And the Council went on to file a complaint against the app with the national DPA, claiming unlawful sharing of users’ personal data with third parties for marketing purposes — including GPS location; user profile data; and the fact the user in question is on Grindr.

Under the GDPR, an app user’s personal data may be legally shared if the user’s consent is obtained. However there are clear standards for that consent to be legal — meaning it must be informed, specific and freely given. The Datatilsynet found that Grindr had failed to meet this standard.

It said users of Grindr were forced to accept the privacy policy in its entirety — and were not asked if they wanted to consent to the sharing of their data with third parties.
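The distinction the regulator is drawing can be pictured as the difference between a single all-or-nothing acceptance and per-purpose consent records, as in the illustrative sketch below (the purpose names are hypothetical):

```swift
// Illustrative sketch of per-purpose consent — the 'specific' consent the GDPR
// requires, as opposed to a bundled policy acceptance. Purpose names are hypothetical.
struct ConsentRecord {
    var analytics = false
    var personalizedAds = false
    var thirdPartySharing = false
}

var consent = ConsentRecord()
consent.analytics = true                                // the one purpose the user actually chose
let mayShareWithAdPartners = consent.thirdPartySharing  // stays false until separately agreed
```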

Additionally, it said sexual orientation could be inferred from a user’s presence on Grindr; and under regional law such sensitive ‘special category’ data carries an even higher standard of explicit consent before it can be shared (which, again, the Datatilsynet said Grindr failed to get from users).

“Our preliminary conclusion is that Grindr needs consent to share these personal data and that Grindr’s consents were not valid. Additionally, we believe that the fact that someone is a Grindr user speaks to their sexual orientation, and therefore this constitutes special category data that merit particular protection,” it writes in a press release.

“The Norwegian Data Protection Authority considers that this is a serious case,” added Thon. “Users were not able to exercise real and effective control over the sharing of their data. Business models where users are pressured into giving consent, and where they are not properly informed about what they are consenting to, are not compliant with the law.”

The decision could have wider significance as a similar ‘forced consent’ complaint against Facebook is still open on the desk of Ireland’s data protection watchdog — despite being filed back in May 2018. For tech giants that have set up a regional base in Ireland, and made an Irish entity legally responsible for processing EU citizens’ data, GDPR’s one-stop-shop mechanism has led to considerable delays in complaint enforcement.

Grindr, meanwhile, changed how it obtains consent in April 2020 — and the proposed sanction deals with how it was handling this prior to then, from May 2018, when the GDPR came into force.

“We have not to date assessed whether the subsequent changes comply with the GDPR,” the Datatilsynet adds.

After its report last year, the NCC also filed complaints against five of the third parties who it found to be receiving data from Grindr: MoPub (owned by Twitter), Xandr (formerly known as AppNexus), OpenX Software, AdColony, and Smaato. The DPA notes that those cases are ongoing.

Following the NCC report in January 2020, Twitter told us it had suspended Grindr’s MoPub account while it investigated the “sufficiency” of its consent mechanism. We’ve reached out to Twitter to ask whether it ever reinstated the account and will update this report with any response.

European privacy campaign group noyb, which was involved in filing the strategic complaints against Grindr and the adtech companies, hailed the DPA’s decision to uphold the complaints — dubbing the size of the fine “enormous” (given Grindr only reported profits of just over $30M in 2019, meaning it’s facing losing about a third of that at one fell swoop).

noyb also argues that Grindr’s switch to trying to claim legitimate interests to continue processing users’ data without obtaining their consent could result in further penalties for the company. 

“This is in conflict with the decision of the Norwegian DPA, as it explicitly held that ‘any extensive disclosure … for marketing purposes should be based on the data subject’s consent’,” writes Ala Krinickytė, data protection lawyer at noyb, in a statement. “The case is clear from the factual and legal side. We do not expect any successful objection by Grindr. However, more fines may be in the pipeline for Grindr as it lately claims an unlawful ‘legitimate interest’ to share user data with third parties — even without consent. Grindr may be bound for a second round.”

#apps, #data-protection, #europe, #gdpr, #grindr, #norwegian-consumer-council, #noyb, #privacy


UK resumes privacy oversight of adtech, warns platform audits are coming

The UK’s data watchdog has restarted an investigation of adtech practices that, since 2018, have been subject to scores of complaints across Europe under the bloc’s General Data Protection Regulation (GDPR).

The high-velocity trading of Internet users’ personal data can’t possibly be compliant with GDPR’s requirement that such information is adequately secured, the complaints contend.

Other concerns attached to real-time bidding (RTB) focus on consent, questioning how this can meet the required legal standard with data being broadcast to so many companies — including sensitive information, such as health data or religious and political affiliation and sexual orientation.
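To make the scale of that broadcast concrete, the sketch below models a heavily simplified, illustrative subset of an OpenRTB-style bid request — the kind of per-user payload sent out to bidders. Field names follow OpenRTB conventions, but this is a sketch, not the spec:

```swift
import Foundation

// Heavily simplified, illustrative subset of an OpenRTB-style bid request.
// Real requests carry many more fields; this shows the per-user data that can
// be broadcast to every participating bidder.
struct BidRequest: Codable {
    struct Geo: Codable {
        let lat: Double?    // device latitude
        let lon: Double?    // device longitude
    }
    struct Device: Codable {
        let ua: String?     // user agent string
        let ifa: String?    // advertising identifier, where available
        let geo: Geo?
    }
    struct User: Codable {
        let id: String?     // exchange-specific user identifier
    }
    let id: String          // unique ID for this bid request
    let device: Device?
    let user: User?
}
```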

Since the first complaints were filed, the UK’s Information Commissioner’s Office (ICO) has raised its own concerns over what it said are systemic problems with lawfulness in the adtech sector. But last year it announced it was pausing its investigation on account of disruption to businesses from the COVID-19 pandemic.

Today it said it’s unpausing its multi-year probe to keep on prodding.

In an update on its website, ICO deputy commissioner Simon McDougall, who takes care of “Regulatory Innovation and Technology” at the agency, writes that the eight-month freeze is over. And the audits are coming.

“We have now resumed our investigation,” he says. “Enabling transparency and protecting vulnerable citizens are priorities for the ICO. The complex system of RTB can use people’s sensitive personal data to serve adverts and requires people’s explicit consent, which is not happening right now.”

“Sharing people’s data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, also raises questions around the security and retention of this data,” he goes on. “Our work will continue with a series of audits focusing on digital market platforms and we will be issuing assessment notices to specific companies in the coming months. The outcome of these audits will give us a clearer picture of the state of the industry.”

It’s not clear what data the ICO still lacks to come to a decision on complaints that are approaching 2.5 years old at this point. But the ICO has committed to resume looking at adtech — including at data brokers, per McDougall, who writes that “we will be reviewing the role of data brokers in this adtech eco-system”.

“The investigation is vast and complex and, because of the sensitivity of the work, there will be times where it won’t be possible to provide regular updates. However, we are committed to publishing our final findings, once the investigation is concluded,” he goes on, managing expectations of any swift resolution to this vintage GDPR complaint.

Commenting on the ICO’s continued reluctance to take enforcement action against adtech despite mounds of evidence of rampant breaches of the law, Johnny Ryan, a senior fellow at the Irish Council for Civil Liberties who was involved in filing the first batch of RTB GDPR complaints — and continues to be a vocal critic of EU regulatory inaction against adtech — told TechCrunch: “It seems to me that the facts are clearly set out in the ICO’s mid 2019 adtech report.

“Indeed, that report merely confirms the evidence that accompanied our complaints in September 2018 in Ireland and the UK. It is therefore unclear why the ICO requires several months further. Nor is it clear why the ICO accepted empty gestures from the IAB and Google a year ago.”

“I have since published evidence of the impact that failure to enforce has had: Including documented use of RTB data to influence an election,” he added. “As that evidence shows, the scale of the vast data breach caused by the RTB system has increased significantly in the three years since I blew the whistle to the ICO in early 2018.”

Despite plentiful data on the scale of the personal data leakage involved in RTB, and widespread concern that all sorts of tangible harms are flowing from adtech’s mass surveillance of Internet users (from discrimination and societal division to voter manipulation), the ICO is in no rush to enforce.

In fact, it quietly closed the 2018 complaint last year — telling the complainants it believed it had investigated the matter “to the extent appropriate”. It’s in the process of being sued by the complainants as a result — for, essentially, doing nothing about their complaint. (The Open Rights Group, which is involved in that legal action, is running this crowdfunder to raise money to take the ICO to court.)

So what does the ICO’s great adtech investigation unpausing mean exactly for the sector?

Not much more than gentle notice you might be the recipient of an “assessment notice” at some future point, per the latest mildly worded ICO blog post (and judging by its past performance).

Per McDougall, all organizations should be “assessing how they use personal data as a matter of urgency”.

He has also committed the ICO to publishing “final findings” at some future point. So — to follow, post-pause — yet another report. And more audits.

“We already have existing, comprehensive guidance in this area, which applies to RTB and adtech in the same way it does to other types of processing — particularly in respect of consent, legitimate interests, data protection by design and data protection impact assessments (DPIAs),” he goes on, eschewing talk of any firmer consequences should all that guidance continue being roundly ignored.

He ends the post with a nod to the Competition and Markets Authority’s recent investigation of Google’s Privacy Sandbox proposals (to phase out support for third party cookies in Chrome) — saying the ICO is “continuing” to work with the CMA on that active antitrust complaint.

You’ll have to fill in the blanks as to exactly what work it might be doing there — because, again, McDougall isn’t saying. If it’s a veiled threat to the adtech industry to finally ‘get with the ICO’s privacy program’, or risk not having it fighting adtech’s corner in that crux antitrust vs privacy complaint, it really is gossamer thin.

#adtech, #data-protection, #europe, #gdpr, #ico, #privacy, #rtb
