Meta may be forced to shutter Facebook, Instagram in EU

Meta says it may have to abandon the European Union.

The note was buried in the company’s annual filing with the Securities and Exchange Commission. Meta said that if officials on both sides of the Atlantic can’t reach an agreement on data transfers and warehousing, the company may have to pull its Facebook and Instagram platforms from Europe.

“If a new transatlantic data transfer framework is not adopted… we will likely be unable to offer a number of our most significant products and services, including Facebook and Instagram, in Europe,” Meta said in its 10-K filing.

#eu-data-transfers, #european-union, #facebook, #gdpr, #instagram, #meta, #policy, #privacy-regulations

Facebook warned over ‘very small’ indicator LED on smart glasses, as EU DPAs flag privacy concerns

Facebook’s lead privacy regulator in Europe has raised concerns about a pair of ‘smart’ Ray-Ban sunglasses the tech giant is now selling. The glasses include a face-mounted camera which can be used to take pictures and short videos with a verbal cue.

Ireland’s Data Protection Commission (DPC) said Friday that it’s asked the tech giant to demonstrate that an LED indicator light also mounted on the specs — which lights up when the user is taking a video — is an effective way of putting other people on notice that they are being recorded by the wearer.

Italy’s privacy watchdog, the Garante, already raised concerns about Facebook’s smart glasses — but Ireland has an outsized role as a regulator for the tech giant owing to where the company’s regional base is located.

Facebook announced what it couched as the “next step” on the road to making a pair of augmented reality ‘smart’ glasses a full year ago — saying initial specs would not include any AR but announcing a multi-year partnership with luxury eyewear giant Luxottica, as it seemingly planned for a pipeline of increasingly feature-loaded ‘smart’ eyewear.

The first Facebook Ray-Ban-branded specs went on sale earlier this month — looking mostly like a standard pair of sunglasses but containing two 5 MP cameras mounted on the front that enable the user to take video of whatever they’re looking at and upload it to a new Facebook app called View. (The sunglasses also contain in-frame speakers so the user can listen to music and take phone calls.)

The specs also include a front-mounted LED light which is supposed to switch on to indicate when a video is being recorded. However, European regulators are concerned that what the DPC describes as a “very small” indicator is an inadequate mechanism for alerting people to the risk they are being recorded.

The regulator added that Facebook has not demonstrated it conducted comprehensive field testing of the device to assess the privacy risk it may pose.

“While it is accepted that many devices including smart phones can record third party individuals, it is generally the case that the camera or the phone is visible as the device by which recording is happening, thereby putting those captured in the recordings on notice. With the glasses, there is a very small indicator light that comes on when recording is occurring. It has not been demonstrated to the DPC and Garante that comprehensive testing in the field was done by Facebook or Ray-Ban to ensure the indicator LED light is an effective means of giving notice,” the DPC wrote.

Facebook’s lead EU data protection regulator goes on to say it is calling on the tech giant to “confirm and demonstrate that the LED indicator light is effective for its purpose and to run an information campaign to alert the public as to how this new consumer product may give rise to less obvious recording of their images”.

Facebook has been contacted with questions.

It is not clear whether Facebook engaged with any EU privacy regulators during the design of the smart glasses.

Nor is it clear whether or when the glasses might launch in Europe.

The specs went on sale in the US earlier this month — costing $299. The price to Americans’ privacy is TBC.

Over the years, Facebook has delayed (or even halted) some of its product launches in Europe following regulatory concerns — including a facial tagging feature (which it later reintroduced in another form).

The launch of Facebook’s dating service in Europe was also delayed for more than nine months — and arrived with some claimed changes after an intervention by the DPC.

There are also ongoing limits on how the Facebook-owned messaging platform WhatsApp can share data with Facebook itself in Europe, again owing to regulatory pushback. Plenty of data does still flow from WhatsApp to Facebook in the EU, though, and — zooming out — scores of privacy complaints against the tech giant remain under investigation in the region, meaning these issues are undecided and unenforced.

Earlier this month Ireland’s DPC did announce its first decision against a Facebook company (under the EU’s GDPR) — hitting WhatsApp with a $267M penalty related to transparency failures. However, the DPC has multiple unresolved complaints against Facebook or Facebook-owned businesses still on its desk.

In January the Irish regulator also agreed to “swiftly” resolve a (pre-GDPR) 2013 complaint against Facebook’s data transfers out of the EU to the US. That decision is still pending too.

#data-protection-commission, #europe, #european-union, #eyewear, #facebook, #gadgets, #gdpr, #glasses, #ireland, #italy, #luxottica, #privacy, #ray-ban, #smartglasses, #sunglasses, #united-states, #whatsapp

Relyance AI scores $25M Series A to ensure privacy compliance at the code level

Relyance AI, an early stage startup that is helping companies stay in compliance with privacy laws at the code level, announced a $25 million Series A today. At the same time, they revealed a previously unannounced $5 million seed round.

Menlo Ventures and Unusual Ventures led the A round, while Unusual was sole lead on the seed. Serial entrepreneur Jyoti Bansal from Unusual will join the board under the terms of the deal. His partner John Vrionis had previously joined after the seed round. Matt Murphy from Menlo is coming on as a board observer. The company has now raised $30 million.

Relyance takes an unusual approach to verifying that data stays in compliance: it works at the code level, while ingesting contracts and existing legal requirements as code, to check that a company’s actual data handling matches its obligations. Company co-CEO and co-founder Abhi Sharma says that code-level check is key to the solution. “For the first time, we are building the legal compliance and regulation into the source code,” Sharma told me.

He added, “Relyance is actually embedded within the DevOps pipeline of our customers’ infrastructure. So every time a new ETL pipeline is built or a machine learning model is receiving new source code, we do a compiler-like analysis of how personal sensitive data is flowing between internal microservices, data lakes and data warehouses, and then get a metadata analysis back to the privacy and compliance professionals [inside an organization].”

Leila R. Golchehreh, the other co-founder and co-CEO, brings a strong compliance background to the equation and has experienced the challenge of keeping companies in compliance first-hand. She said that Relyance also enables companies to define policy and contracts as code.

“Our approach is specifically to ingest contracts. We’ve actually created an algorithm around how [you] actually write a good data protection agreement. We’ve extracted those relevant provisions and we will compare that against [your] operational reality. So if there’s a disconnect, we will be able to raise that as an intelligent insight of a data misalignment,” she said.
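
To make the idea concrete, here is a minimal sketch of what a “contracts as code” check of this kind could look like, assuming a hypothetical setup in which permitted data flows extracted from a data protection agreement are compared against flows observed in the pipeline. The ContractPolicy and ObservedFlow names below are illustrative, not Relyance’s actual API.

```python
# Hypothetical "policy/contracts as code" check: compare the data flows a
# contract permits against the flows actually observed in a pipeline.
# Names and structures are illustrative; this is not Relyance's real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class ObservedFlow:
    source: str          # e.g. "checkout-service"
    destination: str     # e.g. "analytics-warehouse"
    data_category: str   # e.g. "email", "location"


@dataclass
class ContractPolicy:
    # (source, destination, data_category) tuples the data protection
    # agreement explicitly permits.
    allowed: set


def find_misalignments(policy, flows):
    """Return observed flows that the contract does not cover."""
    return [
        flow for flow in flows
        if (flow.source, flow.destination, flow.data_category) not in policy.allowed
    ]


if __name__ == "__main__":
    policy = ContractPolicy(allowed={("checkout-service", "billing-db", "email")})
    flows = [
        ObservedFlow("checkout-service", "billing-db", "email"),
        ObservedFlow("checkout-service", "analytics-warehouse", "location"),
    ]
    for flow in find_misalignments(policy, flows):
        print(f"Data misalignment: {flow.data_category} flows from "
              f"{flow.source} to {flow.destination} without a matching provision")
```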

With 32 employees, the co-founders hope to double or perhaps even triple that number in the next 12-18 months. Golchehreh and Sharma are a diverse co-founder team and they are attempting to build a company that reflects that. They believe being remote first gives them a leg up in this regard, but they also have internal policies to drive it.

“The recruiters we work with have a mandate internally to say, ‘Hey, we really want to hire good people and diverse people.’ Relyance as a company is the genesis of two individuals from two completely different ends of the spectrum coming together. And I think hopefully, we can do our job of relaying that into the company as we scale,” Sharma said.

The two founders have been friends for several years and began talking about forming a company together in 2019 over a pizza dinner. The idea began to gel and they launched the company in February 2020. They spent some time talking to compliance pros to understand their requirements better, then began building the solution they have today in July 2020. They released a beta in February and began quietly selling it in March.

Today they have a number of early customers working with their software including Dialpad, Patreon, Samsara and True.

#compliance, #data-protection, #developer, #enterprise, #funding, #gdpr, #jyoti-bansal, #recent-funding, #startups, #tc, #unusual-ventures

Ireland fails to enforce EU law against Big Tech

Ireland is failing to apply the EU’s privacy laws to US Big Tech companies, with 98 percent of 164 significant complaints about privacy abuses still unresolved by its regulator.

Google, Facebook, Apple, Microsoft, and Twitter all have their European headquarters in Dublin, making Ireland’s Data Protection Commissioner the lead EU regulator responsible for holding them to the law.

But the Irish DPC has been repeatedly criticized, both by privacy campaigners and by other EU regulators, for failing to take action.

#digital-rights-ireland, #eu, #gdpr, #policy, #privacy

WhatsApp fined $267M for not telling users how it shared data with Facebook

WhatsApp has been fined €225 million for breaking the EU’s data privacy law by not telling its users how it was sharing their data with its parent company Facebook.

In one of the biggest fines relating to the General Data Protection Regulation (GDPR), the Irish data regulator applied a penalty more than four times the level it had initially proposed for the messaging service after coming under pressure from other European countries.

The WhatsApp ruling came after Luxembourg fined Amazon a record €746 million in July for breaching GDPR and Ireland fined Twitter €450 million in December for not informing regulators about a data leak within 72 hours.

#eu, #facebook, #gdpr, #policy, #privacy, #whatsapp

WhatsApp faces $267M fine for breaching Europe’s GDPR

It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.

The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once it began being applied in May 2018.

Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.

A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.

The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.

Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).

In sum, the DPC found a range of transparency infringements by WhatsApp — spanning Articles 5(1)(a), 12, 13 and 14 of the GDPR.

In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.

In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:

“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.” 

It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.

The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.

So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.

 

Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.

WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.

Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.

And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.

Is the GDPR working?  

The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, which are also of course Internet companies.

Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators across the 27 Member States. So while the GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), objections can be raised to that lead supervisory authority’s conclusions and any proposed sanctions — as has happened in this WhatsApp case.

Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to its draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.

Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the regulator had also suggested an even tinier penalty in the first instance.

There is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disagreements (over WhatsApp’s ‘lossy hashing’ and so forth). But the fact that corrections to a lead regulator’s decisions and conclusions can land — if not jointly agreed, then at least arrived at via a consensus pushed through by the EDPB — is a sign that the process, while slow and creaky, is working. At least technically.

Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (those issues it doesn’t open an enquiry into or complaints it simply drops or ignores), with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU.

The associated conclusion of that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.

But while it’s true that a $267M penalty is the equivalent of a parking ticket for Facebook’s business empire, orders to change how such adtech giants are able to process people’s information at least have the potential to be a far more significant correction on problematic business models.

Again, though, time will be needed to tell whether such wider orders are having the sought-for impact.

In a statement reacting to the DPC’s WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”

Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.

In further remarks, they raised concerns about the length of the appeals process and whether the DPC would make a muscular defence of a sanction it had been forced to increase by other EU DPAs.

“WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”

#data-protection, #data-protection-commission, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #ireland, #noyb, #privacy, #social-media, #social-network, #transparency, #whatsapp

UK names John Edwards as its choice for next data protection chief as gov’t eyes watering down privacy standards

The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.

Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.

He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal Edwards publicly announced that he was deleting his account with the social media giant — accusing Facebook of not complying with the country’s privacy laws.

An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.

If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.

Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.

But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.

For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.

Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.

A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.

The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.

Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.

Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.

“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.

The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.

If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.

It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.

We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.

Data protection experts are already warning of a regulatory stooge.

The Telegraph, meanwhile, suggests Edwards is seen by government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.

In a particularly eyebrow-raising detail, the newspaper goes on to report that government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.

All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.

In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”

The lurking iceberg for government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its/the market’s choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.

You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…

UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.

Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.

The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.

This is because its current data adequacy deal with the bloc — which allows for EU citizens’ data to continue flowing freely to the UK — was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR. So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy. Meaning the deal could topple.

Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”.

Add to that, the adequacy deal is also the first with a baked-in sunset clause — meaning it will automatically expire in four years. So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.

The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.

Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Financial Centre and Colombia.

Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.

“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.

As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is: UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports), making the touted £11 billion less than 4% of that figure.

So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.

Everyone hates cookie banners, sure, but that’s a case for strengthening not weakening people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on citizens’ data.

 

#artificial-intelligence, #australia, #brazil, #colombia, #data-mining, #data-protection, #data-security, #digital-rights, #elizabeth-denham, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #human-rights, #india, #indonesia, #john-edwards, #kenya, #korea, #matt-hancock, #new-zealand, #nhs, #oliver-dowden, #privacy, #singapore, #social-issues, #social-media, #uk-government, #united-kingdom, #united-states

China passes data protection law

China has passed a personal data protection law, state media Xinhua reports (via Reuters).

The law, called the Personal Information Protection Law (PIPL), is set to take effect on November 1.

It was proposed last year — signalling an intent by China’s communist leaders to crack down on unscrupulous data collection in the commercial sphere by putting legal restrictions on user data collection.

The new law requires app makers to offer users options over how their information is or isn’t used, such as the ability not to be targeted for marketing purposes or to have marketing based on personal characteristics, according to Xinhua.

It also places requirements on data processors to obtain consent from individuals in order to be able to process sensitive types of data such as biometrics, medical and health data, financial information and location data.

Apps that illegally process user data, meanwhile, risk having their service suspended or terminated.

Any Western companies doing business in China which involves processing citizens’ personal data must grapple with the law’s extraterritorial jurisdiction — meaning foreign companies will face regulatory requirements such as the need to assign local representatives and report to supervisory agencies in China.

On the surface, core elements of China’s new data protection regime mirror requirements long baked into European Union law — where the General Data Protection Regulation (GDPR) provides citizens with a comprehensive set of rights wrapping their personal data, including putting a similarly high bar on consent to process what EU law refers to as ‘special category data’, such as health data (although elsewhere there are differences in what personal information is considered the most sensitive by the respective data laws).

The GDPR is also extraterritorial in scope.

But the context in which China’s data protection law will operate is also of course very different — not least given how the Chinese state uses a vast data-gathering operation to keep tabs on and police the behavior of its own citizens.

Any limits the PIPL might place on Chinese government departments’ ability to collect data on citizens — state organs were covered in draft versions of the law — may be little more than window-dressing to provide a foil for continued data collection by the Chinese Communist Party (CCP)’s state security apparatus while further consolidating its centralized control over government.

It also remains to be seen how the CCP could use the new data protection rules to further regulate — some might say tame — the power of the domestic tech sector.

It has been cracking down on the sector in a number of ways, using regulatory changes as leverage over giants like Tencent. Earlier this month, for example, Beijing filed a civil suit against the tech giant — citing claims that its messaging-app WeChat’s youth mode does not comply with laws protecting minors.

The PIPL provides the Chinese regime with plenty more attack surface to put strictures on local tech companies.

Nor is it wasting any time in attacking data-mining practices that are commonplace among Western tech giants but now look likely to face growing friction if deployed by companies within China.

Reuters notes that the National People’s Congress marked the passage of the law today by publishing an op-ed from state media outlet People’s Court Daily which lauds the legislation and calls for entities that use algorithms for “personalized decision making” — such as recommendation engines — to obtain user consent first.

Quoting the op-ed, it writes: “Personalization is the result of a user’s choice, and true personalized recommendations must ensure the user’s freedom to choose, without compulsion. Therefore, users must be given the right to not make use of personalized recommendation functions.”

There is growing concern over algorithmic targeting outside China, too, of course.

In Europe, lawmakers and regulators have been calling for tighter restrictions on behavioral advertising — as the bloc is in the process of negotiating a swathe of new digital regulations that will expand its power to regulate the sector, such as the proposed Digital Markets Act and Digital Services Act.

Regulating the Internet is clearly the new geopolitical battleground as regions compete to shape the future of data flows to suit their respective economic, political and social goals.

#asia, #china, #data-protection, #gdpr, #pipl, #policy, #privacy

Ireland must ‘swiftly’ investigate legality of Facebook-WhatsApp data-sharing, says EDPB

Facebook’s lead regulator in the European Union must “swiftly” investigate the legality of data-sharing related to a controversial WhatsApp policy update, following an order by the European Data Protection Board (EDPB).

We’ve reached out to the Irish Data Protection Commission (DPC) for a response.

Updated terms had been set to be imposed upon users of the Facebook-owned messaging app early this year — but in January Facebook delayed the WhatsApp terms update until May after a major privacy backlash and ongoing confusion over the details of its user data processing.

Despite WhatsApp going ahead with the policy update, the ToS has continued to face scrutiny from regulators and rights organizations around the world.

The Indian government, for example, has repeatedly ordered Facebook to withdraw the new terms. In Europe, meanwhile, privacy regulators and consumer protection organizations have raised objections about how the opaque terms are being pushed on users — and in May a German data protection authority issued a temporary (national) blocking order.

Today’s development follows that and is significant as it’s the first urgent binding decision adopted by the EDPB under the bloc’s General Data Protection Regulation (GDPR).

The Board has not, however, agreed to order the adoption of final measures against Facebook-WhatsApp as the requesting data supervisor, the Hamburg DPA, had asked — saying that “conditions to demonstrate the existence of an infringement and an urgency are not met”.

The Board’s intervention in the confusing mess around the WhatsApp policy update follows the use of GDPR Article 66 powers by Hamburg’s data protection authority.

In May the latter ordered Facebook not to apply the new terms to users in Germany — saying its analysis found the policy granted “far-reaching powers” to WhatsApp to share data with Facebook, without it being clear what legal basis the tech giant was relying upon to be able to process users’ data.

Hamburg also accused the Irish DPC of failing to investigate the Facebook-WhatsApp data sharing when it raised concerns — hence seeking to take matters into its own hands by making an Article 66 intervention.

As part of the process it asked the EDPB to take a binding decision — asking it to take definitive steps to block data-sharing between WhatsApp and Facebook — in a bid to circumvent the Irish regulator’s glacial procedures by getting the Board to order enforcement measures that could be applied stat across the whole bloc.

However the Board’s assessment found that Hamburg had not met the bar for demonstrating the Irish DPC “failed to provide information in the context of a formal request for mutual assistance under Article 61 GDPR”, as it puts it.

It also decided that the adoption of updated terms by WhatsApp — which it nonetheless says “contain similar problematic elements as the previous version” — cannot “on its own” justify the urgency for the EDPB to order the lead supervisor to adopt final measures under Article 66(2) GDPR.

The upshot — as the Hamburg DPA puts it — is that data exchange between WhatsApp and Facebook remains “unregulated at the European level”.

Article 66 powers

The importance of Article 66 of the GDPR is that it allows EU data protection authorities to derogate from the regulation’s one-stop-shop mechanism — which otherwise funnels cross border complaints (such as those against Big Tech) via a lead data supervisor (oftentimes the Irish DPC), and is thus widely seen as a bottleneck to effective enforcement of data protection (especially against tech giants).

An Article 66 urgency proceeding allows any data supervisor across the EU to immediately adopt provisional measures — provided a situation meets the criteria for this kind of emergency intervention. Which is one way to get around a bottleneck, even if only for a time-limited period.

A number of EU data protection authorities have used (or threatened to use) Article 66 powers in recent years, since GDPR came into application in 2018, and the power is increasingly proving its worth in reconfiguring certain Big Tech practices — with, for example, Italy’s DPA using it recently to force TikTok to remove hundreds of thousands of suspected underage accounts.

Just the threat of Article 66’s use back in 2019 (also by Hamburg) was enough to encourage Google to suspend manual reviews of audio recordings captured by its voice AI, Google Assistant. (And it later led to a number of major policy changes by several tech giants who had similarly been manually reviewing users’ interactions with their voice AIs.)

At the same time, Article 66 provisional measures can only last three months — and only apply nationally, not across the whole EU. So it’s a bounded power. (Perhaps especially in this WhatsApp-Facebook case, where the target is a ToS update, and Facebook could just wait out the three months and apply the policy anyway in Germany after the suspension order lapses.)

This is why Hamburg wanted the EDPB to make a binding decision. And it’s certainly a blow to privacy watchers eager for GDPR enforcement to fall on tech giants like Facebook that the Board has declined to do so in this case.

Unregulated data-sharing

Responding to the Board’s decision not to impose definitive measures to prevent data sharing between WhatsApp and Facebook, the Hamburg authority expressed disappointment — see below for its full statement — and also lamented that the EDPB has not set a deadline for the Irish DPC to conduct the investigation into the legal basis of the data-sharing.

Ireland’s data protection authority has only issued one final GDPR decision against a tech giant to date (Twitter) — so there is plenty of cause to be concerned that without a concrete deadline the ordered probe could be kicked down the road for years.

Nonetheless, the EDPB’s order to the Irish DPC to “swiftly” investigate the finer-grained detail of the Facebook-WhatsApp data-sharing does look like a significant intervention by a pan-EU body — as it very publicly pokes a regulator with a now infamous reputation for reluctance to actually do the job of rigorously investigating privacy concerns. 

Demonstrably it has failed to do so in this WhatsApp case. Despite major concerns being raised about the policy update — within Europe and globally — Facebook’s lead EU data supervisor did not open a formal investigation and has not raised any public objections to the update.

Back in January when we asked about concerns over the update, the DPC told TechCrunch it had obtained a ‘confirmation’ from Facebook-owned WhatsApp that there was no change to data-sharing practices that would affect EU users — reiterating Facebook’s line that the update didn’t change anything, ergo ‘nothing to see here’. 

“The updates made by WhatsApp last week are about providing clearer, more detailed information to users on how and why they use data. WhatsApp have confirmed to us that there is no change to data-sharing practices either in the European Region or the rest of the world arising from these updates,” the DPC told us then, although it also noted that it had received “numerous queries” from stakeholders who it described as “confused and concerned about these updates”, mirroring Facebook’s own characterization of complaints.

“We engaged with WhatsApp on the matter and they confirmed to us that they will delay the date by which people will be asked to review and accept the terms from February 8th to May 15th,” the DPC went on, referring to a pause in the ToS application deadline which Facebook enacted after a public backlash that saw scores of users signing up to alternative messaging apps, before adding: “In the meantime, WhatsApp will launch information campaigns to provide further clarity about how privacy and security works on the platform. We will continue to engage with WhatsApp on these updates.”

The EDPB’s assessment of the knotty WhatsApp-Facebook data-sharing terms looks rather different — with the Board calling out WhatsApp’s user communications as confusing and simultaneously raising concerns about the legal basis for the data exchange.

In a press release, the EDPB writes that there’s a “high likelihood of infringements” — highlighting purposes contained in the updated ToS in the areas of “safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, as well as for the purpose of improvement of the products of the Facebook Companies” as being of particular concern.

From the Board’s PR [emphasis its]:

“Considering the high likelihood of infringements in particular for the purpose of safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, as well as for the purpose of improvement of the products of the Facebook Companies, the EDPB considered that this matter requires swift further investigations. In particular to verify if, in practice, Facebook Companies are carrying out processing operations which imply the combination or comparison of WhatsApp IE’s [Ireland] user data with other data sets processed by other Facebook Companies in the context of other apps or services offered by the Facebook Companies, facilitated inter alia by the use of unique identifiers. For this reason, the EDPB requests the IE SA [Irish supervisory authority] to carry out, as a matter of priority, a statutory investigation to determine whether such processing activities are taking place or not, and if this is the case, whether they have a proper legal basis under Article 5(1)(a) and Article 6(1) GDPR.”

NB: It’s worth recalling that WhatsApp users were initially told they must accept the updated policy or else the app would stop working. (Although Facebook later changed its approach — after the public backlash.) WhatsApp users who still haven’t accepted the terms continue to be nagged to do so via regular pop-ups, although the tech giant does not appear to be taking steps to degrade the user experience further as yet (i.e. beyond annoying, recurring pop-ups).

The EDPB’s concerns over the WhatsApp-Facebook data-sharing extend to what it says is “a lack of information around how data is processed for marketing purposes, cooperation with the other Facebook Companies and in relation to WhatsApp Business API” — hence its order to Ireland to fully investigate.

The Board also essentially confirms the view that WhatsApp users themselves have no hope of understanding what Facebook is doing with their data by reading the comms material it has provided them with — with the Board writing [emphasis ours]:

“Based on the evidence provided, the EDPB concluded that there is a high likelihood that Facebook IE [Ireland] already processes WhatsApp IE [Ireland] user data as a (joint) controller for the common purpose of safety, security and integrity of WhatsApp IE [Ireland] and the other Facebook Companies, and for the common purpose of improvement of the products of the Facebook Companies. However, in the face of the various contradictions, ambiguities and uncertainties noted in WhatsApp’s user-facing information, some written commitments adopted by Facebook IE [Ireland] and WhatsApp IE’s [Ireland] written submissions, the EDPB concluded that it is not in a position to determine with certainty which processing operations are actually being carried out and in which capacity.”

We contacted Facebook for a response to the EDPB’s order, and the company sent us this statement — attributed to a WhatsApp spokesperson:

“We welcome the EDPB’s decision not to extend the Hamburg DPA’s order, which was based on fundamental misunderstandings as to the purpose and effect of the update to our terms of service. We remain fully committed to delivering secure and private communications for everyone and will work with the Irish Data Protection Commission as our lead regulator in the region in order to fully address the questions raised by the EDPB.”

Facebook also claimed it has controls in place for ‘controller to processor data sharing’ (i.e. between WhatsApp and Facebook) — which it said prohibit it (Facebook) from using WhatsApp user data for its own purposes.

The tech giant went on to reiterate its line that the update does not expand WhatsApp’s ability to share data with Facebook.

GDPR enforcement stalemate

A further vital component to this saga is the fact the Irish DPC has, for years, been investigating long-standing complaints against WhatsApp’s compliance with GDPR’s transparency requirements — and still hasn’t issued a final decision.

So when the EDPB says it’s highly likely that some of the WhatsApp-Facebook data-processing being objected to is already going on, it doesn’t mean Facebook gets a pass for that — because the DPC hasn’t issued a verdict on whether or not WhatsApp has been up front enough with users.

tl;dr: The regulatory oversight process is still ongoing.

The DPC provisionally concluded its WhatsApp transparency investigation last year — saying in January that it sent a draft decision to the other EU data protection authorities for review (and the chance to object) on December 24, 2020; a step that’s required under the GDPR’s co-decision-making process.

In January, when it said it was still waiting to receive comments on the draft decision, it also said: “When the process is completed and a final decision issues, it will make clear the standard of transparency to which WhatsApp is expected to adhere as articulated by EU Data Protection Authorities.”

Over half a year later, WhatsApp users in the EU are still waiting to find out whether the company’s comms live up to the required legal standard of transparency or not — with their data continuing to pass between Facebook and WhatsApp in the meanwhile.

The Irish DPC was contacted for comment on the EDPB’s order today and with questions on the current status of the WhatsApp transparency investigation.

It told us it would have a response later today — we’ll update this report when we get it.

Back in November the Irish Times reported that WhatsApp Ireland had set aside €77.5M for “possible administrative fines arising from regulatory compliance matters presently under investigation”. No fines against Facebook have yet been forthcoming, though.

Indeed, the DPC has yet to issue a single final GDPR decision against Facebook (or a Facebook-owned company) — despite more than three years having passed since the regulation started being applied.

Scores of GDPR complaints against Facebook’s data-processing empire — such as this May 2018 complaint against Facebook, Instagram and WhatsApp’s use of so-called ‘forced consent’ — continue to languish without regulatory enforcement in the EU because there have been no decisions from Ireland (and sometimes no investigations either).

The situation is a huge black mark against the EU’s flagship data protection regulation. So the Board’s failure to step in more firmly now — to course-correct — does look like a missed opportunity to tackle a problematic GDPR enforcement bottleneck.

That said, any failure to follow the procedural letter of the law could invite a legal challenge that unpicked any progress. So it’s hard to see any quick wins in the glacial game of GDPR enforcement.

In the meanwhile, the winners of the stalemate are of course the tech giants who get to continue processing people’s data how they choose, with plenty of time to work on reconfiguring their legal, business and system structures to route around any enforcement damage that does eventually come.

Hamburg’s deputy commissioner for data protection, Ulrich Kühn, essentially warns as much in a statement responding to the EDPB’s decision, in which he writes:

“The decision of the European Data Protection Board is disappointing. The body, which was created to ensure the uniform application of the GDPR throughout the European Union, is missing the opportunity to clearly stand up for the protection of the rights and freedoms of millions of data subjects in Europe. It continues to leave this solely to the Irish supervisory authority. Despite our repeated requests over more than two years to investigate and, if necessary, sanction the matter of data exchanges between WhatsApp and Facebook, the IDPC has not taken action in this regard. It is a success of our efforts over many years that IDPC is now being urged to conduct an investigation. Nonetheless, this non-binding measure does not do justice to the importance of the issue. It is hard to imagine a case in which, against the background of the risks for the rights and freedoms of a very large number of data subjects and their de facto powerlessness vis-à-vis monopoly-like providers, the urgent need for concrete action is more obvious. The EDPB is thus depriving itself of a crucial instrument for enforcing the GDPR throughout Europe. This is no good news for data subjects and data protection in Europe as a whole.“

In further remarks the Hamburg authority emphasizes that the Board noted “considerable inconsistencies between the information with which WhatsApp users are informed about the extensive use of their data by Facebook on the one hand, and on the other the commitments made by the company to data protection authorities not (yet) to do so”; and also that it “expressed considerable doubts about the legal basis on which Facebook intends to rely when using WhatsApp data for its own or joint processing” — arguing that the Board therefore agrees with the “essential parts” of its arguments against WhatsApp-Facebook data sharing.

Despite carrying that weight of argument, the call for action is once again back in Ireland’s court.

 

#data-sharing, #edpb, #europe, #european-union, #facebook, #gdpr, #irish-dpc, #privacy, #social, #whatsapp

Controversial WhatsApp policy change hit with consumer law complaint in Europe

Facebook has been accused of multiple breaches of European Union consumer protection law as a result of its attempts to force WhatsApp users to accept controversial changes to the messaging platform’s terms of use — such as threatening users that the app would stop working if they did not accept the updated policies by May 15.

The consumer protection association umbrella group, the Beuc, said today that together with eight of its member organizations it’s filed a complaint with the European Commission and with the European network of consumer authorities.

“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.

“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”

After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.

However, the app continues to bug users to accept the update — with no option not to do so (users can close the policy prompt but are unable to decline the new terms or stop the app from continuing to pop up a screen asking them to accept the update).

“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”

The organization pointed out that WhatsApp’s policy updates remain under scrutiny by privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.

And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”

The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:

“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”

Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:

“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”

The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.

The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.

Earlier this year, however, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.

German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).

Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.

In the meanwhile, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.

There seems to be a growing appetite on both sides of the Atlantic for a joined-up approach to regulating platform power and ensuring powerful platforms don’t simply get let off the hook.

 

#beuc, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #ireland, #policy, #privacy, #social, #social-media, #whatsapp

Didomi raises $40 million to help you manage customer consent

French startup Didomi has raised a $40 million Series B funding round led by Elephant and Breega. The company manages consent flows for web publishers and app developers. Didomi is already doing well in Europe with billions of consent interactions per month — it plans to expand to the U.S. with today’s funding round.

“Jawad, Raphaël and I have co-founded Didomi to make privacy easier for everyone and an obvious choice for companies. This fundraising is a major milestone on our journey to deliver on this mission,” co-founder and CEO Romain Gauthier said in a statement.

“We look forward to helping brands and publishers make customer journeys more transparent and trustworthy through a delightful consent and preferences management experience,” he added.

In recent years, many regulators have implemented new privacy-focused frameworks. GDPR in Europe is the obvious example.

And if you live in a country that is affected by those changes, you are now well aware that you’ll get a consent popup or banner whenever you visit a new website or open an app for the first time.

I wouldn’t say that these popups are “delightful” as the best consent popup is the one that doesn’t exist because the site you’re visiting doesn’t collect and share personal data. But that’s not always possible and there are different reasons why you may need to collect data — including on the site you’re reading right now, TechCrunch.com.

In that case, a product like Didomi can be really helpful. Taking those consent flows seriously is extremely important as you don’t want to mess up the implementation and get fined. Didomi is a developer-focused consent platform that works across many different devices. You can configure your consent flow for a desktop website, a mobile website, a mobile app or a connected TV.

Having a unified solution also means that you don’t have to ask for permission over and over again. Didomi can store and synchronize preferences across devices. Everything is auditable in case regulators want to see how you’re collecting consent.
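
For illustration only, here’s a minimal sketch in Python of what a cross-device consent record with an audit trail might look like. The field names and structure are assumptions of mine, not Didomi’s actual data model or API.

```python
# Hypothetical sketch of a cross-device consent record with an audit trail.
# Field names and structure are illustrative assumptions, not Didomi's data model or API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class ConsentEvent:
    """One auditable consent interaction (accept, reject or withdraw)."""
    purpose: str   # e.g. "analytics", "personalized_ads"
    granted: bool
    device: str    # e.g. "ios_app", "desktop_web", "connected_tv"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


@dataclass
class ConsentRecord:
    """Current choices for one user, synced across devices, plus the history a regulator could audit."""
    user_id: str
    choices: Dict[str, bool] = field(default_factory=dict)
    history: List[ConsentEvent] = field(default_factory=list)

    def record(self, event: ConsentEvent) -> None:
        # Update the live preference and keep the full trail for auditability.
        self.choices[event.purpose] = event.granted
        self.history.append(event)


record = ConsentRecord(user_id="user-123")
record.record(ConsentEvent(purpose="personalized_ads", granted=False, device="mobile_web"))
record.record(ConsentEvent(purpose="analytics", granted=True, device="ios_app"))
print(record.choices)  # {'personalized_ads': False, 'analytics': True}
```

The point is simply that the live choices and the historical trail live together, so a regulator can check how and when consent was collected, whichever device it came from.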

With today’s funding round, the company wants to make its product even more developer friendly with open APIs and open-source SDKs. That doesn’t mean Didomi is for everyone, as the company focuses on premium clients in particular. Clients include Rakuten, Orange, Giphy and Weight Watchers International.

The company will also hire more people with local marketing and sales teams for different markets. Didomi plans to open offices in Germany, Spain and the U.S.

At the same time, the landscape is quickly evolving. Web browsers are gradually blocking third-party trackers and Apple now even asks you if an app can track you at the operating system level. It’s going to be interesting to see how Didomi evolves with user expectations.

#consent, #didomi, #europe, #fundings-exits, #gdpr, #startups

Italy’s DPA fines Glovo-owned Foodinho $3M, orders changes to algorithmic management of riders

Algorithmic management of gig workers has landed Glovo-owned on-demand delivery firm Foodinho in trouble in Italy where the country’s data protection authority issued a €2.6 million penalty (~$3M) yesterday after an investigation found a laundry list of problems.

The delivery company has been ordered to make a number of changes to how it operates in the market, with the Garante’s order giving it two months to correct the most serious violations found, and a further month (so three months total) to amend how its algorithms function — to ensure compliance with privacy legislation, Italy’s workers’ statute and recent legislation protecting platform workers.

One of the issues of concern to the data watchdog is the risk of discrimination arising from a rider rating system operated by Foodinho — which had some 19,000 riders operating on its platform in Italy at the time of the Garante’s investigation.

Likely of relevance here is long-running litigation brought by riders gigging for another food delivery brand in Italy, Foodora, which culminated in a ruling by the country’s Supreme Court last year asserting that riders should be afforded workers’ rights regardless of whether they are employed or self-employed — bolstering the case for challenges against delivery apps that apply algorithms to opaquely micromanage platform workers’ labor.

In the injunction against Foodinho, Italy’s DPA says it found numerous violations of privacy legislation, as well as a risk of discrimination against gig workers based on how Foodinho’s booking and assignments algorithms function, in addition to flagging concerns over how the system uses ratings and reputational mechanisms as further levers of labor control.

Article 22 of the European Union’s General Data Protection Regulation (GDPR) provides protections for individuals against being solely subject to automated decision-making including profiling where such decisions produce a legal or similarly substantial effect (and access to paid work would meet that bar) — giving them the right to get information on a specific decision and object to it and/or ask for human review.

But it does not appear that Foodinho provided riders with such rights, per the Garante’s assessment.

In a press release about the injunction (which we’ve translated from Italian with Google Translate), the watchdog writes:

“The Authority found a series of serious offences, in particular with regard to the algorithms used for the management of workers. The company, for example, had not adequately informed the workers on the functioning of the system and did not guarantee the accuracy and correctness of the results of the algorithmic systems used for the evaluation of the riders. Nor did it guarantee procedures to protect the right to obtain human intervention, express one’s opinion and contest the decisions adopted through the use of the algorithms in question, including the exclusion of a part of the riders from job opportunities.

“The Guarantor has therefore required the company to identify measures to protect the rights and freedoms of riders in the face of automated decisions, including profiling.”

The watchdog also says it has asked Foodinho to verify the “accuracy and relevance” of data that feeds the algorithmic management system — listing a wide variety of signals that are factored in (such as chats, emails and phone calls between riders and customer care; geolocation data captured every 15 seconds and displayed on the app map; estimated and actual delivery times; details of the management of the order in progress and those already made; customer and partner feedback; remaining battery level of device etc).

“This is also in order to minimize the risk of errors and distortions which could, for example, lead to the limitation of the deliveries assigned to each rider or to the exclusion itself from the platform. These risks also arise from the rating system,” it goes on, adding: “The company will also need to identify measures that prevent improper or discriminatory use of reputational mechanisms based on customer and business partner feedback.”
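
To make the Article 22 point concrete, here is a purely hypothetical sketch of routing an algorithmic flag to a human reviewer instead of automatically cutting a rider off. It is not Foodinho’s or Glovo’s actual system; the signal names and thresholds are invented for illustration.

```python
# Hypothetical illustration (not Foodinho's or Glovo's actual system) of routing an
# algorithmic flag to a human reviewer rather than auto-excluding a rider, which is
# the kind of safeguard Article 22 of the GDPR points toward.
from dataclasses import dataclass


@dataclass
class RiderSignals:
    # A few of the signal types the Garante says feed the system; values are invented.
    avg_delivery_delay_minutes: float
    customer_rating: float          # on a 0-5 scale
    declined_orders_last_week: int


def flag_for_review(signals: RiderSignals) -> bool:
    """Return True if the rider's account should be queued for *human* review.
    The thresholds are arbitrary placeholders."""
    return (
        signals.avg_delivery_delay_minutes > 15
        or signals.customer_rating < 2.5
        or signals.declined_orders_last_week > 20
    )


def handle_rider(rider_id: str, signals: RiderSignals) -> str:
    if flag_for_review(signals):
        # Flagging queues a human decision, with the rider able to get an
        # explanation and contest the outcome; it does not auto-exclude them.
        return f"rider {rider_id}: queued for human review"
    return f"rider {rider_id}: no action"


print(handle_rider("r-42", RiderSignals(18.0, 4.6, 3)))  # queued for human review
```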

Glovo, Foodinho’s parent entity — which is named as the owner of the platform in the Garante’s injunction — was contacted for comment.

A company spokesperson told us they were discussing a response — so we’ll update this report if we get one.

Glovo acquired the Italian food delivery company Foodinho back in 2016, making its first foray into international expansion. The Barcelona-based business went on to try to build out a business in the Middle East and LatAm — before retrenching back to largely focus on Southern and Eastern Europe. (In 2018 Glovo also picked up the Foodora brand in Italy, which had been owned by German rival Delivery Hero.)

The Garante says it collaborated with Spain’s privacy watchdog, the AEPD — which is Glovo’s lead data protection supervisor under the GDPR — on the investigation into Foodinho and the platform tech provided to it by Glovo.

Its press release also notes that Glovo is the subject of “an independent procedure” carried out by the AEPD, which it says it’s also assisting with.

The Spanish watchdog confirmed to TechCrunch that joint working between the AEPD and the Garante had resulted in the resolution against the Glovo-owned company, Foodinho.

The AEPD also said it has undertaken its own procedures against Glovo — pointing to a 2019 sanction related to the latter not appointing a data protection officer, as is required by the GDPR. The watchdog later issued Glovo with a fine of €25,000 for that compliance failure.

However it’s not clear why the AEPD has — seemingly — not taken a deep dive look at Glovo’s own compliance with Article 22 of the GDPR. (We’ve asked it for more on this and will update if we get a response.)

It did point us to recently published guidance on data protection and labor relations, which it worked on with Spain’s Ministry of Labor and the employers and trade union organizations, and which it said includes information on the right of a works council to be informed by a platform company of the parameters on which the algorithms or artificial intelligence systems are based — including “the elaboration of profiles, which may affect the conditions, access and maintenance of employment”.

Earlier this year the Spanish government agreed upon a labor reform to expand the protections available to platform workers by recognizing platform couriers as employees.

The amendments to the Spanish Workers Statute Law were approved by Royal Decree in May — but aren’t due to start being applied until the middle of next month, per El Pais.

Notably, the reform also contains a provision that requires workers’ legal representatives to be informed of the criteria powering any algorithms or AI systems that are used to manage them and which may affect their working conditions — such as those affecting access to employment or rating systems that monitor performance or profile workers. And that additional incoming algorithmic transparency provision has evidently been factored into the AEPD’s guidance.

So it may be that the watchdog is giving affected platforms like Glovo a few months’ grace to allow them to get their systems in order for the new rules.

Spanish labor law also, of course, remains distinct from Italian law, so there will be ongoing differences of application related to elements that concern delivery apps, regardless of what appears to be a similar trajectory on the issue of expanding platform workers’ rights.

Back in January, for example, an Italian court found that a reputation-ranking algorithm that had been used by another on-demand delivery app, Deliveroo, had discriminated against riders because it had failed to distinguish between legally protected reasons for withholding labor (e.g., because a rider was sick, or exercising their protected right to strike) and other reasons for not being as productive as they’d indicated they would be.

In that case, Deliveroo said the judgement referred to a historic booking system that it said was no longer used in Italy or any other markets.

More recently, a tribunal in Bologna found a collective bargaining agreement signed by AssoDelivery, a trade association that represents a number of delivery platforms in the market (including Deliveroo and Glovo), and the UGL, a minority union with far-right affiliations, to be unlawful.

Deliveroo told us it planned to appeal that ruling.

The agreement attracted controversy because it seeks to derogate unfavorably from Italian law that protects workers, and because the signing union is not considered representative enough of the sector.

Zooming out, EU lawmakers are also looking at the issue of platform workers’ rights — kicking off a consultation in February on how to improve working conditions for gig workers, with the possibility that Brussels could propose legislation later this year.

However platform giants have seen the exercise as an opportunity to lobby for deregulation — pushing to reduce employment standards for gig workers across the EU. The strategy looks intended to circumvent, or at least limit, momentum for beefed-up rules coming at a national level, such as Spain’s labor reform.

#algorithmic-accountability, #artificial-intelligence, #barcelona, #deliveroo, #delivery-hero, #europe, #european-union, #food-delivery, #gdpr, #general-data-protection-regulation, #glovo, #italy, #labor, #online-food-ordering, #policy, #privacy, #spain

Europe’s cookie consent reckoning is coming

Cookie pop-ups getting you down? Complaints that the web is ‘unusable’ in Europe because of frustrating and confusing ‘data choices’ notifications that get in the way of what you’re trying to do online certainly aren’t hard to find.

What is hard to find is the ‘reject all’ button that lets you opt out of non-essential cookies which power unpopular stuff like creepy ads. Yet the law says there should be an opt-out clearly offered. So people who complain that EU ‘regulatory bureaucracy’ is the problem are taking aim at the wrong target.

EU law on cookie consent is clear: Web users should be offered a simple, free choice — to accept or reject.

The problem is that most websites simply aren’t compliant. They choose to make a mockery of the law by offering a skewed choice: Typically a super simple opt-in (to hand them all your data) vs a highly confusing, frustrating, tedious opt-out (and sometimes even no reject option at all).

Make no mistake: This is ignoring the law by design. Sites are choosing to try to wear people down so they can keep grabbing their data by only offering the most cynically asymmetrical ‘choice’ possible.

However, since that’s not how cookie consent is supposed to work under EU law, sites that are doing this are opening themselves up to large fines under the General Data Protection Regulation (GDPR) and/or ePrivacy Directive for flouting the rules.

See, for example, these two whopping fines handed to Google and Amazon in France at the back end of last year for dropping tracking cookies without consent…

While those fines were certainly head-turning, we haven’t generally seen much EU enforcement on cookie consent — yet.

This is because data protection agencies have mostly taken a softly-softly approach to bringing sites into compliance. But there are signs enforcement is going to get a lot tougher. For one thing, DPAs have published detailed guidance on what proper cookie compliance looks like — so there are zero excuses for getting it wrong.

Some agencies had also been offering compliance grace periods to allow companies time to make the necessary changes to their cookie consent flows. But it’s now a full three years since the EU’s flagship data protection regime (GDPR) came into application. So, again, there’s no valid excuse to still have a horribly cynical cookie banner. It just means a site is trying its luck by breaking the law.

There is another reason to expect cookie consent enforcement to dial up soon, too: European privacy group noyb is today kicking off a major campaign to clean up the trashfire of non-compliance — with a plan to file up to 10,000 complaints against offenders over the course of this year. And as part of this action it’s offering freebie guidance for offenders to come into compliance.

Today it’s announcing the first batch of 560 complaints already filed against sites, large and small, located all over the EU (33 countries are covered). noyb said the complaints target companies that range from large players like Google and Twitter to local pages “that have relevant visitor numbers”.

“A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button,” said noyb chair and long-time EU privacy campaigner, Max Schrems, in a statement.

“Instead of giving a simple yes or no option, companies use every trick in the book to manipulate users. We have identified more than fifteen common abuses. The most common issue is that there is simply no ‘reject’ button on the initial page,” he added. “We focus on popular pages in Europe. We estimate that this project can easily reach 10,000 complaints. As we are funded by donations, we provide companies a free and easy settlement option — contrary to law firms. We hope most complaints will quickly be settled and we can soon see banners become more and more privacy friendly.”

To scale its action, noyb developed a tool which automatically parses cookie consent flows to identify compliance problems (such as no opt-out being offered at the top layer; or confusing button coloring; or bogus ‘legitimate interest’ opt-ins, to name a few of the many chronicled offences) and automatically creates a draft report which can be emailed to the offender after it’s been reviewed by a member of the not-for-profit’s legal staff.
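
As a rough idea of the kind of heuristic check such a tool might run, here is an illustrative sketch in Python. To be clear, it is not noyb’s code; the keyword list and the two rules are assumptions chosen to mirror the violations described above.

```python
# Illustrative sketch of a first-layer cookie banner check; not noyb's actual tool.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

REJECT_WORDS = ("reject", "decline", "refuse", "disagree")


def check_banner(banner_html: str) -> list:
    """Return a list of potential first-layer issues spotted in a cookie banner."""
    soup = BeautifulSoup(banner_html, "html.parser")
    labels = [el.get_text(strip=True).lower() for el in soup.find_all(["button", "a"])]
    issues = []
    # Rule 1: no reject option offered on the top layer.
    if not any(any(word in label for word in REJECT_WORDS) for label in labels):
        issues.append("no 'reject' option on the first layer")
    # Rule 2: a pre-ticked opt-in box (e.g. a 'legitimate interest' toggle).
    if soup.find("input", checked=True):
        issues.append("pre-checked opt-in box")
    return issues


banner = '<div><button>Accept all</button><a href="#">Manage settings</a></div>'
print(check_banner(banner))  # ["no 'reject' option on the first layer"]
```

A human reviewer would still need to confirm each finding before a report goes out, which is exactly the workflow noyb describes.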

It’s an innovative, scalable approach to tackling systematically cynical cookie manipulation in a way that could really move the needle and clean up the trashfire of horrible cookie pop-ups.

noyb is even giving offenders a warning first — and a full month to clean up their ways — before it will file an official complaint with their relevant DPA (which could lead to an eye-watering fine).

Its first batch of complaints are focused on the OneTrust consent management platform (CMP), one of the most popular template tools used in the region — and which European privacy researchers have previously shown (cynically) provides its client base with ample options to set non-compliant choices like pre-checked boxes… Talk about taking the biscuit.

A noyb spokeswoman said it’s started with OneTrust because its tool is popular but confirmed the group will expand the action to cover other CMPs in the future.

The first batch of noyb’s cookie consent complaints reveal the rotten depth of dark patterns being deployed — with 81% of the 500+ pages not offering a reject option on the initial page (meaning users have to dig into sub-menus to try to find it); and 73% using “deceptive colors and contrasts” to try to trick users into clicking the ‘accept’ option.

noyb’s assessment of this batch also found that a full 90% did not provide a way to easily withdraw consent as the law requires.

Cookie compliance problems found in the first batch of sites facing complaints (Image credit: noyb)

It’s a snapshot of truly massive enforcement failure. But dodgy cookie consents are now operating on borrowed time.

Asked if it was able to work out how prevalent cookie abuse might be across the EU based on the sites it crawled, noyb’s spokeswoman said it was difficult to determine, owing to technical difficulties encountered through its process, but she said an initial intake of 5,000 websites was whittled down to 3,600 sites to focus on. And of those it was able to determine that 3,300 violated the GDPR.

That still left 300 — as either having technical issues or no violations — but, again, the vast majority (90%) were found to have violations. And with so much rule-breaking going on it really does require a systematic approach to fixing the ‘bogus consent’ problem — so noyb’s use of automation tech is very fitting.

More innovation is also on the way from the not-for-profit — which told us it’s working on an automated system that will allow Europeans to “signal their privacy choices in the background, without annoying cookie banners”.

At the time of writing it couldn’t provide us with more details on how that will work (presumably it will be some kind of browser plug-in) but said it will be publishing more details “in the next weeks” — so hopefully we’ll learn more soon.

A browser plug-in that can automatically detect and select the ‘reject all’ button (even if only from a subset of the most prevalent CMPs) sounds like it could revive the ‘do not track’ dream. At the very least, it would be a powerful weapon to fight back against the scourge of dark patterns in cookie banners and kick non-compliant cookies to digital dust.

 

#advertising-tech, #cookie-consent, #data-protection, #eprivacy, #europe, #european-union, #gdpr, #general-data-protection-regulation, #max-schrems, #noyb, #policy, #privacy, #tc

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefiting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics charging commissioner Helen Dixon and her team with failing to investigate scores of complaints and, in the instances when it has opened probes, taking years to investigate before opting for weak enforcement at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review, but a resolution has yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ‘emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal base for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent, it’s not meeting the bar either, because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

 

#data-protection, #data-protection-commission, #data-protection-law, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #helen-dixon, #ireland, #privacy, #privacy-policy, #social, #social-media, #terms-of-service, #whatsapp

Disqus facing $3M fine in Norway for tracking users without consent

Disqus, a commenting plugin that’s used by a number of news websites and which can share user data for ad targeting purposes, has got into hot water in Norway for tracking users without their consent.

The local data protection agency said today it has notified the U.S.-based company of an intent to fine it €2.5 million (~$3M) for failures to comply with requirements in Europe’s General Data Protection Regulation (GDPR) on accountability, lawfulness and transparency.

Disqus’ parent, Zeta Global, has been contacted for comment.

Datatilsynet said it acted following a 2019 investigation by Norway’s national press — which found that default settings buried in the Disqus plug-in opted sites into sharing data on millions of users in markets including the U.S.

And while in most of Europe the company was found to have applied an opt-in to gather consent from users to be tracked — likely in order to avoid trouble with the GDPR — it appears to have been unaware that the regulation applies in Norway.

Norway is not a member of the European Union but is in the European Economic Area — which adopted the GDPR in July 2018, slightly after it came into force elsewhere in the EU. (Norway transposed the regulation into national law also in July 2018.)

The Norwegian DPA writes that Disqus’ unlawful data-sharing has “predominantly been an issue in Norway” — and says that seven websites are affected: NRK.no/ytring, P3.no, tv.2.no/broom, khrono.no, adressa.no, rights.no and document.no.

“Disqus has argued that their practices could be based on the legitimate interest balancing test as a lawful basis, despite the company being unaware that the GDPR applied to data subjects in Norway,” the DPA’s director-general, Bjørn Erik Thon, goes on.

“Based on our investigation so far, we believe that Disqus could not rely on legitimate interest as a legal basis for tracking across websites, services or devices, profiling and disclosure of personal data for marketing purposes, and that this type of tracking would require consent.”

“Our preliminary conclusion is that Disqus has processed personal data unlawfully. However, our investigation also discovered serious issues regarding transparency and accountability,” Thon added.

The DPA said the infringements are serious and have affected “several hundred thousands of individuals”, adding that the affected personal data “are highly private and may relate to minors or reveal political opinions”.

“The tracking, profiling and disclosure of data was invasive and nontransparent,” it added.

The DPA has given Disqus until May 31 to comment on the findings ahead of issuing a fine decision.

Publishers reminded of their responsibility

Datatilsynet has also fired a warning shot at local publishers who were using the Disqus platform — pointing out that website owners “are also responsible under the GDPR for which third parties they allow on their websites”.

So, in other words, even if you didn’t know about a default data-sharing setting that’s not an excuse because it’s your legal responsibility to know what any code you put on your website is doing with user data.

The DPA adds that “in the present case” it has focused the investigation on Disqus — providing publishers with an opportunity to get their houses in order ahead of any future checks it might make.

Norway’s DPA also has some admirably plain language to explain the “serious” problem of profiling people without their consent. “Hidden tracking and profiling is very invasive,” says Thon. “Without information that someone is using our personal data, we lose the opportunity to exercise our rights to access, and to object to the use of our personal data for marketing purposes.

“An aggravating circumstance is that disclosure of personal data for programmatic advertising entails a high risk that individuals will lose control over who processes their personal data.”

Zooming out, the issue of adtech industry tracking and GDPR compliance has become a major headache for DPAs across Europe — which have been repeatedly slammed for failing to enforce the law in this area since GDPR came into application in May 2018.

In the UK, for example (which transposed the GDPR before Brexit so still has an equivalent data protection framework for now), the ICO has been investigating GDPR complaints about real-time bidding’s (RTB) use of personal data to run behavioral ads for years — yet hasn’t issued a single fine or order, despite repeatedly warning the industry that it’s acting unlawfully.

The regulator is now being sued by complainants over its inaction.

Ireland’s DPC, meanwhile — which is the lead DPA for a swathe of adtech giants which site their regional HQ in the country — has a number of open GDPR investigations into adtech (including RTB), but it has also failed to issue any decisions in this area almost three years after the regulation began being applied.

Its lack of action on adtech complaints has contributed significantly to rising domestic (and international) pressure on its GDPR enforcement record more generally, including from the European Commission. (And it’s notable that the latter’s most recent legislative proposals in the digital arena include provisions that seek to avoid the risk of similar enforcement bottlenecks.)

The story on adtech and the GDPR looks a little different in Belgium, though, where the DPA appears to be inching toward a major slap-down of current adtech practices.

A preliminary report last year by its investigatory division called into question the legal standard of the consents being gathered via a flagship industry framework designed by IAB Europe. This so-called ‘Transparency and Consent’ framework (TCF) was found not to comply with the GDPR’s principles of transparency, fairness and accountability, or the lawfulness of processing.

A final decision is expected on that case this year — but if the DPA upholds the division’s findings it could deal a massive blow to the behavioral ad industry’s ability to track and target Europeans.

Studies suggest Internet users in Europe would overwhelmingly choose not to be tracked if they were actually offered the GDPR standard of a specific, clear, informed and free choice, i.e. without any loopholes or manipulative dark patterns.

#advertising-tech, #belgium, #data-protection, #data-security, #disqus, #europe, #european-commission, #european-union, #gdpr, #general-data-protection-regulation, #ireland, #norway, #personal-data, #privacy, #programmatic-advertising, #united-kingdom, #zeta-global

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
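
For those who prefer to script the check, here is a minimal sketch against haveibeenpwned’s documented v3 API. It assumes you have an HIBP API key, it covers email lookups only (the site’s phone-number search for this particular breach works via the web form), and the “Facebook” breach-name match is an assumption used purely for illustration.

```python
# Minimal sketch: check an email address against the haveibeenpwned v3 API.
# Assumes a valid API key; endpoint and headers per the public v3 documentation.
import requests

API_KEY = "your-hibp-api-key"  # placeholder


def breaches_for(email: str) -> list:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-sketch"},
        timeout=10,
    )
    if resp.status_code == 404:  # address not found in any known breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]


if "Facebook" in breaches_for("someone@example.com"):
    print("This address appears in the leaked Facebook dataset")
```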

Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claims it fixed by September 2019, which has now led to the leak of data on 533M accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However even if Facebook ends up with a more substantial GDPR penalty for this breach the watchdog’s caseload backlog and plodding procedural pace makes it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders in Europe to step in and take a punt on suing for data-related compensation damages — with a number of other mass actions announced last year.

In the case of DRI its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE that it believes compensation claims which force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

#data-protection, #data-protection-commission, #data-security, #digital-rights, #digital-rights-ireland, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #ireland, #lawsuit, #litigation, #personal-data, #privacy, #social, #social-media, #tc, #twitter

Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means.”

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protection for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However the judgement, which was handed down on February 24, was issued by default — and Uber says it was not aware of the case until last week, claiming that is why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed.”

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE denied that correct procedure was not followed but welcomed the opportunity for Uber to respond to questions over how its driver ID systems operate in court, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human “anti-fraud” teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default judgement “robo-firing” challenge specifically targets Uber’s Hybrid Real-Time ID System — a system that incorporates facial recognition checks and which labor activists recently found misidentifying drivers in a number of instances.

It also pointed to a separate development this week in the U.K. where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of one of the drivers revoked after Uber routinely notified it of a dismissal (also triggered by Uber’s real time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the U.K. capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms.”

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent U.K. Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will seek to apply the same defence against the earlier “robo-firing” charge — when it argued its anti-fraud systems do not equate to automated decision making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 

#app-drivers-couriers-union, #artificial-intelligence, #automated-decisions, #europe, #european-union, #facial-recognition, #gdpr, #general-data-protection-regulation, #gig-worker, #james-farrar, #labor, #lawsuit, #london, #netherlands, #transport-for-london, #uber, #united-kingdom

Ireland opens GDPR investigation into Facebook leak

Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported earlier this month.

Here’s the Irish Data Protection Commission’s statement:

“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.

The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.

Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”

Facebook has been contacted for comment.

The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner Didier Reynders tweeted Monday that he had spoken with Helen Dixon about the Facebook data leak.

“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.

A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.

“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”

“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.

The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.

Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failure — Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.

Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.

Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC.

Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.

The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.

On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.

He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.

“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.

Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by direct dealing only with its lead EU data supervisor in Ireland.

A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. A lack of progress on unblocking GDPR bottlenecks is thus a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one, as EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.

It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants. That said, getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.

And in the meantime the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform, which gives out the names and phone numbers of users who have liked a Facebook page (for a fee, unless the page has fewer than 100 likes).

The publication said this data appears to be separate from the 533M+ scraped dataset, having run checks against the larger dataset via the breach advice site haveibeenpwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot, and he did not find any matches.

We contacted Facebook about the source of this leaked data and will update this report with any response.

In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body composed of representatives from Member State data protection agencies that works to ensure consistent application of the GDPR.

However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.

“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.

But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.

 

#data-breach, #dpc, #eu, #europe, #facebook, #gdpr, #ireland, #privacy, #social, #tc

Facebook’s tardy disclosure of breach timing raises GDPR compliance questions

Whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.

While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.

That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.

Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.
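
For a rough sense of scale only: the GDPR expresses those caps in euros or as a share of global annual turnover, whichever is higher, but plugging in the ballpark $85BN+ annual revenue figure cited later in this piece gives an idea of the theoretical ceilings. The sketch below is an illustration, not a legal calculation.

```python
# Illustrative only: theoretical GDPR fine ceilings against a ballpark revenue figure.
# The regulation caps fines at the higher of a fixed euro amount or a percentage of
# global annual turnover; the dollar figure below is the approximate annual revenue
# cited later in this article, used purely for scale.
annual_turnover = 85_000_000_000  # ~$85BN+, per the figure cited later in the piece

notification_tier_cap = 0.02 * annual_turnover  # up to 2% for notification failures
serious_tier_cap = 0.04 * annual_turnover       # up to 4% for more serious violations

print(f"2% ceiling: ${notification_tier_cap:,.0f}")  # ~$1.7BN
print(f"4% ceiling: ${serious_tier_cap:,.0f}")       # ~$3.4BN
```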

The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.

Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear, writing that it “seems to comprise the original 2018 (pre-GDPR) dataset”, a reference to an earlier breach incident Facebook disclosed in 2018 relating to a vulnerability in its phone lookup functionality, which Facebook had said occurred between June 2017 and April 2018. The regulator also wrote that the newly published dataset looked to have been “combined with additional records, which may be from a later period”.

Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.

Another new detail that emerged in Facebook’s blog post yesterday was that users’ data was scraped not via the aforementioned phone lookup vulnerability but via another method altogether: a vulnerability in its contact importer tool.

This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.

In this way a spammer, for example, could upload a database of potential phone numbers and link them not only to names but to other data like birth date, email address and location, all the better to phish you with.
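
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how an unthrottled contact-matching endpoint leaks data. The names and data are invented and this is not Facebook’s actual code or API; the point is that the damage comes from the missing controls (volume caps, rate limits, abuse detection), not from contact matching as such.

```python
# Simplified, hypothetical sketch of an unthrottled contact-matching endpoint.
# Toy data throughout; this is not Facebook's implementation.

# A toy "account database" keyed by phone number.
ACCOUNTS = {
    "+15551230001": {"name": "Alice Example", "birthday": "1990-01-01", "city": "Dublin"},
    "+15551230002": {"name": "Bob Example", "birthday": "1985-06-15", "city": "Milan"},
}

def match_contacts(uploaded_numbers):
    """Return profile data for every uploaded number that matches an account.

    The vulnerability described above is not the matching itself but the absence
    of limits on who can ask, how often, and how much data comes back, which lets
    a caller submit huge machine-generated number ranges and harvest the hits.
    """
    return {n: ACCOUNTS[n] for n in uploaded_numbers if n in ACCOUNTS}

# An abuser doesn't need a real address book; they can simply enumerate a range.
candidate_range = [f"+1555123{i:04d}" for i in range(10_000)]
harvested = match_contacts(candidate_range)
print(f"{len(harvested)} profiles harvested from {len(candidate_range)} guesses")
```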

In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely within the period when the GDPR was in force.

As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms, and to do so without undue delay (where feasible, within 72 hours of becoming aware of it).

Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.

Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.

Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.

And, indeed, why it’s sought to downplay the significance of the leaked information by dubbing it “old data”. (Even though few people regularly change their mobile numbers, email addresses, full names and other biographical information, and no one (legally) gets a new birth date… )

Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.

The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts, thereby making this personal information ‘publicly available for scraping/no longer private/not covered by data protection legislation’.

This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.

Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance, because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.

Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.

In short, it’s the Cambridge Analytica scandal all over again.

Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past, and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)

We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.

Then it told us it would not be commenting on its communications with regulators.

Under the GDPR, if a breach poses a high risk to users’ rights and freedoms, a data controller is required to notify affected individuals, the rationale being that prompt notification can help people take steps to protect themselves from the associated risks, such as fraud and ID theft.

Yesterday Facebook also said it has no plans to notify affected users.

Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.

 

#data-controller, #data-protection, #dpc, #europe, #european-union, #facebook, #federal-trade-commission, #gdpr, #general-data-protection-regulation, #personal-data, #privacy, #security-breaches, #united-states

Answers being sought from Facebook over latest data breach

Facebook’s lead data protection regulator in the European Union is seeking answers from the tech giant over a major data breach reported on over the weekend.

The breach was reported by Business Insider on Saturday, which said personal data (including email addresses and mobile phone numbers) from more than 500M Facebook accounts had been posted to a low-level hacking forum, making the personal information of hundreds of millions of Facebook users freely available.

“The exposed data includes the personal information of over 533M Facebook users from 106 countries, including over 32M records on users in the US, 11M on users in the UK, and 6M on users in India,” Business Insider said, noting that the dump includes phone numbers, Facebook IDs, full names, locations, birthdates, bios, and some email addresses.

Facebook responded to the report of the data dump by saying it related to a vulnerability in its platform it had “found and fixed” in August 2019, dubbing the info “old data” that it also claimed had been reported on in 2019. However, as security experts were quick to point out, most people don’t change their mobile phone number often, so Facebook’s knee-jerk attempt to downplay the breach looks like an ill-considered effort to deflect blame.

It’s also not clear whether all the data is ‘old’, as Facebook’s initial response suggested.

There are plenty of reasons for Facebook to try to downplay yet another data scandal. Not least because, under European Union data protection rules, there are stiff penalties for companies that fail to promptly report significant breaches to the relevant authorities, and indeed for the breaches themselves, as the bloc’s General Data Protection Regulation (GDPR) bakes in an expectation of security by design and default.

By pushing the claim that the leaked data is “old” Facebook may be hoping to peddle the idea that it predates the GDPR coming into application (in May 2018).

However the Irish Data Protection Commission (DPC), Facebook’s lead data supervisor in the EU, told TechCrunch that it’s not abundantly clear whether that’s the case at this point.

“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset and combined with additional records, which may be from a later period,” the DPC’s deputy commissioner, Graham Doyle said in a statement.

“A significant number of the users are EU users. Much of the data appears to [have] been data scraped some time ago from Facebook public profiles,” he also said.

“Previous datasets were published in 2019 and 2018 relating to a large-scale scraping of the Facebook website which at the time Facebook advised occurred between June 2017 and April 2018 when Facebook closed off a vulnerability in its phone lookup functionality. Because the scraping took place prior to GDPR, Facebook chose not to notify this as a personal data breach under GDPR.”

Doyle said the regulator sought to establish “the full facts” about the breach from Facebook over the weekend and is “continuing to do so” — making it clear that there’s an ongoing lack of clarity on the issue, despite the breach itself being claimed as “old” by Facebook.

The DPC also made it clear that it did not receive any proactive communication from Facebook on the issue — despite the GDPR putting the onus on companies to proactively inform regulators about significant data protection issues. Rather the regulator had to approach Facebook — using a number of channels to try to obtain answers from the tech giant.

Through this approach the DPC said it learnt Facebook believes the information was scraped prior to the changes it made to its platform in 2018 and 2019 in light of vulnerabilities identified in the wake of the Cambridge Analytica data misuse scandal.

A huge database of Facebook phone numbers was found unprotected online back in September 2019.

Facebook had also earlier admitted to a vulnerability with a search tool it offered — revealing in April 2018 that somewhere between 1BN and 2BN users had had their public Facebook information scraped via a feature which allowed people to look up users by inputting a phone number or email — which is one potential source for the cache of personal data.

Last year Facebook also filed a lawsuit against two companies it accused of engaging in an international data scraping operation.

But the fallout from its poor security design choices continues to dog Facebook years after its ‘fix’.

More importantly, the fallout from the massive personal data spill continues to affect Facebook users whose information is now being openly offered for download on the Internet — opening them up to the risk of spam and phishing attacks and other forms of social engineering (such as for attempted identity theft).

There are still more questions than answers about how this “old” cache of Facebook data came to be published online for free on a hacker forum.

The DPC said it was told by Facebook that “the data at issue appears to have been collated by third parties and potentially stems from multiple sources”.

The company also claimed the matter “requires extensive investigation to establish its provenance with a level of confidence sufficient to provide your Office and our users with additional information” — which is a long way of suggesting that Facebook has no idea either.

“Facebook assures the DPC it is giving highest priority to providing firm answers to the DPC,” Doyle also said. “A percentage of the records released on the hacker website contain phone numbers and email address of users.

“Risks arise for users who may be spammed for marketing purposes but equally users need to be vigilant in relation to any services they use that require authentication using a person’s phone number or email address in case third parties are attempting to gain access.”

“The DPC will communicate further facts as it receives information from Facebook,” he added.

At the time of writing Facebook had not responded to a request for comment about the breach.

Facebook users who are concerned whether their information is in the dump can run a search for their phone number or email address via the data breach advice site, haveibeenpwned.
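
For anyone who would rather script the lookup than use the site, below is a minimal sketch against haveibeenpwned’s documented v3 API for checking an email address (the API requires an API key and a descriptive user agent; consult HIBP’s own documentation for current requirements, pricing, rate limits and phone-number search options, and treat the details here as assumptions to verify).

```python
# Minimal sketch: check one email address against haveibeenpwned's v3 API.
# Assumes the documented breachedaccount endpoint; verify against current HIBP docs.
import json
import urllib.error
import urllib.parse
import urllib.request

API_KEY = "YOUR_HIBP_API_KEY"  # placeholder: obtain a key from haveibeenpwned.com

def breaches_for(email: str) -> list[str]:
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(email))
    req = urllib.request.Request(url, headers={
        "hibp-api-key": API_KEY,
        "user-agent": "breach-check-example",  # HIBP requires a descriptive user agent
    })
    try:
        with urllib.request.urlopen(req) as resp:
            # Default (truncated) responses are a JSON list of breach names.
            return [breach["Name"] for breach in json.load(resp)]
    except urllib.error.HTTPError as err:
        if err.code == 404:  # 404 means the address isn't in any indexed breach
            return []
        raise

print(breaches_for("someone@example.com"))
```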

According to haveibeenpwned’s Troy Hunt, this latest Facebook data dump contains far more mobile phone numbers than email addresses.

He writes that he was sent the data a few weeks ago — initially getting 370M records and later “the larger corpus which is now in very broad circulation”.

“A lot of it is the same, but a lot of it is also different,” Hunt also notes, adding: “There is not one clear source of this data.”

 

#computer-security, #data-breach, #data-security, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #social-media, #tc, #troy-hunt, #united-kingdom

Competition challenge to Facebook’s ‘superprofiling’ of users sparks referral to Europe’s top court

A German court that’s considering Facebook’s appeal against a pioneering pro-privacy order by the country’s competition authority to stop combining user data without consent has said it will refer questions to Europe’s top court.

In a press release today the Düsseldorf court writes [translated by Google]: “…the Senate has come to the conclusion that a decision on the Facebook complaints can only be made after referring to the Court of Justice of the European Union (ECJ).

“The question of whether Facebook is abusing its dominant position as a provider on the German market for social networks because it collects and uses the data of its users in violation of the GDPR can not be decided without referring to the ECJ. Because the ECJ is responsible for the interpretation of European law.”

The Bundeskartellamt (Federal Cartel Office, FCO)’s ‘exploitative abuse’ case links Facebook’s ability to gather data on users of its products from across the web, via third party sites (where it deploys plug-ins and tracking pixels), and across its own suite of products (Facebook, Instagram, WhatsApp, Oculus), to its market power — asserting this data-gathering is not legal under EU privacy law as users are not offered a choice.

The associated competition contention, therefore, is that inappropriate contractual terms allow Facebook to build a unique database for each individual user and unfairly gain market power over rivals who don’t have such broad and deep reach into users’ personal data.

The FCO’s case against Facebook is seen as highly innovative because it combines the (usually) separate (and even conflicting) tracks of competition and privacy law, offering the tantalizing prospect, were the order actually to be enforced, of a structural separation of Facebook’s business empire without having to order a break-up of its various business units.

However enforcement at this point — some five years after the FCO started investigating Facebook’s data practices in March 2016 — is still a big if.

Soon after the FCO’s February 2019 order to stop combining user data, Facebook succeeded in blocking the order via a court appeal in August 2019.

But then last summer Germany’s federal court unblocked the ‘superprofiling’ case — reviving the FCO’s challenge to the tech giant’s data-harvesting-by-default.

The latest development means another long wait to see whether competition law innovation can achieve what the EU’s privacy regulators have so far failed to do — with multiple GDPR challenges against Facebook still sitting undecided on the desk of the Irish Data Protection Commission.

Still, it’s fair to say that neither route looks capable of ‘moving fast and breaking’ platform power at this point.

In its opinion the Düsseldorf court does appear to raise questions over the level of Facebook’s data collection, suggesting the company could avoid antitrust concerns by offering users a choice to base profiling on only the data they upload themselves rather than on a wider range of data sources, and querying its use of Instagram and Oculus data.

But it also found fault with the FCO’s approach — saying Facebook’s US and Irish business entities were not granted a fair hearing before the order against its German sister company was issued, among other procedural quibbles.

Referrals to the EU’s Court of Justice can take years to return a final interpretation.

In this case the ECJ will likely be asked to consider whether the FCO has exceeded its remit, although the exact questions being referred by the court have not been confirmed — with a written reference set to be issued in the next few weeks, per its press release.

In a statement responding to the court’s announcement today, a Facebook spokesperson said:

“Today, the Düsseldorf Court has expressed doubts as to the legality of the Bundeskartellamt’s order and decided to refer questions to the Court of Justice of the European Union. We believe that the Bundeskartellamt’s order also violates European law.”

#competition-law, #europe, #facebook, #gdpr, #lawsuit, #privacy