Facebook loses last ditch attempt to derail DPC decision on its EU-US data flows

Facebook has failed in its bid to prevent its lead EU data protection regulator from pushing ahead with a decision on whether to order suspension of its EU-US data flows.

The Irish High Court has just issued a ruling dismissing the company’s challenge to the Irish Data Protection Commission’s (DPC) procedures.

The case has huge potential operational significance for Facebook, which may be forced to store European users’ data locally if it’s ordered to stop taking their information to the U.S. for processing.

Last September the Irish data watchdog made a preliminary order warning Facebook it may have to suspend EU-US data flows. Facebook responded by filing for a judicial review and obtaining a stay on the DPC’s procedure. That stay is now set to be lifted.

We understand the involved parties have been given a few days to read the High Court judgement ahead of another hearing on Thursday — when the court is expected to formally lift Facebook’s stay on the DPC’s investigation (and settle the matter of case costs).

The DPC declined to comment on today’s ruling in any detail — or on the timeline for making a decision on Facebook’s EU-US data flows — but deputy commissioner Graham Doyle told us it “welcomes today’s judgment”.

Its preliminary suspension order last fall followed a landmark judgement by Europe’s top court in the summer — when the CJEU struck down a flagship transatlantic agreement on data flows, on the grounds that US mass surveillance is incompatible with the EU’s data protection regime.

The fall-out from the CJEU’s invalidation of Privacy Shield (as well as an earlier ruling striking down its predecessor Safe Harbor) has been ongoing for years — as companies that rely on shifting EU users’ data to the US for processing have had to scramble to find valid legal alternatives.

While the CJEU did not outright ban data transfers out of the EU, it made it crystal clear that data protection agencies must step in and suspend international data flows if they suspect EU data is at risk. And EU-US data flows were signalled as clearly at risk, given the court simultaneously struck down Privacy Shield.

The problem for some businesses is that there may simply not be a valid legal alternative. And that’s where things look particularly sticky for Facebook, since its service falls under NSA surveillance via Section 702 of FISA (which is used to authorize mass surveillance programs like Prism).

So what happens now for Facebook, following the Irish High Court ruling?

As ever in this complex legal saga — which has been going on in various forms since an original 2013 complaint made by European privacy campaigner Max Schrems — there’s still some track left to run.

Once the stay is lifted the DPC will have two enquiries in train: the original one, related to Schrems’ complaint, and an own-volition enquiry it decided to open last year — when it said it was pausing investigation of Schrems’ original complaint.

Schrems, via his privacy not-for-profit noyb, filed for his own judicial review of the DPC’s proceedings. And the DPC quickly agreed to settle — agreeing in January that it would ‘swiftly’ finalize Schrems’ original complaint. So things were already moving.

The tl;dr of all that is this: The last of the bungs that have been used to delay regulatory action in Ireland over Facebook’s EU-US data flows is finally being extracted — and the DPC must decide on the complaint.

Or, to put it another way, the clock is ticking for Facebook’s EU-US data flows. So expect another wordy blog post from Nick Clegg very soon.

Schrems previously told TechCrunch he expects the DPC to issue a suspension order against Facebook within months — perhaps as soon as this summer (and failing that by fall).

In a statement reacting to the Court ruling today he reiterated that position, saying: “After eight years, the DPC is now required to stop Facebook’s EU-US data transfers, likely before summer. Now we simply have two procedures instead of one.”

When Ireland (finally) decides, that won’t mark the end of the regulatory procedure, though.

A decision by the DPC on Facebook’s transfers would need to go to the other EU DPAs for review — and if there’s disagreement there (as seems highly likely, given what’s happened with draft DPC GDPR decisions) it will trigger a further delay (weeks to months) as the European Data Protection Board seeks consensus.

If a majority of EU DPAs can’t agree the Board may itself have to cast a deciding vote. So that could extend the timeline around any suspension order. But an end to the process is, at long last, in sight.

And, well, if a critical mass of domestic pressure is ever going to build for pro-privacy reform of U.S. surveillance laws, now looks like a really good time…

“We now expect the DPC to issue a decision to stop Facebook’s data transfers before summer,” added Schrems. “This would require Facebook to store most data from Europe locally, to ensure that Facebook USA does not have access to European data. The other option would be for the US to change its surveillance laws.”

Facebook has been contacted for comment on the Irish High Court ruling.

Update: The company has now sent us this statement:

“Today’s ruling was about the process the IDPC followed. The larger issue of how data can move around the world remains of significant importance to thousands of European and American businesses that connect customers, friends, family and employees across the Atlantic. Like other companies, we have followed European rules and rely on Standard Contractual Clauses, and appropriate data safeguards, to provide a global service and connect people, businesses and charities. We look forward to defending our compliance to the IDPC, as their preliminary decision could be damaging not only to Facebook, but also to users and other businesses.”


Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
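(For the programmatically inclined, the same check can be scripted against Have I Been Pwned’s v3 API. A minimal sketch follows; note the service requires a paid API key, and the placeholder key and account below are purely illustrative.)

```python
import requests  # pip install requests

HIBP_API_KEY = "your-api-key"  # hypothetical placeholder; HIBP v3 requires a paid key

def breaches_for(account: str) -> list:
    """Return names of known breaches for an email address (or [] if none)."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "breach-check-demo"},
        params={"truncateResponse": "true"},
    )
    if resp.status_code == 404:
        return []  # HIBP signals "no breaches found" with a 404
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]

print("Facebook" in breaches_for("you@example.com"))
```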

Information leaked via the breach includes Facebook IDs, location, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered, and claims it fixed by September 2019, which has now led to the leak of data on 533M accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in across Europe and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.

In the case of DRI its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE that it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.


How startups can ensure CCPA and GDPR compliance in 2021

Data is the most valuable asset for any business in 2021. If your business is online and collects customers’ personal information, you are dealing in data, which means data privacy compliance regulations will apply to everyone — no matter the company’s size.

Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.

Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.

For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of annual global turnover, whichever is higher. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed during a data breach.

If the data of 1,000 customers is compromised in a cybersecurity incident, that would add up to $7.5 million. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business costs.
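As a back-of-the-envelope sketch of that exposure math (illustrative only; the statutes are more nuanced, and the CCPA distinguishes unintentional from intentional violations):

```python
def ccpa_exposure(people_affected: int,
                  min_fine: int = 2_500,
                  max_fine: int = 7_500) -> tuple:
    """Rough CCPA fine range: per-person penalty times people affected."""
    return people_affected * min_fine, people_affected * max_fine

low, high = ccpa_exposure(1_000)
print(f"${low:,} to ${high:,}")  # $2,500,000 to $7,500,000
```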

It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.

Challenges of data compliance for startups

Compliance failures are already proving costly in practice: Vodafone Spain was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show schools, associations, municipalities, homeowners associations and more are also receiving fines.

GDPR regulators have issued $332.4 million in fines since the law came into force almost three years ago and are being more aggressive with enforcement. California’s attorney general started CCPA enforcement on July 1, 2020, while the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information on residents of California, a major hub of U.S. startups.

That is why data privacy compliance is now key to a successful business. Unfortunately, many startups are at a disadvantage for many reasons.


Identiq, a privacy-friendly fraud prevention startup, secures $47M at Series A

Israeli fraud prevention startup Identiq has raised $47 million at Series A as the company eyes international growth, driven in large part by the spike in online spending during the pandemic.

The round was led by Insight Partners and Entrée Capital, with participation from Amdocs, Sony Innovation Fund by IGV, as well as existing investors Vertex Ventures Israel, Oryzn Capital, and Slow Ventures.

Fraud prevention is big business: the industry is slated to be worth $145 billion by 2026, ballooning eightfold in size compared to 2018. But it’s a data-hungry industry, fraught with security and privacy risks, because it has traditionally relied on sharing enormous sets of consumer data in order to learn who legitimate customers are and weed out the fraudsters.

Identiq takes a different, more privacy-friendly approach to fraud prevention, one that doesn’t require sharing a customer’s data with a third party.

“Before now, the only way companies could solve this problem was by exposing the data they were given by the user to a third party data provider for validation, creating huge privacy problems,” Identiq’s chief executive Itay Levy told TechCrunch. “We solved this by allowing these companies to validate that the data they’ve been given matches the data of other companies that already know and trust the user, without sharing any sensitive information at all.”

When an Identiq customer — such as an online store — sees a new customer for the first time, the store can ask other stores in Identiq’s network if they know or trust that new customer. This peer-to-peer network uses cryptography to help online stores anonymously vet new customers to help weed out bad actors, like fraudsters and scammers, without needing to collect private user data.
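To make that concrete, here is a minimal Python sketch of the general “vouch without sharing PII” idea, using keyed hashes so stores only ever exchange pseudonyms. To be clear, this illustrates the shape of the technique, not Identiq’s actual protocol; the company says its system keeps even these queries anonymous and uncorrelatable.

```python
import hmac, hashlib

NETWORK_KEY = b"shared-network-key"  # hypothetical; a real network would manage keys far more carefully

def pseudonym(email: str) -> bytes:
    # Keyed hash: raw PII never leaves the store that holds it.
    return hmac.new(NETWORK_KEY, email.lower().encode(), hashlib.sha256).digest()

class Store:
    def __init__(self, trusted_emails):
        # Each peer keeps only pseudonyms of customers it already knows and trusts.
        self.trusted = {pseudonym(e) for e in trusted_emails}

    def vouch(self, digest: bytes) -> bool:
        return digest in self.trusted

def vet_new_customer(email: str, peers: list) -> int:
    # More vouches means more confidence the new sign-up is a real, known person.
    digest = pseudonym(email)
    return sum(peer.vouch(digest) for peer in peers)

peers = [Store(["alice@example.com"]),
         Store(["alice@example.com", "bob@example.com"])]
print(vet_new_customer("alice@example.com", peers))    # 2
print(vet_new_customer("mallory@example.com", peers))  # 0
```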

So far, the company says it already counts Fortune 500 companies as customers.

Identiq said it plans to use the $47 million raise to hire and grow the company’s workforce, and aims to scale up its support for its international customers.


Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping photos from social media sites, also without permission, prompting Facebook, LinkedIn and Twitter to send cease-and-desist letters demanding it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

The company faces a similar challenge in court: a class action suit citing Illinois’ biometric protection law, the same law that last year dinged Facebook to the tune of $550 million.

The Canadian privacy watchdog rejected Clearview’s arguments, and said it would “pursue other actions” if the company does not follow its recommendations, which included stopping the collection of Canadians’ data and deleting all previously collected images. Clearview said in July that it had stopped providing its technology to Canadian customers, after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service were using the startup’s technology.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.


Inadequate federal privacy regulations leave US startups lagging behind Europe

“A new law to follow” seems unlikely to have featured on many business wishlists this holiday season, particularly if that law concerned data privacy. Digital privacy management is an area that takes considerable resources to whip into shape, and most SMBs just aren’t equipped for it.

But for 2021, I believe startups in the United States should be demanding that legislators deliver a federal privacy law. Yes, they should demand to be regulated.

For every day that goes by without agreed-upon federal standards for data, these companies lose competitive edge to the rest of the world. Soon there may be no coming back.


Businesses should not view privacy and trust infrastructure requirements as burdensome. They should view them as keys that can unlock the full power of the data they possess. They should stop thinking about privacy as compliance and begin thinking of it as a harmonization of the customer relationship. The rewards flowing to each party from such harmonization are bountiful. The U.S. federal government is in a unique position to help realize those rewards.

To understand what I mean, cast your eyes to Europe, where it’s become clear that the GDPR was nowhere near the final destination of EU data policy. Indeed it was just the launchpad. Europe’s data regime can frustrate (endless cookie banners anyone?), but it has set an agreed-upon standard of protection for citizens and elevated their trust in internet infrastructure.

For example, a Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. With a baseline standard established — seatbelts in every car — Europe is now squarely focused on raising the speed limit.

EU lawmakers recently unveiled plans for “A Europe fit for the Digital Age.” In the words of Internal Market Commissioner Thierry Breton, it’s a plan to make Europe “the most data-empowered continent in the world.”

Here are some pillars of the plan. While reading, imagine that you are a U.S.-based health tech startup. Imagine the disadvantage you would face against a similar, European-based company, if these initiatives came to fruition:

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing.
  • A push to make public-sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation.
  • Support for cloud infrastructure, platforms and systems to support the data reuse goals, with investments in European high-impact projects on European data spaces and trustworthy, energy-efficient cloud infrastructures.
  • Sector-specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the Green New Deal, mobility or health.

There are so many ways governments can help businesses maximize their data leverage in ways that improve society. But the American public currently has no appetite for that. They don’t trust the internet.

They want to see Mark Zuckerberg and Jeff Bezos sweating it out under Senate Committee questioning. Until we trust our leaders to protect basic online rights, widespread data empowerment initiatives will not be politically viable.

In Europe, the equation is totally different. GDPR was the foundation of a European data strategy, not the capstone.

While the EU powers forward, America’s ability to enact federal privacy reform is stymied by two quintessentially American privacy sticking points:

  • Can I personally sue a business that violates my privacy rights?
  • Can individual states build additional privacy protections on top of a federal law, or will it act as a nationwide “ceiling”?

These are important questions that must be answered as a function of our country’s unique cultural and political history. But currently they’re the roadblocks that stall American industry while the EU, seatbelts secure, begins speeding down the data autobahn.

If you want a visceral example of how this gap is already impacting American businesses, look no further than the fallout of the ECJ’s Schrems II decision in the middle of last summer. Europe’s highest court invalidated a key agreement used to transfer EU data back to the U.S., essentially because there’s no federal law to ensure EU citizens’ data would be protected once it lands in America.

The legal wrangling continues, but the impact of this decision was so considerable that Facebook legitimately threatened to quit operating in Europe if the Schrems II ruling was enforced.

While issues generated for smaller businesses don’t grab as many headlines, rest assured that on the front lines of this issue, I’ve seen many SMBs’ data operations thrown into total chaos. In other words, the geopolitical battle for a data-driven business edge is already well underway. We are losing.

To sum it up, the United States increasingly finds itself in a position that’s unprecedented since the dawn of the internet era: laggard. American tech companies still innovate at a fantastic rate, but America’s inability to marshal private sector practices to reflect evolving public sentiment threatens to become a yoke around the economy’s neck.

America’s catastrophic response to the COVID-19 pandemic fell far short of other nations’ efforts. Our handling of data privacy protection costs far less in human terms, but it grows astronomically more expensive in dollar terms with every passing day.

The technology exists to treat users respectfully in a cost-effective manner. The public will is there.

The business will is there. The legislative capability is there.

That’s why I believe America’s startup community should demand federal lawmakers follow the recent example of Europe, India, New Zealand, Brazil, South Africa and Canada. They need to introduce federally guaranteed modern data privacy protections as soon as possible.


Understanding Europe’s big push to rewrite the digital rulebook

European Union lawmakers have set out the biggest update of digital regulations for around two decades — likening it to the introduction of traffic lights to highways to bring order to the chaos wrought by increased mobility. Just switch cars for packets of data.

The proposals for a Digital Services Act (DSA) to standardize safety rules for online business, and a Digital Markets Act (DMA), which will put limits on tech giants aimed at boosting competition in the digital markets they dominate, are intended to shape the future of online business for the next two decades — both in Europe and beyond.

The bloc is far ahead of the U.S. on internet regulation. So while the tech giants of today are (mostly) made in the USA, rules that determine how they can and can’t operate in the future are being shaped in Brussels.


The latter part of this year has seen Ursula von der Leyen’s European Commission, which took up its five-year mandate last December, unleash a flotilla of digital proposals — and tease more coming in 2021. The Commission has proposed a Data Governance Act to encourage reuse of industrial (and other) data, with another data regulation and a proposal on political ads transparency slated for next year. European-flavored guardrails for use of AI will also be presented next year.

But it’s the DSA and DMA that are core to understanding how the EU executive body hopes to reshape internet business practices to increase accountability and fairness — and in so doing promote the region’s interests for years to come.

These are themes being seen elsewhere in the world at a national level. The U.K., for example, is bringing forward an “Online Safety Bill” next year in response to public concern about the societal impacts of big tech. Meanwhile, rising interest in tech antitrust has led to Google and Facebook facing charges of abusive business practices on home turf.

What will come faster, a U.S. breakup of a tech empire or effective enforcement of EU rules on internet gatekeepers is an interesting question to ponder. Both are now live possibilities — so entrepreneurs can dare to dream of a different, freer and fairer digital playground. One that’s not ruled over by a handful of abusive giants. Though we’re certainly not there yet.

With the DSA and DMA the EU is proposing an e-commerce and digital markets framework that, once adopted, will apply for its 27 Member States — and the ~445 million people who live there — exerting both a sizable regional pull and seeking to punch up and out at global internet giants.

While there are many challenges ahead to turn the planned framework into pan-EU law, it looks a savvy move by the Commission to separate the DSA and DMA — making it harder for big tech to co-opt the wider industry to lobby against measures that will only affect them in the 160+ pages of proposed legislation now on the table.

It’s also notable that the DSA contains a sliding scale of requirements, with audits, risk assessments and the deepest algorithmic accountability provisions reserved for larger players.

Tech sovereignty — by scaling up Europe’s tech capacity and businesses — is a strategic priority for the Commission. And rule-setting is a key part of how it intends to get there — building on data protection rules that have already been updated, with the GDPR being applied from 2018.

Though what the two new major policy packages will mean for tech companies, startup-sized or market-dominating, won’t be clear for months — or even years. The DSA and DMA have to go through the EU’s typically bruising co-legislative process, looping in representatives of Member States’ governments and directly elected MEPs in the European parliament (which often come to the process with different policy priorities and agendas).

The draft presented this month is thus a starting point. Plenty could shift — or even change radically — through the coming debates and amendments. Which means the lobbying starts in earnest now. The coming months will be crucial to determining who will be the future winners and losers under the new regime so startups will need to work hard to make their voices heard.

While tech giants have been pouring increasing amounts of money into Brussels “whispering” for years, the EU is keen to champion homegrown tech — and most of big tech isn’t that.

A fight is almost certainly brewing to influence the world’s most ambitious digital rulebook — including in key areas like the surveillance-based adtech business models that currently dominate the web (to the detriment of individual rights and pro-privacy innovation). So for those dreaming of a better web there’s plenty to play for.

Early responses to the DSA and DMA show the two warring sides, with U.S.-based tech lobbies blasting the plan to expand internet regulation as “anti-innovation” (and anti-U.S.), while EU rights groups are making positive noises over the draft — albeit, with an ambition to go further and ensure stronger protections for web users.

On the startup side, there’s early relief that key tenets of the EU’s existing e-commerce framework look set to remain untouched, mingled with concern that plans to rein in tech giants may have knock-on impacts — such as on startup exits (and valuations). European founders, whose ability to scale is being directly throttled by big tech’s market muscle, have other reasons to be cheerful about the direction of policy travel.

In short, major shifts are coming and businesses and entrepreneurs would do well to prepare for changing requirements — and to seize new opportunities.

Read on for a breakdown of the key aims and requirements of the DSA and the DMA, and additional discussion on how the policy plan could shape the future of the startup business.

Digital Services Act

The DSA aims to standardize rules for digital services that act as intermediaries by connecting consumers to goods, services and content. It will apply to various types of digital services, including network infrastructure providers (like ISPs); hosting services (like cloud storage providers); and online platforms (like social media and marketplaces) — applying to all that offer services in the EU, regardless of where they’re based.

The existing EU e-Commerce Directive was adopted in the year 2000 so revisiting it to see if core principles are still fit for purpose is important. And the Commission has essentially decided that they are. But it also wants to improve consumer protections and dial up transparency and accountability on services businesses by setting new due diligence obligations — responding to a smorgasbord of concerns around the impact of what’s now being hawked and monetized online (whether hateful content or dangerous/illegal products).

Some EU Member States have also been drafting their own laws (in areas like hate speech) that threaten regulatory fragmentation of the bloc’s single market, giving lawmakers added impetus to come up with harmonized pan-EU rules (hence the DSA being a regulation, not a directive).

The package will introduce obligations aimed at setting rules for how internet businesses respond to illegal stuff (content, services, goods and so on) — including standardized notice and response procedures for swiftly tackling illegal content (an area that’s been managed by a voluntary EU code of conduct on illegal hate speech until now); and a “Know Your Customer” principle for online marketplaces (already a familiar feature in more heavily regulated sectors like fintech) that’s aimed at making it harder for sellers of illegal products to simply respawn within a marketplace under a new name.

There’s also a big push around transparency obligations — with requirements in the proposal for platforms to provide “meaningful” criteria used to target ads (Article 24); and explain the “main parameters” of recommender algorithms (Article 29), as well as requirements to foreground user controls (including at least one “nonprofiling” option).

Here the overarching aim is to increase accountability by ensuring European users can get the information needed to be able to exercise their rights.


Privacy is the new competitive battleground

In November, Californians voted to pass Proposition 24, a ballot measure that imposes new regulations on the collection of data by businesses. As part of the California Privacy Rights Act (CPRA), individuals will now have the right to opt out of the sharing and sale of their personal information, while companies must “reasonably” minimize data collection to protect user privacy.

For companies like Apple, Facebook, Uber and Google, all of which are headquartered in California, these new requirements may seem like a limitation on their existing data collection capabilities.

Looking more closely, it’s a nuanced story: By not only meeting the demands of these new regulations but exceeding them, companies have an opportunity to differentiate themselves from competitors to grow their bottom line, thanks to new technologies that put data privacy in the hands of consumers.

Take Apple, the world’s most valuable tech company, as an example. When Google and Facebook — two of Apple’s largest competitors — were under fire for exploiting customer data, CEO Tim Cook saw an opportunity to turn privacy into a competitive advantage.

The tech giant rolled out a suite of new privacy-maximizing features, including a new Sign In With Apple feature that allows users to securely log in to apps without sharing personal information with the apps’ developers. More recently, the company updated its privacy page to better showcase how its flagship apps are designed with privacy in mind.


This doubling down on privacy took center stage in the company’s marketing campaigns, too, with “Privacy Matters” becoming the central message of its prime-time air spots and its 10,000+ billboards around the world.

And of course, the company could hardly resist taking the occasional jab at its data-hungry competitors:

“The truth is, we could make a ton of money if we monetized our customer — if our customer was our product,” said Cook in an interview with MSNBC. “We’ve elected not to do that.”

Apple’s commitment to privacy not only puts them in a stronger position to comply with new CPRA regulations. It also sends a strong message to an industry that has profited off of customer data, and an even stronger message to consumers: It’s time to respect personal data.

The growing demand for privacy

The prioritization of consumer data privacy comes out of a need to address growing consumer concerns, which have consistently made headlines in recent years. Attention-grabbing stories such as the Cambridge Analytica data privacy scandal, as well as major breaches at companies such as Equifax, have left consumers wondering whom they can trust and how they can protect themselves. And the research is pretty conclusive — consumers want more out of their businesses and governments:

  • Only 52% of consumers feel like they can trust businesses, and only 41% worldwide trust their governments (Edelman).
  • 85% of consumers believe businesses should be doing more to actively protect their data (IBM).
  • 61% of consumers say their fears of having personal data compromised have increased in the last two years (Salesforce).

It’s hard to say exactly how this trust crisis will manifest in the global economy, but we’ve already seen several large boycotts, like the #DeleteFacebook movement, and a staggering 75% of consumers who say they won’t purchase from a company they don’t trust with their data.

And it’s not just Big Tech. From loyalty programs and inventory planning to smart cities and election advertising, it’s hard to overstate the appetite for — and effect of — using data to optimize processes and drive behavioral change.

As we look toward a new data-driven decade, however, we’re starting to realize the cost of this big data arms race: Consumers have lost trust in both the private and public sectors.

Private sector initiatives like Apple’s strengthened commitment to privacy, alongside public policy legislation like the CPRA, have the potential to not only build back consumer trust but to go even further beyond the minimum requirements. Thanks to new technologies like self-sovereign identity, companies can transform their data privacy policies, while cutting costs, reducing fraud and improving customer experiences.

The value of SSI

Self-sovereign identity (or SSI) leverages a thin layer of distributed ledger technology and a dose of very advanced cryptography to enable companies to prove the identities of their customers, without putting privacy at risk.

At its simplest, SSI is a way of giving consumers more control over their personal information. It offers a way for consumers to digitally store and manage personal information, in the form of verifiable credentials, which are issued and signed by a trusted authority (like a government, bank or university) in a way that can never be altered, embellished or manipulated. Consumers can then share this information when, where and with whom they wish as a way of proving things about themselves.

While sharing digital records online is nothing new, SSI changes the game in two fundamental ways:

  1. Organizations can capture the required data, without overcollection. Unlike the physical credentials we carry in our wallets, like driver’s licenses and insurance cards, a digital verifiable credential can be divided into individual attributes, which can be shared separately.

The classic example is walking into a bar and showing the bouncer your driver’s license to verify that you are of legal age. The card reveals the necessary data, but it also includes information that the bar has no business knowing — such as your name and address. With verifiable credentials, we can share proof of age without revealing anything else.

For sensitive cases, self-sovereign identity even allows us to cryptographically prove something about ourselves without revealing the actual data. In this case, we could provide a yes/no answer to whether we are of a legal age, without revealing our date of birth.

For individuals, data minimization represents a great stride forward in privacy. For organizations, it’s a way of avoiding the massive liability of storing and securing excess personally identifiable information.

  2. Correlation becomes much, much harder. While there are those who say privacy is a myth and our data will all be correlated anyway, self-sovereign identity protects us against many of the leading concerns with other digital identity solutions.

For example, if we look at other tools that give us some level of data portability, like single sign-on, there is always a concern that a single player in the middle can track what we do online. There’s a reason those Facebook ads are eerily relevant: They know every site and app we have signed into using our Facebook profile.

With SSI, there’s no one player or centralized registry in the middle. Verifiers (those requesting an identity verification) can verify the authenticity cryptographically, meaning they don’t have to “phone home” to the original credential issuer and the credential issuer has no way of knowing when, where or to whom a credential was shared. No correlatable signatures are shared, and your digital identity is truly under your control and for your eyes only. (A minimal code sketch of these two ideas follows below.)
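Here is that minimal sketch, in Python: per-attribute selective disclosure via salted hash commitments, plus offline verification against an issuer signature. It is a simplification for illustration only. Production SSI stacks use signature schemes such as BBS+ or CL so that even the signature itself can’t be used to correlate presentations.

```python
import hashlib, json, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey  # pip install cryptography

def commit(name, value, salt: bytes) -> str:
    # Salted hash commitment to one attribute: reveals nothing until opened.
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()

# Issuer (say, a DMV) signs per-attribute commitments, not the raw data.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()  # published somewhere verifiers can find it
attrs = {"name": "Jane Doe", "address": "1 Main St", "over_21": True}
salts = {k: os.urandom(16) for k in attrs}
commitments = sorted(commit(k, v, salts[k]) for k, v in attrs.items())
credential_sig = issuer_key.sign(json.dumps(commitments).encode())

# Holder shows the bar only the over_21 attribute, plus its salt.
presentation = {
    "revealed": ("over_21", True),
    "salt": salts["over_21"],
    "commitments": commitments,
    "signature": credential_sig,
}

# Verifier checks entirely offline: no "phone home" to the issuer.
issuer_pub.verify(presentation["signature"],
                  json.dumps(presentation["commitments"]).encode())  # raises if forged
name, value = presentation["revealed"]
assert commit(name, value, presentation["salt"]) in presentation["commitments"]
print("age verified without revealing name, address or date of birth")
```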

As a result, the consumer benefits from better privacy and security, while businesses benefit from:

  • Reduced fraud, with better, more accurate data verification at the time of account creation.
  • Reduced friction, with a dramatically faster sign-up process.
  • Reduced costs, both from time savings and from smarter KYC compliance (which normally costs large banks $500 million+ each year).
  • Increased efficiency, with less back-and-forth verifying third-party data.
  • Better customer experiences, with the ability to create a personalized, omnichannel customer experience without data harvesting.

And it’s not science fiction, either. Several major governments, businesses and NGOs have already launched self-sovereign solutions. These include financial institutions like UNIFY, Desert Financial and TruWest, healthcare organizations like Providence Health and the NHS, and telecom and travel giants like LG and the International Air Transport Association.

It’s not clear how soon the technology will become ubiquitous, but it is clear that privacy is quickly emerging as the next competitive battleground. Newly passed regulations like CPRA codify the measures companies need to take, but it’s consumer expectations that will drive long-term shifts within the companies themselves.

For those ahead of the curve, there will be significant cost savings and growth — especially as customers start to shift their loyalty toward those businesses that respect and protect their privacy. For everyone else, it will be a major wake-up call as consumers demand to take back their data.


Apple launches its new app privacy labels across all its App Stores

At Apple’s Worldwide Developers Conference in June, the company announced it would soon require developers to disclose their app’s privacy practices to customers via new, glanceable summaries that appear on their apps’ product pages on the App Store. Today, these new app privacy labels are going live across all of Apple’s App Stores, including iOS, iPadOS, macOS, watchOS and tvOS.

On the developers’ side, Apple began requiring developers to submit their privacy practices with the submission of new apps and app updates. However, it hadn’t begun to publish this information on the App Stores until today.

The new labels aim to give Apple customers an easier way to understand what sort of information an app collects across three categories: data used to track you, data linked to you and data not linked to you. Tracking, Apple explains, refers to the act of linking either user or device data collected from an app with user or device data collected from other apps, websites or even offline properties (like data aggregated from retail receipts) that’s used for targeted advertising or advertisement measurement. It can also include sharing user or device data with data brokers.

This aspect alone will expose the industry of third-party adtech and analytics SDKs (software development kits) — basically code from external vendors that developers add to their apps to boost their revenues.

Meanwhile, “data linked to you” is the personal information tied to your identity, through your user account on the app, your device or other details.


Broken down, there are a number of data types apps may collect on their users, including things like personal contact information (e.g. address, email, phone, etc.); health and fitness information (e.g. from the Clinical Health Records API, HealthKit API, MovementDisorderAPIs or health-related human subject research); financial information (e.g. payment and credit info); location (either precise or coarse); contacts; user content (e.g. emails, audio, texts, gameplay, customer support, etc.); browsing and search histories; purchases; identifiers like user or device IDs; usage and diagnostic info; and more.

Developers are expected to understand not only what data their app may collect, but also how it’s ultimately used.

For example, if an app shares user data with a third-party partner, the developer will need to know what data that partner uses and for what purposes — like displaying targeted ads in the app, sharing location data or email lists with a data broker, using data for retargeting users in other apps or measuring ad efficiencies. And while the developer will need to disclose when they’re collecting data from Apple frameworks or services, they aren’t responsible for disclosing data collected by Apple itself.

There are a few exceptions to the new disclosure requirements, including data collected in optional feedback forms or customer service requests. But, in general, almost any data an app collects has to be disclosed. Even Apple’s own apps that aren’t offered on the App Store will have their privacy labels published on the web.

Apps will also be required to include a link to their publicly accessible privacy policy and can optionally now include a link to a page explaining their privacy choices in more detail. For example, they could link to a page where users can manage their data for the app or request deletion.

The privacy information itself is presented on a screen in the app’s product listing page in easy-to-read tabs that explain what data is collected across the different categories, starting with “data used to track you.”

Apple says it will not remove apps from the App Store if they don’t include this privacy information, but it’s no longer allowing apps to update until their privacy information is listed. That means, eventually, all apps that haven’t been abandoned will include these details.

Apple’s decision to implement privacy labels is a big win for consumer privacy and could establish a new baseline for how app stores disclose data.

However, they also arrive at a time when Apple is pushing its own adtech agenda under the banner of being a privacy-forward company. The company is forcing the adtech industry to shift from the IDFA identifier to its own SKAdNetwork — a shakeup that’s been controversial enough for Apple to delay the transition from 2020 to 2021. The decision to delay may have been, as Apple stated, to give marketers, panicked about the sizable revenue hit, time to adapt. But Apple is, of course, keenly aware that regulators were weighing whether the App Store was behaving in anticompetitive ways toward third parties.

Facebook, for example, had warned businesses they would see a 50% drop in Audience Network revenue on iOS as a result of the changes that would remove personalization from mobile app ad install campaigns.

Apple, in the meantime, took some of the regulatory heat off itself by reducing its App Store commissions to 15% for developers making less than $1 million.

As all these consumer privacy changes are underway, Apple itself continues to use its customer data to personalize ads in its own apps, including the App Store and Apple News. These settings, which are enabled by default, can be toggled off in the iPhone’s Settings. App publishers, on the other hand, will soon have to ask permission from users to track them. And Apple now runs plenty of other services it could expand ads to in the future, if it chose.

It will be interesting to see how consumers react to these new privacy labels as they go live. Apps that collect too much data may find their downloads are impacted, as wary users pass them over. Or, consumers may end up ignoring the labels — much as they do the other policies and terms they “agree” to when installing new software.

Details about Apple’s privacy practices were also published today on a new website, Apple.com/privacy, which includes not only the changes to the App Store, but lists all other areas where Apple protects consumer privacy.


Ring to offer opt-in end-to-end encryption for videos beginning later this year

Ring will be stepping up its efforts to make its security products more secure for users by enabling end-to-end video encryption later this year. The company will be providing this toggle in a new page in its app’s Control Center, which will provide more information about Ring’s current encryption practices, and measures to keep user video secure, until the end-to-end encryption feature goes live. Ring is also taking the covers off a range of new devices today — including its first drone — but Ring CEO and founder Jamie Siminoff says that this new security measure could actually make the biggest difference to its customers.

“[End-to-end encryption] could be our most important product that we’re sort of putting out there, because security and privacy, and user control are foundational to Ring, and continuing to push those further than even the industry, and really even pushing the rest of the industry, is something I think that we have a responsibility to do.”

Siminoff also points to Ring’s introduction of mandatory two-factor authentication earlier this year as something that’s above and beyond the standard across the industry. I asked him why not make end-to-end encryption for video on by default, with an opt-out option instead if users feel strongly that they don’t want to take part.

“Privacy, as you know, is really individualized – we see people have different needs,” he said. “Just one example for end-to-end is that when you enable it, you cannot use your Alexa to say ‘Show me who’s at the front door,’ because of the physics of locking down to an end-to-end key. As soon as you do something like that, it would actually break what you’re trying to achieve. So it really is something that is optional, because it doesn’t fit every user in terms of the way in which they want to use the product. But there are some users that really do want this type of security – so I think what you’re going to see from us in the future, and I hope the industry as well, is just really allowing people to dial in the security that they want, and having transparency, which is also with the Video Control Center that we’ve launched today to provide you with the knowledge of what’s happening with your data, in this case with Ring videos.”
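As a toy illustration of the tradeoff Siminoff describes: once video is sealed with a key held only on the owner’s devices, server-side features like Alexa previews have nothing they can decrypt. (Symmetric Fernet encryption is used here purely for illustration; it is not Ring’s actual scheme.)

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept on the owner's enrolled devices only.
device_key = Fernet.generate_key()
clip = Fernet(device_key).encrypt(b"front-door video frame")

# Ring's cloud -- and integrations like Alexa -- would hold only `clip`.
# Without `device_key` they cannot decrypt it, so "Show me who's at the
# front door" can't be rendered server-side. The owner's app, however, can:
print(Fernet(device_key).decrypt(clip))  # b'front-door video frame'
```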

Overall, Siminoff said that the company hopes, through all of its products, to enable its users to build the system that they want to use, the way that they want to use it. The Always Home Cam drone, he points out, is another expression of that, since it provides the potential to monitor every room in your home – but also the ability to be selective about when and where.

“I think it’s just about building the options to allow people to use technology – but use it comfortably, understand it, and control it,” he said.


Facebook trails expanding portability tools ahead of FTC hearing

Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.

In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.

It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.

Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.

It suggests expanding portability in such ways could help content creators build their brands on other platforms or help event organizers by enabling them to track Facebook events using calendar based tools.

However there are no firm commitments from Facebook to any specific portability product launches or expansions of what it offers currently.

For now the tech giant only lets Facebook users directly send copies of their photos to Google’s eponymous photo storage service — a transfer tool it switched on for all users this June.

“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.

On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”

Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.

But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.

The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.

The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.

In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.

Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — called the Access Act, which is sponsored by Democratic Senators Richard Blumenthal and Mark Warner, and Republican Senator Josh Hawley — and which would require large tech platforms to let their users easily move their data to other services.

Madhani dubbed it a good first step, albeit adding that the company will continue to engage with the lawmakers on shaping its contents.

“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.

Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.

In recent months Mark Zuckerberg has also put in facetime to lobby EU lawmakers on the topic, as they work on updating regulations around digital services.

The Facebook founder pushed the European Commission to narrow the types of data that should fall under portability rules. In the public discussion with commissioner Thierry Breton, in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.

Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.

Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.

“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.

Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.

On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.

“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.

Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”

In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.

Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.

“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.

However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.

EU websites’ use of Google Analytics and Facebook Connect targeted by post-Schrems II privacy complaints

A month after Europe’s top court struck down a flagship data transfer arrangement between the EU and the US as unsafe, European privacy campaign group noyb has filed complaints against 101 websites run by regional operators which it has identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations.

Among the entities listed in its complaint are ecommerce companies, publishers & broadcasters, telcos & ISPs, banks and universities — including Airbnb Ireland, Allied Irish Banks, Danske Bank, Fastweb, MTV Internet, Sky Deutschland, Takeaway.com and Tele2, to name a few.

“A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) — despite both companies clearly falling under US surveillance laws, such as FISA 702,” the campaign group writes on its website.

“Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the ‘Privacy Shield’ a month after it was invalidated, while Facebook continues to use the ‘SCCs’ [Standard Contractual Clauses], despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.”

We’ve reached out to Facebook and Google with questions about their legal bases for such transfers — and will update this report with any response.

Privacy watchers will know that noyb’s founder, Max Schrems, was responsible for the original legal challenge that took down an earlier EU-US data arrangement, Safe Harbor, all the way back in 2015. His updated complaint ended up taking down the EU-US Privacy Shield last month — although he’d actually targeted Facebook’s use of a separate data transfer mechanism (SCCs), urging the company’s lead data supervisor, Ireland’s DPC, to step in and suspend its use of that tool.

The regulator chose to go to court instead, raising wider concerns about the legality of EU-US data transfer arrangements — which resulted in the CJEU concluding that the Commission should not have granted the US a so-called ‘adequacy decision’, thus pulling the rug out from under Privacy Shield.

The decision means the US is now what’s considered a ‘third country’ in data protection terms, with no special arrangement to enable it to process EU users’ information.

More than that, the court’s ruling also made it clear EU data watchdogs have a responsibility to intervene where they suspect there are risks to EU people’s data if it’s being transferred to a third country via SCCs.

European data watchdogs swiftly warned there would be no grace period for entities still illegally relying on Privacy Shield — so anyone listed in the above complaint that’s still referencing the defunct mechanism in their privacy policy won’t even have a proverbial figleaf to hide their legal blushes.

noyb’s contention with this latest clutch of complaints is that none of the aforementioned 101 websites has a valid legal basis to keep transferring visitor data to the US via the embedded Google Analytics and/or Facebook Connect integrations.

“We have done a quick search on major websites in each EU member state for code from Facebook and Google. These code snippets forward data on each visitor to Google or Facebook. Both companies admit that they transfer data of Europeans to the US for processing, where these companies are under a legal obligation to make such data available to US agencies like the NSA. Neither Google Analytics nor Facebook Connect are essential to run these webpages and are services that could have been replaced or at least deactivated by now,” said Schrems, honorary chair of noyb.eu, in a statement.
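noyb hasn’t published its tooling, but the kind of “quick search” Schrems describes can be approximated with a simple static scan for the well-known tracker script hosts. In the sketch below the target URL is a placeholder, and dynamically injected tags would not show up in a static scan like this:

```python
# Rough approximation of the kind of "quick search" noyb describes:
# fetch a page and look for embedded tracker loader snippets in the raw
# HTML. The domains are the well-known script hosts; the target URL is
# a placeholder. Dynamically injected tags won't appear in a static scan.
import re
import urllib.request

TRACKERS = {
    "Google Analytics": re.compile(r"google-analytics\.com|googletagmanager\.com"),
    "Facebook Connect": re.compile(r"connect\.facebook\.net"),
}

def scan(url: str) -> list:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return [name for name, pattern in TRACKERS.items() if pattern.search(html)]

print(scan("https://example.com"))  # [] for a tracker-free page
```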

Since the CJEU’s Schrems II ruling, and indeed since the Safe Harbor strike down, the US Department of Commerce and European Commission have stuck their heads in the sand — signalling they intend to try cobbling together another data pact to replace the defunct Privacy Shield (which itself replaced the blasted-to-smithereens (un)Safe Harbor; so, er…).

Yet without root-and-branch reform of US surveillance law, any third pop by respective lawmakers at papering over the legal schism of US national security priorities vs EU privacy rights is just as surely doomed to fail.

The more cynical among you might say the high-level administrative manoeuvres around this topic are, in fact, simply intended to buy more time — for the data to keep flowing and ‘business as usual’ to continue.

But there is now substantial legal risk attached to a strategy of trying to pretend US surveillance law doesn’t exist.

Here’s Schrems again, on last month’s CJEU ruling, suggesting that Facebook and Google could be in the frame for legal liability if they don’t proactively warn EU customers of their data responsibilities: “The Court was explicit that you cannot use the SCCs when the recipient in the US falls under these mass surveillance laws. It seems US companies are still trying to convince their EU customers of the opposite. This is more than shady. Under the SCCs the US data importer would instead have to inform the EU data sender of these laws and warn them. If this is not done, then these US companies are actually liable for any financial damage caused.”

And as noyb’s press release notes, GDPR’s penalties regime can scale as high as 4% of the worldwide turnover of the EU sender and the US recipient of personal data. So, again, hi Facebook, hi Google…
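For scale: the GDPR’s upper fine band (Article 83(5)) is the greater of €20M or 4% of total worldwide annual turnover. A toy calculation, using an illustrative turnover figure rather than either company’s actual revenue:

```python
# Toy illustration of GDPR's upper fine band (Art. 83(5)): the greater
# of EUR 20M or 4% of total worldwide annual turnover. The turnover
# figure below is illustrative, not any specific company's revenue.
def max_gdpr_fine(worldwide_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_turnover_eur)

print(f"EUR {max_gdpr_fine(70_000_000_000):,.0f}")  # EUR 2,800,000,000
```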

The crowdfunded campaign group has pledged to continue dialling up the pressure on EU regulators to act and on EU data processors to review any US data transfer arrangements — and “adapt to the clear ruling by the EU’s supreme court”, as it puts it.

Other types of legal action are also starting to draw on Europe’s General Data Protection Regulation (GDPR) framework — and, importantly, attract funding — such as two class action style suits filed against Oracle and Salesforce’s use of tracking cookies earlier this month. (As we said when GDPR came into force back in 2018, the lawsuits are coming.)

Now, with two clear strikes from the CJEU on the issue of US surveillance law vs EU data protection, it looks like it’ll be diminishing returns for US tech giants hoping to pretend everything’s okay on the data processing front.

noyb is also putting its money where its mouth is — offering free guidelines and model requests for EU entities to use to help them get their data affairs in prompt legal order. 

“While we understand that some things may need some time to rearrange, it is unacceptable that some players seem to simply ignore Europe’s top court,” Schrems added, in further comments on the latest flotilla of complaints. “This is also unfair towards competitors that comply with these rules. We will gradually take steps against controllers and processors that violate the GDPR and against authorities that do not enforce the Court’s ruling, like the Irish DPC that stays dormant.”

We’ve reached out to Ireland’s Data Protection Commission to ask what steps it will be taking in light of the latest noyb complaints, a number of which target websites that appear to be operated by an Ireland-based legal entity.

Schrems’ original 2013 complaint against Facebook’s use of SCCs also ended up in Ireland, where the tech giant — and many others — locates its EU HQ. Schrems’ request that the DPC order Facebook to suspend its use of SCCs still hasn’t been fulfilled, some seven years and five complaints later. And the regulator continues to face accusations of inaction, given the growing backlog of cross-border GDPR complaints against tech giants like Facebook and Google.

Ireland’s DPC has yet to issue a single final decision on any of these major GDPR complaints. But the legal pressure on it and all EU regulators to get a move on and enforce the bloc’s law will only increase, even as class action style lawsuits are filed to try to do what regulators have failed to do.

Earlier this summer the Commission acknowledged a lack of uniformly “vigorous” enforcement of GDPR in a review of the mechanism’s first two years of operation.

“The European Data Protection Board [EDPB] and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement,” said Věra Jourová, Commission VP for values and transparency then, giving the Commission’s first public assessment of whether GDPR is working.

We’ve also reached out to France’s CNIL to ask what action it will be taking in light of the noyb complaints.

Following the judgement in July the French regulator said it was “conducting a precise analysis”, along with the EDPB, with a view to “drawing conclusions as soon as possible on the consequences of the ruling for data transfers from the European Union to the United States”.

Since then the EDPB guidance has come out — putting the obvious in ink: that transfers on the basis of Privacy Shield “are illegal”. And while the CJEU ruling did not invalidate the use of SCCs, it gave only a very qualified green light to their continued use.

As we reported last month, the ability to use SCCs to transfer data to the U.S. hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.

“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place,” the EDPB added.

Oracle and Salesforce hit with GDPR class action lawsuits over cookie tracking consent

The use of third party cookies for ad tracking and targeting by data broker giants Oracle and Salesforce is the focus of class action style litigation announced today in the UK and the Netherlands.

The suits will argue that mass surveillance of Internet users to carry out real-time bidding ad auctions cannot possibly be compatible with strict EU laws around consent to process personal data.

The litigants believe the collective claims could exceed €10BN, should they eventually prevail in their arguments — though such legal actions can take several years to work their way through the courts.

In the UK, the case may also face some legal hurdles given the lack of an established model for pursuing collective damages in cases relating to data rights. Though there are signs that’s changing.

Non-profit foundation, The Privacy Collective, has filed one case today with the District Court of Amsterdam, accusing the two data broker giants of breaching the EU’s General Data Protection Regulation (GDPR) in their processing and sharing of people’s information via third party tracking cookies and other adtech methods.

The Dutch case, which is being led by law-firm bureau Brandeis, is the biggest-ever class action in The Netherlands related to violation of the GDPR — with the claimant foundation representing the interests of all Dutch citizens whose personal data has been used without their consent and knowledge by Oracle and Salesforce. 

A similar case is due to be filed later this month at the High Court in London, England, which will make reference to the GDPR and the UK’s PECR (Privacy and Electronic Communications Regulations) — the latter governing the use of personal data for marketing communications. The case there is being led by law firm Cadwalader.

Under GDPR, consent for processing EU citizens’ personal data must be informed, specific and freely given. The regulation also confers rights on individuals around their data — such as the ability to receive a copy of their personal information.

It’s those requirements the litigation is focused on, with the cases set to argue that the tech giants’ third party tracking cookies, BlueKai and Krux — trackers that are hosted on scores of popular websites, such as Amazon, Booking.com, Dropbox, Reddit and Spotify to name a few — along with a number of other tracking techniques are being used to misuse Europeans’ data on a massive scale.

Per Oracle marketing materials, its Data Cloud and BlueKai Marketplace provide partners with access to some 2BN global consumer profiles. (Meanwhile, as we reported in June, BlueKai suffered a data breach that exposed billions of those records to the open web.)

While Salesforce claims its marketing cloud ‘interacts’ with more than 3BN browsers and devices monthly.

Both companies have grown their tracking and targeting capabilities via acquisition: Oracle bagged BlueKai in 2014, and Salesforce snaffled Krux in 2016.
Discussing the lawsuit in a telephone call with TechCrunch, Dr Rebecca Rumbul, class representative and claimant in England & Wales, said: “There is, I think, no way that any normal person can really give informed consent to the way in which their data is going to be processed by the cookies that have been placed by Oracle and Salesforce.

“When you start digging into it there are numerous, fairly pernicious ways in which these cookies can and probably do operate — such as cookie syncing, and the aggregation of personal data — so there’s really, really serious privacy concerns there.”

The real-time bidding (RTB) process fed by the pair’s tracking cookies and techniques enables the background, high velocity trading of profiles of individual web users as they browse, in order to run dynamic ad auctions and serve behavioral ads targeting their interests. In recent years it has been subject to a number of GDPR complaints, including in the UK.

These complaints argue that RTB’s handling of people’s information is a breach of the regulation because it’s inherently insecure to broadcast data to so many other entities — while, conversely, GDPR bakes in a requirement for privacy by design and default.
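To see what “broadcast” means in practice, here is a heavily abridged sketch of a bid request, loosely modeled on the industry’s OpenRTB format, with all values invented. The point is how much per-user context rides along with every ad impression:

```python
# Minimal sketch of the kind of bid request broadcast during RTB,
# loosely modeled on the OpenRTB format (fields abridged, all values
# invented). Note how much per-user context travels with each request.
bid_request = {
    "id": "auction-8f3a",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {"page": "https://example-news-site.com/article"},
    "device": {
        "ua": "Mozilla/5.0 ...",   # browser fingerprint material
        "ip": "203.0.113.7",       # approximate location
        "geo": {"country": "NLD"},
    },
    "user": {
        "id": "c9a6d2...",         # cookie-synced user identifier
        "data": [{"segment": [{"id": "auto-intender"}]}],  # inferred interests
    },
}
# In a live auction this payload is sent to dozens or hundreds of
# bidders in the milliseconds before the page's ad slot is filled.
```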

The UK Information Commissioner’s Office has, meanwhile, accepted for well over a year that adtech has a lawfulness problem. But the regulator has so far sat on its hands, instead of enforcing the law — leaving the complainants dangling. (Last year, Ireland’s DPC opened a formal investigation of Google’s adtech, following a similar complaint, but has yet to issue a single GDPR decision in a cross-border complaint — leading to concerns of an enforcement bottleneck.)

The two lawsuits targeting RTB aren’t focused on the security allegation, per Rumbul, but are mostly concerned with consent and data access rights.

She confirms they opted to litigate rather than try a regulatory complaint route as a way of exercising their rights, given the “David vs Goliath” nature of bringing claims against the tech giants in question.

“If I was just one tiny person trying to complain to Oracle and trying to use the UK Information Commissioner to achieve that… they simply do not have the resources to direct at one complaint from one person against a company like Oracle — in terms of this kind of scale,” Rumbul told TechCrunch.

“In terms of being able to demonstrate harm, that’s quite a lot of work and what you get back in recompense would probably be quite small. It certainly wouldn’t compensate me for the time I would spend on it… Whereas doing it as a representative class action I can represent everyone in the UK that has been affected by this.

“The sums of money then work — in terms of the depths of Oracle’s pockets, the costs of litigation, which are enormous, and the fact that, hopefully, doing it this way, in a very large-scale, very public forum it’s not just about getting money back at the end of it; it’s about trying to achieve more standardized change in the industry.”

“If Salesforce and Oracle are not successful in fighting this then hopefully that sends out ripples across the adtech industry as a whole — encouraging those that are using these quite pernicious cookies to change their behaviours,” she added.

The litigation is being funded by Innsworth, a litigation funder which is also funding Walter Merricks’ class action for 46 million consumers against Mastercard in London courts. And the GDPR appears to be helping to change the class action landscape in the UK — as it allows individuals to take private legal action. The framework can also support third parties to bring claims for redress on behalf of individuals. While changes to domestic consumer rights law also appear to be driving class actions.

Commenting in a statement, Ian Garrard, managing director of Innsworth Advisors, said: “The development of class action regimes in the UK and the availability of collective redress in the EU/EEA mean Innsworth can put money to work enabling access to justice for millions of individuals whose personal data has been misused.”

A separate and still ongoing lawsuit in the UK, which is seeking damages from Google on behalf of Safari users whose privacy settings it historically ignored, also looks to have bolstered the prospects of class action style legal actions related to data issues.

While the courts initially tossed the suit last year, the appeals court overturned that ruling — rejecting Google’s argument that UK and EU law requires “proof of causation and consequential damage” in order to bring a claim related to loss of control of data.

The judge said the claimant did not need to prove “pecuniary loss or distress” to recover damages, and also allowed the class to proceed without all the members having the same interest.

Discussing that case, Rumbul suggests a pending final judgement there (likely next year) may have a bearing on whether the lawsuit she’s involved with can be taken forward in the UK.

“I’m very much hoping that the UK judiciary are open to seeing these kind of cases come forward because without these kinds of things as very large class actions it’s almost like closing the door on this whole sphere of litigation. If there’s a legal ruling that says that case can’t go forward and therefore this case can’t go forward I’d be fascinated to understand how the judiciary think we’d have any recourse to these private companies for these kind of actions,” she said.

Asked why the litigation has focused on Oracle and Salesforce, given there are so many firms involved in the adtech pipeline, she said: “I am not saying that they are necessarily the worst or the only companies that are doing this. They are however huge, huge international multimillion-billion dollar companies. And they specifically went out and purchased different bits of adtech software, like BlueKai, in order to bolster their presence in this area — to bolster their own profits.

“This was a strategic business decision that they made to move into this space and become massive players. So in terms of the adtech marketplace they are very, very big players. If they are able to be held to account for this then it will hopefully change the industry as a whole. It will hopefully reduce the places to hide for the other more pernicious cookie manufacturers out there. And obviously they have huge, huge revenues so in terms of targeting people who are doing a lot of harm and that can afford to compensate people these are the right companies to be targeting.”

Rumbul also told us The Privacy Collective is looking to collect stories from web users who feel they have experienced harm related to online tracking.

“There’s plenty of evidence out there to show that how these cookies work means you can have very, very egregious outcomes for people at an individual level,” she added. “Whether that can be related to personal finance, to manipulation of addictive behaviors, whatever, these are all very, very possible — and they cover every aspect of our lives.”

Consumers in England and Wales and the Netherlands are being encouraged to register their support of the actions via The Privacy Collective’s website.

In a statement, Christiaan Alberdingk Thijm, lead lawyer at Brandeis, said: “Your data is being sold off in real-time to the highest bidder, in a flagrant violation of EU data protection regulations. This ad-targeting technology is insidious in that most people are unaware of its impact or the violations of privacy and data rights it entails. Within this adtech environment, Oracle and Salesforce perform activities which violate European privacy rules on a daily basis, but this is the first time they are being held to account. These cases will draw attention to astronomical profits being made from people’s personal information, and the risks to individuals and society of this lack of accountability.”

“Thousands of organisations are processing billions of bid requests each week with at best inconsistent application of adequate technical and organisational measures to secure the data, and with little or no consideration as to the requirements of data protection law about international transfers of personal data. The GDPR gives us the tool to assert individuals’ rights. The class action means we can aggregate the harm done,” added partner Melis Acuner from Cadwalader in another supporting statement.

We reached out to Oracle and Salesforce for comment on the litigation.

Oracle EVP and general counsel, Dorian Daley, said:

The Privacy Collective knowingly filed a meritless action based on deliberate misrepresentations of the facts. As Oracle previously informed the Privacy Collective, Oracle has no direct role in the real-time bidding process (RTB), has a minimal data footprint in the EU, and has a comprehensive GDPR compliance program. Despite Oracle’s fulsome explanation, the Privacy Collective has decided to pursue its shake-down through litigation filed in bad faith. Oracle will vigorously defend against these baseless claims.

A spokeswoman for Salesforce sent us this statement:

At Salesforce, Trust is our #1 value and nothing is more important to us than the privacy and security of our corporate customers’ data. We design and build our services with privacy at the forefront, providing our corporate customers with tools to help them comply with their own obligations under applicable privacy laws — including the EU GDPR — to preserve the privacy rights of their own customers.

Salesforce and another Data Management Platform provider have received a privacy-related complaint from a Dutch group called The Privacy Collective. The claim applies to the Salesforce Audience Studio service and does not relate to any other Salesforce service.

Salesforce disagrees with the allegations and intends to demonstrate they are without merit.

Our comprehensive privacy program provides tools to help our customers preserve the privacy rights of their own customers. To read more about the tools we provide our corporate customers and our commitment to privacy, visit salesforce.com/privacy/products/

EU-US Privacy Shield is dead. Long live Privacy Shield

As the saying goes, insanity is doing the same thing over and over again and expecting different results.

And so we arrive at the news, put out yesterday in the horse latitudes of summer via joint press statement, that the EU’s executive body and the US Department of Commerce have begun talks toward fashioning a shiny new papier-mâché ‘Privacy Shield’.

“The U.S. Department of Commerce and the European Commission have initiated discussions to evaluate the potential for an enhanced EU-U.S. Privacy Shield framework to comply with the July 16 judgment of the Court of Justice of the European Union in the Schrems II case,” the pair write.

The EU-US Privacy Shield, as you may recall, refers to the four-year-old data transfer mechanism which Europe’s top court just sunk with the legal equivalent of a nuclear bomb.

Five years ago the same court carpet-bombed its predecessor, a fifteen-year-old arrangement known — without apparent irony — as ‘Safe Harbor’.

Thousands of companies had been signed up to the Privacy Shield, relying on the claimed legal protection to authorize transatlantic transfers of EU users’ data. The mirage collapsed on cue last month, raising legal questions over continued use of cloud services based in a third country like the US — barring data localization.

Alternative data transfer mechanisms do exist but data controllers wanting to use an alternative tool, like Standard Contractual Clauses (SCCs), to take EU citizens’ data over the pond are legally required to carry out an assessment of whether US law provides adequate protections. If they cannot guarantee the data’s safety they cannot use SCCs legally either. (And if they go ahead they are risking costly regulatory intervention.)

The fall of Privacy Shield should really have shocked no one, given the warnings, right from the get-go, that it amounted to ‘lipstick on a pig’. Nothing has changed the fundamental problems identified by the Court of Justice of the EU in 2015 — so carrying on doing bulk data transfers to the US was headed for the same legal slapdown.

The basic problem is the mechanism failed to do what’s claimed on the tin. Which is to say EU people’s personal data is not safe as houses over there because US government security agencies have their hands in tech platforms’ cookie jars (and all the other jars and tubes of the modern Internet), as the 2013 Snowden revelations illustrated beyond doubt.

Nothing since the Snowden disclosures has substantially reworked US surveillance law to make it less incompatible with EU privacy law. President Obama made a few encouraging noises but under Trump the administration has dug in on helping itself to people’s data without a warrant. So it’s closer to a funnel than a shield.

Turns out neither a ‘Shield’ nor a ‘Harbor’ were metaphors grand enough to paper over this fundamental clash of legal priorities, when a regional trading bloc with long standing laws that protect privacy butts up against an alien regime that rubberstamps digital intrusion on national security grounds, with zero concern for privacy.

And so we arrive at the prospect of a new, papier-mâché ‘Privacy Shield II(I)’ — which looks to be the most appropriate metaphor for this latest round of EU-US ‘negotiations’ aimed at cobbling something together to buy more time for data to keep flowing. Bottom line: Even if Commission and US negotiators ink something on paper any claimed legal protections will, without root and branch reform of US surveillance law, sum to another sham headed for a speedy demolition day in court. 

It’s also worth noting that Europe’s judges are likely to step on the gas in this respect, with Privacy Shield standing for just a fraction of the time Safe Harbor hung around. So any Privacy Shield II (III if you count Safe Harbor) would likely get even shorter shrift. 

Not that legal reality and legal clarity are preventing fuzzy soundbites from being despatched from both sides of the Atlantic, of course.

“The European Union and the United States recognize the vital importance of data protection and the significance of cross-border data transfers to our citizens and economies. We share a commitment to privacy and the rule of law, and to further deepening our economic relationship, and have collaborated on these matters for several decades,” the pair write in a fresh attempt to re-spin a legal car crash disaster that everyone could see coming, years ahead.

“As we face new challenges together, including the recovery of the global economy after the COVID-19 pandemic, our partnership will strengthen data protection and promote greater prosperity for our nearly 800 million citizens on both sides of the Atlantic.”

There’s no doubting the appetite the Commission and the US Department of Commerce share for data to keep flowing. Both prioritize ‘business as usual’ and lionize their notion of “prosperity”, to the degree that they’re willing to turn a blind eye to rights impacts.

However neither side has demonstrated that it possesses the political clout and influence to remake the US’ data industrial complex — which is what’s needed to meaningfully ‘enhance’ Privacy Shield. Instead, we get publicity for their next pantomime.

We’ve reached out to the Commission with questions, lots of questions.

Let’s close the gap and finally pass a federal data privacy law

My college economics professor, Dr. Charles Britton, often said, “There’s no such thing as a free lunch.” The common principle, known as TINSTAAFL, implies that even if something appears to be free, there is always a cost to someone, even if it is not the individual receiving the benefit.

For decades, the ad-supported ecosystem enjoyed much more than a proverbial free lunch. Brands, technology providers, publishers and platforms successfully transformed data provided by individuals into massive revenue gains, creating some of the world’s most profitable corporations. So if TINSTAAFL is correct, what is the true cost of monetizing this data? Consumer trust, as it turns out.

Studies overwhelmingly demonstrate that the majority of people believe data collection and data use lack the necessary transparency and control. After a few highly publicized data breaches brought a spotlight on the lack of appropriate governance and regulation, people began to voice concerns that companies had operated with too little oversight for far too long, and unfairly benefited from the data individuals provided.

With increased attention, momentum and legislative activity in multiple individual states, we have never been in a better position to pass a federal data privacy law that can rebalance the system and set standards that rebuild trust with the people providing the data.

Over the last two decades, we’ve seen that individuals benefit from regulated use of data. The competitiveness of the banking markets is partly a result of laws around the collection and use of data for credit decisions. In exchange for data collection and use, individuals now have the ability to go online and get a home loan or buy a car with instant credit. A federal law would strengthen the value exchange and provide rules for companies around the collection and utilization of data, as well as establish consistency and uniformity, which can create a truly national market.

In order to close the gap and pass a law that properly balances the interests of people, society and commerce, the business sector must first unify on the need and the current political reality. Most already agree that a federal law should be preemptive of state laws, and many voices with legitimate differences of opinion have come a long way toward a consensus. Further unification on the following three assertions could help achieve bipartisan support:

A federal law must recognize that one size does not fit all. While some common sense privacy accountability requirements should be universal, a blanket approach for accountability practices is unrealistic. Larger enterprises with significant amounts of data on hand should have stricter requirements than other entities and be required to appoint a Data Ethics Officer and document privacy compliance processes and privacy reviews.

They should be required to regularly perform internal and external audits of data collection and use. These audits should be officer-certified and filed with a regulator. While larger companies are equipped to absorb this burden, smaller businesses should not be forced, by the imposition of the same standards, to forgo using the data they need to innovate and thrive. Instead, requirements for accountability should be “right-sized,” based on the amount and type of data collected and its intended use.

A federal law must properly empower the designated regulatory authority. The stated mission of the Federal Trade Commission is to protect American consumers. As the government agency of record for data privacy regulation and enforcement, the FTC has already imposed billions of dollars in penalties for privacy violations. However, in a modern world where every company collects and uses data, the FTC cannot credibly monitor or enforce federal regulation without substantially increasing funding and staffing.

With increased authority, equipped with skilled teams to diligently monitor those companies with the most consumer data, the FTC — with State Attorneys General designated as back-ups — can hold them accountable by imposing meaningful remedial actions and fines.

A federal law must acknowledge that properly crafted private right-to-action is appropriate and necessary. The earlier points build an effective foundation for the protection of people’s privacy rights, but there will still be situations where a person should have access to the judicial system to seek redress. Certainly, if a business does not honor the data rights of an individual as defined by federal law, people should have the right to bring an action for equitable relief. If a person has suffered actual physical or significant economic harm directly caused by violation of a Federal Data Privacy law, they should be able to bring suit if, after giving notice, the FTC declines to pursue.

Too many leaders have been unwilling to venture toward possible common ground, but public opinion dictates that more must be done; otherwise, states, counties, parishes and cities will inevitably continue to act where Congress does not. It is just as certain that those data privacy laws will be inconsistent, creating a patchwork of rules based on geography, leading to unnecessary friction and complexity. Consider how much time is spent sorting through the 50 discrete data breach laws that exist today, an expense that could easily be mitigated with a single national standard.

It is clear that responsible availability of data is critical to fostering innovation. American technology has led the world into this new data-driven era, and it’s time for our laws to catch up.

To drive economic growth and benefit all Americans, we need to properly balance the interests of people, society at-large and business, and pass a data law that levels the playing field and allows American enterprise to continue thinking with data. It should ensure that transparency and accountability are fostered and enforced and help rebuild trust in the system.

Coming together to support the passage of a comprehensive and preemptive federal data privacy law is increasingly important. If not, we are conceding that we’re okay with Americans remaining distrustful of the industry, and that the rest of the world should set the standards for us.

Legal clouds gather over US cloud services, after CJEU ruling

In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the U.S. in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.

Countries like the U.S.

The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the U.S. for processing.

Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of U.S. government mass surveillance programs. Instead, the regulator went to court to raise wider concerns about the legality of the transfer mechanism.

That in turn led Europe’s top judges to nuke the Commission’s adequacy decision, which underpinned the EU-U.S. Privacy Shield — meaning the U.S. no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the U.S. Much has changed, but the data hasn’t stopped flowing — yet.

Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance.” It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.

Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.

The DPC’s statement also only went so far as to say the use of SCCs for taking data to the U.S. for processing is “questionable” — adding that case by case analysis would be key.

The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever-growing backlog of open investigations into the data processing activities of platform giants.

In May, the DPC finally submitted to other DPAs for review its first draft decision on a cross-border case (an investigation into a Twitter security breach), saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.

The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.

The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.

“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS Wojciech Wiewiórowski, in a statement, which warns against further dithering or can-kicking on the intervention front.

“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.

Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries.”

“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.

Part of the complexity of enforcement of Europe’s data protection rules is the lack of a single authority: instead, a varied patchwork of supervisory authorities is responsible for investigating complaints and issuing decisions.

Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.

Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.

In the statement, Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”

He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”

Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable,” case by case. (Or the U.K.’s ICO offering this bare minimum.)

Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” toward dealing with SCCs in the wake of the CJEU ruling.

In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.

In the case of the U.S. — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.

“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].

“The times when personal data could be transferred to the U.S. for convenience or cost savings are over after this judgment,” she added.

Both DPAs warned the ruling has implications for the use of cloud services where data is processed in other third countries where the protection of EU citizens’ data likewise cannot be guaranteed — i.e. not just the U.S.

On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.

“Now is the time for Europe’s digital independence,” she added.

Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in that given third country.

Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB). AKA the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”

Short of radical changes to U.S. surveillance law, it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbor, stood for around 15 years. Its shiny “new and improved” replacement didn’t even last five.

In the wake of the CJEU ruling, data exporters and importers are required to carry out an assessment of a country’s data regime to assess adequacy with EU legal standards before using SCCs to transfer data there.

“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.

“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”
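Stripped of the legal nuance, the decision logic the EDPB describes for an exporter reduces to something like the following sketch, where each boolean stands in for what is in reality a substantial legal assessment:

```python
# Schematic of the EDPB's post-Schrems II guidance for a data exporter
# weighing a transfer under SCCs. Each predicate stands in for what is
# in reality a detailed legal assessment, not a boolean you can compute.
def scc_transfer_decision(equivalent_protection: bool,
                          supplementary_measures_suffice: bool) -> str:
    if equivalent_protection:
        return "transfer may proceed under the SCCs"
    if supplementary_measures_suffice:
        return "transfer may proceed, with additional measures in place"
    return "suspend/terminate the transfer, or notify the supervisory authority"
```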

Again, it’s not clear what “additional measures” a platform could plausibly deploy to “fix” the gaping lack of redress afforded to foreigners by U.S. surveillance law. Major legal surgery does seem to be required to square this circle.

Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in the future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with — or else to notify a relevant supervisory authority if they intend to continue transferring data.

In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.

“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.

One thing is crystal clear: Any sense of legal certainty U.S. cloud services were deriving from the existence of the EU-U.S. Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.

In its place, a sense of déjà vu and a lot more work for lawyers.

Europe’s top court strikes down flagship EU-US data transfer mechanism

A highly anticipated ruling by Europe’s top court has just landed — striking down a flagship EU-US data flows arrangement called Privacy Shield.

“The Court of Justice invalidates Decision 2016/1250 on the adequacy of the protection provided by the EU-US Data Protection Shield,” it writes in a press release.

The case — known colloquially as Schrems II (in reference to privacy activist and lawyer, Max Schrems, whose original complaints underpin the saga) — has a long and convoluted history. In a nutshell it concerns the clash of two very different legal regimes related to people’s digital data: On the one hand US surveillance law and on the other European data protection and privacy.

Putting a little more meat on the bones, the US’ prioritizing of digital surveillance — as revealed by the 2013 revelations of NSA whistleblower, Edward Snowden; and writ large in the breadth of data capture powers allowed by Section 702 of FISA (Foreign Intelligence Surveillance Act) and executive order 12,333 (which sanctions bulk collection) — collides directly with European fundamental rights which give citizens rights to privacy and data protection, as set out in the EU Charter of Fundamental Rights, the European Convention on Human Rights and specific pieces of pan-EU legislation (such as the General Data Protection Regulation).

The Schrems II case also directly concerns Facebook, while having much broader implications for how large-scale data processing of EU citizens’ data can be done. It does not concern so-called ‘necessary’ data transfers — such as being able to send an email to book a hotel room; but rather relates to the bulk outsourcing of data processing from the EU to the US (typically undertaken for cost/ease reasons). So one knock-on effect of today’s ruling might be for companies to switch to regional data processing for European users.

The original case raised specific questions of legality around a European data transfer mechanism used by Facebook (and many other companies) for processing regional users’ data in the US — called Standard Contractual Clauses (SCCs).

Schrems challenged Facebook’s use of SCCs at the end of 2015, when he updated an earlier complaint on the same data transfer issue related to US government mass surveillance practices with Ireland’s data watchdog.

He asked the Irish Data Protection Commission (DPC) to suspend Facebook’s use of SCCs. Instead the regulator decided to take him and Facebook to court, saying it had concerns about the legality of the whole mechanism. Irish judges then referred a large number of nuanced legal questions to Europe’s top court, which brings us to today. It’s worth noting Facebook repeatedly tried and failed to block the reference to the Court of Justice. And you can now see exactly why they really wanted to derail this train.

The referral by the Irish High Court also looped in questions over a flagship European Commission data transfer agreement, called the EU-US Privacy Shield. This replaced a long standing EU-US data transfer agreement called Safe Harbor which was struck down by the CJEU in 2015 after an earlier challenge also lodged by Schrems. (Hence Schrems II — and now strike two for Schrems.)

So part of the anticipation associated with this case has been related to whether Europe’s top judges would choose to weigh in on the legality of Privacy Shield — a data transfer framework that’s being used by more than 5,300 companies at this point. And which the European Commission only put in place a handful of years ago.

Critics of the arrangement have maintained from the start that it does not resolve the fundamental clash between US surveillance and EU data protection — and in recent years, with the advent of the Trump administration, the Privacy Shield has looked increasingly precarious, as we’ve reported.

In the event, the CJEU has sided with critics who have always said Privacy Shield is the equivalent of lipstick on a pig. Today is certainly not a good day for the European Commission (which also had a very bad day in court yesterday on a separate matter).

We reached out to the EU executive for comment on Schrems II and a spokesman told us it will be holding a press briefing at noon. (We’ll dial in so stay tuned for more.)

Privacy Shield had also been under separate legal challenge — with the complainant in that case (La Quadrature du Net) arguing the mechanism breaches fundamental EU rights and does not provide adequate protection for EU citizens’ data. That case now looks moot.

On SCCs, the CJEU has not taken issue with the mechanism itself — which, unlike Privacy Shield, does not contain an assessment on the quality of the protections offered by any third country; it’s merely a tool which may be available to use if the right legal conditions exist to guarantee EU citizens’ data rights — but judges impress the obligation on data controllers to carry out an assessment of the data protection afforded by the country where the data is to be taken. If the level is not equivalent to that offered by EU law then the controller has a legal obligation to suspend the data transfers.

This also means that EU regulators — such as Ireland’s DPC — have a clear obligation to suspend data transfers which are taking place via SCCs to third countries where data protections are not adequate. Like the US. Which was exactly what Schrems had asked the Irish regulator to do in the first place.

It’s not immediately clear what alternative exists for companies such as Facebook which are using SCCs to take EU citizens’ data to the US, given judges have invalidated Privacy Shield on the grounds of the lack of protections afforded to EU citizens’ data in the country.

US surveillance law is standing in the way of their EU data flows.

Commenting on the ruling in a statement, a jubilant Schrems said: “I am very happy about the judgment. At first sight it seems the Court has followed us in all aspects. This is a total blow to the Irish DPC and Facebook. It is clear that the US will have to seriously change their surveillance laws, if US companies want to continue to play a role on the EU market.”

We’ve also reached out to Facebook and the Irish DPC for comment.

This is a developing story… 


French court slaps down Google’s appeal against $57M GDPR fine

France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.

The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.

“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.

“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.

It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.

Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).

The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.

While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line. 
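
For a sense of scale, here’s a back-of-the-envelope comparison — using Alphabet’s reported full-year 2018 revenue of roughly $136.8B as an approximate input, since the GDPR cap is calculated on global annual turnover:

```python
# Back-of-the-envelope scale check: the CNIL fine vs Alphabet's revenue,
# and vs the GDPR's theoretical 4% ceiling. Figures are approximate.
fine = 57e6        # ~$57M (the CNIL's €50M penalty, converted)
revenue = 136.8e9  # Alphabet full-year 2018 revenue, approximate

print(f"Fine as share of revenue: {fine / revenue:.3%}")          # ~0.042%
print(f"Theoretical 4% GDPR cap: ${0.04 * revenue / 1e9:.1f}B")   # ~$5.5B
```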

Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.

In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained — objecting, among other things, to a pre-ticked checkbox, which the court affirmed does not meet the requirements of the GDPR.
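
To illustrate the point in code terms, here’s a minimal sketch of consent capture that satisfies the ‘no pre-ticked boxes’ requirement — all names and structures are hypothetical, for illustration only, and not Google’s actual implementation:

```python
# A minimal sketch of GDPR-style consent capture: consent must be a
# recorded, affirmative act per purpose, never a pre-set default.
# All names here are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    # Each purpose starts with no choice recorded, rather than True:
    # a pre-ticked box is not "freely given" consent under the GDPR.
    choices: dict = field(default_factory=dict)  # purpose -> (bool, timestamp)

    def record_choice(self, purpose: str, granted: bool) -> None:
        """Store an explicit user decision, with a timestamp for audit."""
        self.choices[purpose] = (granted, datetime.now(timezone.utc))

    def has_consent(self, purpose: str) -> bool:
        """Only an explicit opt-in counts; absence of a choice is a 'no'."""
        choice = self.choices.get(purpose)
        return bool(choice and choice[0])

consent = ConsentRecord(user_id="u123")
print(consent.has_consent("personalised_ads"))  # False: no default opt-in
consent.record_choice("personalised_ads", granted=True)
print(consent.has_consent("personalised_ads"))  # True: explicit, logged opt-in
```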

So, tl;dr, the CNIL’s decision has been entirely vindicated.

Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:

People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.

GDPR came into force in 2018, updating long-standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.

However, actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross-border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.

Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following a change the company announced to the legal jurisdiction of where it processes European users’ data — moving them to Google Ireland Limited, based in Dublin, which it said applied from January 22, 2019. Ongoing Irish DPC investigations include a long running complaint related to how Google handles location data and another major probe of its adtech, to name two.

On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”

In its own statement responding to the court’s decision, the CNIL notes the court’s view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”

Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.

Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”

A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming its Google Ireland subsidiary as the regional data controller. (Other tech giants use the same or a similar playbook, seeking out the EU’s more ‘business-friendly’ regulators.)

On the wider ruling, Schrems also said: “This decision requires substantial improvements by Google. Their privacy policy now really needs to make it crystal clear what they do with users’ data. Users must also get an option to agree to only some parts of what Google does with their data and refuse other things.”

French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.

“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it added in another tweet.


UK’s NHS COVID-19 app lacks robust legal safeguards against data misuse, warns committee

A UK parliamentary committee that focuses on human rights issues has called for primary legislation to be put in place to ensure that legal protections wrap around the national coronavirus contact tracing app.

The app, called NHS COVID-19, is being fast tracked for public use — with a test ongoing this week in the Isle of Wight. It’s set to use Bluetooth Low Energy signals to log proximity events between users, with the aim of automating some contacts tracing based on an algorithmic assessment of users’ infection risk.
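
For a rough idea of what such an algorithmic assessment involves, here’s a simplified sketch that weights logged Bluetooth encounters by duration and signal strength — the weights and threshold are illustrative assumptions, not the NHS app’s actual algorithm:

```python
# Simplified sketch of risk scoring over logged BLE encounters: weight
# each encounter by duration and estimated proximity (from signal
# strength), then sum. Weights/threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Encounter:
    duration_minutes: float
    avg_rssi_dbm: float  # stronger (less negative) signal ~ closer contact

def risk_score(encounters: list[Encounter]) -> float:
    score = 0.0
    for e in encounters:
        # Crude proximity weight: treat signals above -70 dBm as 'close'.
        proximity_weight = 1.0 if e.avg_rssi_dbm > -70 else 0.3
        score += e.duration_minutes * proximity_weight
    return score

log = [Encounter(30, -65), Encounter(5, -85)]
ALERT_THRESHOLD = 15.0  # illustrative only
if risk_score(log) >= ALERT_THRESHOLD:
    print("Notify user: possible exposure")
```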

NHSX has said the app could be ready for launch within a matter of weeks, but the committee says key choices related to the system architecture create huge risks for people’s rights that demand the safeguard of primary legislation.

“Assurances from Ministers about privacy are not enough. The Government has given assurances about protection of privacy so they should have no objection to those assurances being enshrined in law,” said committee chair, Harriet Harman MP, in a statement.

“The contact tracing app involves unprecedented data gathering. There must be robust legal protection for individuals about what that data will be used for, who will have access to it and how it will be safeguarded from hacking.

“Parliament was able quickly to agree to give the Government sweeping powers. It is perfectly possible for parliament to do the same for legislation to protect privacy.”

NHSX, a digital arm of the country’s National Health Service, is in the process of testing the app, which it has said could be launched nationally within a few weeks.

The government has opted for a system design that will centralize large amounts of social graph data when users who are experiencing COVID-19 symptoms (or who have had a formal diagnosis) choose to upload their proximity logs.

Earlier this week we reported on one of the committee hearings — when it took testimony from NHSX CEO Matthew Gould and the UK’s information commissioner, Elizabeth Denham, among other witnesses.

Warning now over a lack of parliamentary scrutiny — around what it describes as an unprecedented expansion of state surveillance — the committee report calls for primary legislation to ensure “necessary legal clarity and certainty as to how data gathered could be used, stored and disposed of”.

The committee also wants to see an independent body set up to carry out oversight monitoring and guard against ‘mission creep’ — a concern that’s also been raised by a number of UK privacy and security experts in an open letter late last month.

“A Digital Contact Tracing Human Rights Commissioner should be responsible for oversight and they should be able to deal with complaints from the Public and report to Parliament,” the committee suggests.

Prior to publishing its report, the committee wrote to health minister Matt Hancock, raising a full spectrum of concerns — receiving a letter in response.

In this letter, dated May 4, Hancock told it: “We do not consider that legislation is necessary in order to build and deliver the contact tracing app. It is consistent with the powers of, and duties imposed on, the Secretary of State at a time of national crisis in the interests of protecting public health.”

The committee’s view is that Hancock’s ‘letter of assurance’ is not enough, given the huge risks attached to the state tracking citizens’ social graph data.

“The current data protection framework is contained in a number of different documents and it is nearly impossible for the public to understand what it means for their data which may be collected by the digital contact tracing system. Government’s assurances around data protection and privacy standards will not carry any weight unless the Government is prepared to enshrine these assurances in legislation,” it writes in the report, calling for a bill that it says must include a number of “provisions and protections”.

Among the protections the committee is calling for are limits on who has access to data and for what purpose.

“Data held centrally may not be accessed or processed without specific statutory authorisation, for the purpose of combatting Covid-19 and provided adequate security protections are in place for any systems on which this data may be processed,” it urges.

It also wants legal protections against data reconstruction — by different pieces of data being combined “to reconstruct information about an individual”.

The report takes a very strong line — warning that no app should be released without “strong protections and guarantees” on “efficacy and proportionality”.

“Without clear efficacy and benefits of the app, the level of data being collected will not be justifiable and it will therefore fall foul of data protection law and human rights protections,” says the committee.

The report also calls for regular reviews of the app — looking at efficacy; data safety; and “how privacy is being protected in the use of any such data”.

It also makes a blanket call for transparency, with the committee writing that the government and health authorities “must at all times be transparent about how the app, and data collected through it, is being used”.

A lack of transparency around the project was another of the concerns raised by the 177 academics who signed the open letter last month.

The government has committed to publishing data protection impact assessments for the app. But the ICO’s Denham still hadn’t had sight of this document as of this Monday.

Another call by the committee is for a time-limit to be attached to any data gathered by or generated via the app. “Any digital contact tracing (and data associated with it) must be permanently deleted when no longer required and in any event may not be kept beyond the duration of the public health emergency,” it writes.
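
In engineering terms, a minimal sketch of such a retention purge might look like the following — the 21-day window is an illustrative assumption, not a figure from the report:

```python
# Minimal sketch of time-limited retention: proximity records older
# than a fixed window are purged on every run. The 21-day window is
# an illustrative assumption.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=21)  # hypothetical retention window

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only proximity records younger than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["logged_at"] >= cutoff]

records = [
    {"contact_id": "a1", "logged_at": datetime.now(timezone.utc) - timedelta(days=30)},
    {"contact_id": "b2", "logged_at": datetime.now(timezone.utc) - timedelta(days=2)},
]
print(len(purge_expired(records)))  # 1 — the 30-day-old record is gone
```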

We’ve reached out to the Department of Health and NHSX for comment on the human rights committee’s report.

There’s another element to this fast moving story: Yesterday the Financial Times reported that the NHSX has inked a new contract with an IT supplier which suggests it might be looking to change the app architecture — moving away from a centralized database to a decentralized system for contacts tracing. Although NHSX has not confirmed any such switch at this point.

Some other countries have reversed course in their choice of app architecture after running into technical challenges related to Bluetooth. The need to ensure public trust in the system was also cited by Germany as a reason for switching to a decentralized model.

The human rights committee report highlights a specific app efficacy issue of relevance to the UK, which it points out is also linked to these system architecture choices, noting that: “The Republic of Ireland has elected to use a decentralised app and if a centralised app is in use in Northern Ireland, there are risks that the two systems will not be interoperable which would be most unfortunate.”


EU lawmakers set out guidance for coronavirus contacts tracing apps

The European Commission has published detailed guidance for Member States on developing coronavirus contacts tracing and warning apps.

The toolbox, which has been developed by the e-Health Network with the support of the Commission, is intended as a practical guide to implementing digital tools that track close contacts between device carriers as a proxy for infection risk. It seeks to steer Member States in a common, privacy-sensitive direction as they configure their digital responses to the COVID-19 pandemic.

Commenting in a statement, Thierry Breton — the EU commissioner for Internal Market — said: “Contact tracing apps to limit the spread of coronavirus can be useful, especially as part of Member States’ exit strategies. However, strong privacy safeguards are a pre-requisite for the uptake of these apps, and therefore their usefulness. While we should be innovative and make the best use of technology in fighting the pandemic, we will not compromise on our values and privacy requirements.”

“Digital tools will be crucial to protect our citizens as we gradually lift confinement measures,” added Stella Kyriakides, commissioner for health and food safety, in another supporting statement. “Mobile apps can warn us of infection risks and support health authorities with contact tracing, which is essential to break transmission chains. We need to be diligent, creative, and flexible in our approaches to opening up our societies again. We need to continue to flatten the curve – and keep it down. Without safe and compliant digital technologies, our approach will not be efficient.”

The Commission’s top-line “essential requirements” for national contacts tracing apps are that they’re:

  • voluntary;
  • approved by the national health authority;
  • privacy-preserving (“personal data is securely encrypted”); and
  • dismantled as soon as no longer needed.

In the document the Commission writes that the requirements on how to record contacts and notify individuals are “anchored in accepted epidemiological guidance, and reflect best practice on cybersecurity, and accessibility”.

“They cover how to prevent the appearance of potentially harmful unapproved apps, success criteria and collectively monitoring the effectiveness of the apps, and the outline of a communications strategy to engage with stakeholders and the people affected by these initiatives,” it adds.

Yesterday, setting out a wider roadmap to encourage a co-ordinated lifting of the coronavirus lockdown, the Commission suggested digital tools for contacts tracing will play a key role in easing quarantine measures.

Today’s toolbox clearly emphasizes the need to use manual contact tracing in parallel with digital contact tracing, though — with such apps and tools envisaged as a support for health authorities that, if widely rolled out, would free limited resources to focus manual contacts tracing where it’s most needed.

“Manual contact tracing will continue to play an important role, in particular for those, such as elderly or disabled persons, who could be more vulnerable to infection but less likely to have a mobile phone or have access to these applications,” the Commission writes. “Rolling-out mobile applications on a large-scale will significantly contribute to contact tracing efforts also allowing health authorities to carry manual tracing in a more focussed manner.”

“Mobile apps will not reach all citizens given that they rely on the possession and active use of a smart phone. Evidence from Singapore and a study by Oxford University indicate that 60-75% of a population need to have the app for it to be efficient,” it adds in a section on accessibility and inclusiveness. “However, non-users will benefit from any increased population disease control the widespread use of such an app may bring.”

The toolbox also reiterates a clear message from the Commission in recent days that “appropriate safeguards” must be embedded into digital contacts tracing systems. Though it’s less clear whether all Member States are listening to memos about respecting EU rights and freedoms as they scramble for tech and data to beat back COVID-19.

“This digital technology, if deployed correctly, could contribute substantively to containing and reversing its spread. Deployed without appropriate safeguards, however, it could have a significant negative effect on privacy and individual rights and freedoms,” the Commission writes, further warning that: “A fragmented and uncoordinated approach to contact tracing apps risks hampering the effectiveness of measures aimed at combating the COVID-19 crisis, whilst also causing adverse effects to the single market and to fundamental rights and freedoms.”

On safeguards the Commission has a clear warning for EU Member States, writing: “Any contact tracing and warning app officially recognised by Member States’ relevant authorities should present all guarantees for respect of fundamental rights, and in particular privacy and data protection, the prevention of surveillance and stigmatization.”

Its list of key safeguards notably includes avoiding the collection of any location data.

“Location data is not necessary nor recommended for the purpose of contact tracing apps, as their goal is not to follow the movements of individuals or to enforce prescriptions,” it says. “Collecting an individual’s movements in the context of contact tracing apps would violate the principle of data minimisation and would create major security and privacy issues.”

The toolbox also emphasizes that such contacts tracing/warning systems be temporary and voluntary in nature — with “automated/gentle self-dismantling, including deletion of all remaining personal data and proximity information, as soon as the crisis is over”.

“The apps’ installation should be consent-based, while providing users with complete and clear information on intended use and processing,” is another key recommendation. 

The toolbox leans towards suggesting a decentralized approach, in line with earlier Commission missives, with a push for: “Safeguards to ensure the storing of proximity data on the device and data encryption.”

Though the document also includes some discussion of alternative centralized models which involve uploading arbitrary identifiers to a backend server held by public health authorities. 

“Users cannot be directly identified through these data. Only the arbitrary identifiers generated by the app are stored on the server. The advantage is that the data stored in the server can be anonymised by aggregation and further used by public authorities as a source of important aggregated information on the intensity of contacts in the population, on the effectiveness of the app in tracing and alerting contacts and on the aggregated number of people that could potentially develop symptoms,” it writes.

“None of the two options [decentralized vs centralized] includes storing of unnecessary personal information,” it adds, leaving the door open to states that might want their public health authorities to be responsible for centralized data processing.
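
As a rough illustration of the aggregation step the Commission describes, here’s a minimal sketch in which the server publishes only bucketed counts derived from arbitrary identifiers — the field names and the suppression rule are assumptions for illustration:

```python
# Sketch of the aggregation step for the centralised option: the server
# holds only arbitrary app-generated identifiers and publishes aggregate
# counts, never per-user records. Field names are illustrative.
from collections import Counter

uploads = [
    {"pseudonym": "k3J9", "contacts_reported": 12},
    {"pseudonym": "x7Qa", "contacts_reported": 4},
    {"pseudonym": "m2Lp", "contacts_reported": 12},
]

def aggregate(uploads: list[dict], min_bucket: int = 2) -> dict:
    """Histogram of contact counts; buckets smaller than min_bucket are
    suppressed so no single upload can be singled out."""
    hist = Counter(u["contacts_reported"] for u in uploads)
    return {k: v for k, v in hist.items() if v >= min_bucket}

print(aggregate(uploads))  # {12: 2} — the unique value 4 is suppressed
```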

However the Commission draws a clear distinction between centralized approaches that use arbitrary identifiers and those that store directly-identifiable data on every user — with the latter definitely not recommended.

They would have a “major disadvantage”, per the toolbox, because they “would not keep personal data processing to the absolute minimum, and so people may be less willing to install and use the app”.

“Centralised storage of mobile phone numbers could also create risks of data breaches and cyberattacks,” the Commission further warns.

Discussing cross-border interoperability requirements, the toolbox highlights the necessity for the EU’s grab-bag of contacts tracing apps to be interoperable so that cross-border transmission chains can be broken — which requires national health authorities to be technically able to exchange available information about individuals infected with and/or exposed to COVID-19.

“Tracing and warning apps should therefore follow common EU interoperability protocols so that the previous functionalities can be performed, and particularly safeguarding rights to privacy and data protection, regardless of where a device is in the EU,” it suggests.

On preventing the spread of harmful or unlawful apps, the document suggests Member States consider setting up a national system of evaluation, accreditation and endorsement of national apps, perhaps based on a common set of criteria (which would need to be defined).

“A close cooperation between health and digital authorities should be sought whenever possible for the evaluation/endorsement of the apps,” it writes. 

The Commission also says “close cooperation with app stores will be needed to promote national apps and promote uptake while delisting harmful apps” — putting Apple and Google squarely in the frame.

Earlier this week the pair announced their own collaboration on coronavirus contacts tracing — announcing a plan to offer an API and, later, opt-in system-level contacts tracing, based on a decentralized tracking architecture with ephemeral IDs processed locally on devices, rather than being uploaded and held on a central server.
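
The core idea is that devices broadcast short-lived identifiers derived from a key that never leaves the phone. Here’s a much-simplified sketch of that concept — it is not the actual Apple/Google Exposure Notification spec (which defines its own AES-based key schedule), just an HMAC-based illustration:

```python
# Simplified sketch of the ephemeral-ID idea: a device derives rotating
# broadcast identifiers from a local daily key, so observers can't link
# broadcasts over time. Illustration only, not the real spec.
import hmac, hashlib, os

daily_key = os.urandom(16)  # stays on the device

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    """Derive the rotating identifier broadcast during one ~15-min interval."""
    msg = b"EphID" + interval.to_bytes(4, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

# The ID changes every interval; without the daily key the sequence looks
# unlinkable to anyone passively scanning for Bluetooth broadcasts.
print(ephemeral_id(daily_key, 0).hex())
print(ephemeral_id(daily_key, 1).hex())
```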

Given the dominance of the two tech giants their decision to collaborate on a decentralized system may effectively deprive national health authorities of the option to gain buy in for systems that would give those publicly funded bodies access to anonymized and aggregated data for coronavirus modelling and/or tracking purposes. Which should, in the middle of a pandemic, give more than a little pause for thought.

A note in the toolbox mentions Apple and Google — with the Commission writing that: “By the end of April 2020, Member States with the Commission will seek clarifications on the solution proposed by Google and Apple with regard to contact tracing functionality on Android and iOS in order to ensure that their initiative is compatible with the EU common approach.”
