Woman lost @metaverse Instagram handle days after Facebook name change

Thea-Mai Baumann had posted to Instagram using the @metaverse handle for nearly a decade when her account was disabled on November 2.

“Your account has been blocked for pretending to be someone else,” the app told her. 

Baumann wasn’t exactly sure what had happened, but the timing was curious. The account block came just days after Facebook had announced its new name, Meta. CEO Mark Zuckerberg said the name reflected the company’s new focus on its vision of the metaverse, a virtual world meant to facilitate commerce, communication, and more. Baumann’s @metaverse handle was suddenly a hot commodity.

#digital-rights, #facebook, #instagram, #meta, #metaverse, #policy, #social-media

FTC says health apps must notify consumers about data breaches — or face fines

The U.S. Federal Trade Commission (FTC) has warned that apps and devices which collect personal health information must notify consumers if their data is breached or shared with third parties without their permission.

In a 3-2 vote on Wednesday, the FTC approved a new policy statement clarifying the 2009 Health Breach Notification Rule, which requires companies handling health records to notify consumers if their data is accessed without permission, such as through a breach. The rule has now been extended to apply to health apps and devices — specifically calling out apps that track fertility data, fitness, and blood glucose — which “too often fail to invest in adequate privacy and data security,” according to FTC chair Lina Khan.

“Digital apps are routinely caught playing fast and loose with user data, leaving users’ sensitive health information susceptible to hacks and breaches,” said Khan in a statement, pointing to a study published this year in the British Medical Journal that found health apps suffer from “serious problems” ranging from the insecure transmission of user data to the unauthorized sharing of data with advertisers.

There have also been a number of high-profile breaches involving health apps in recent years. Babylon Health, a U.K. AI chatbot and telehealth startup, suffered a data breach last year after a “software error” allowed users to access other patients’ video consultations, while period-tracking app Flo was recently found to be sharing users’ health data with third-party analytics and marketing services.

Under the new rule, any company offering health apps or connected fitness devices that collect personal health data must notify consumers if their data has been compromised. However, the rule doesn’t define a “data breach” as just a cybersecurity intrusion; unauthorized access to personal data, including the sharing of information without an individual’s permission, can also trigger notification obligations.

“While this rule imposes some measure of accountability on tech firms that abuse our personal information, a more fundamental problem is the commodification of sensitive health information, where companies can use this data to feed behavioral ads or power user analytics,” Khan said.

If companies don’t comply with the rule, the FTC said it will enforce it “vigorously,” with fines of $43,792 per violation per day.

The FTC has been cracking down on privacy violations in recent weeks. Earlier this month, the agency unanimously voted to ban spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry for harvesting mobile data on thousands of people and leaving it on the open internet.

#articles, #artificial-intelligence, #babylon-health, #chair, #data-breach, #digital-rights, #flo, #government, #identity-management, #lina-khan, #open-internet, #security, #security-breaches, #social-issues, #spyfone, #terms-of-service

20 years later, unchecked data collection is part of 9/11’s legacy

Almost every American adult remembers, in vivid detail, where they were the morning of September 11, 2001. I was on the second floor of the West Wing of the White House, at a National Economic Council Staff meeting — and I will never forget the moment the Secret Service agent abruptly entered the room, shouting: “You must leave now. Ladies, take off your high heels and go!”

Just an hour before, as the National Economic Council White House technology adviser, I was briefing the deputy chief of staff on final details of an Oval Office meeting with the president, scheduled for September 13. Finally, we were ready to get the president’s sign-off to send a federal privacy bill to Capitol Hill — effectively a federal version of the California Privacy Rights Act, but stronger. The legislation would put guardrails around citizens’ data — requiring opt-in consent for their information to be shared, governing how their data could be collected and how it would be used.

But that morning, the world changed. We evacuated the White House and the day unfolded with tragedy after tragedy sending shockwaves through our nation and the world. To be in D.C. that day was to witness and personally experience what felt like the entire spectrum of human emotion: grief, solidarity, disbelief, strength, resolve, urgency … hope.

Much has been written about September 11, but I want to spend a moment reflecting on the day after.

When the National Economic Council staff came back into the office on September 12, I will never forget what Larry Lindsey, our boss at the time, told us: “I would understand it if some of you don’t feel comfortable being here. We are all targets. And I won’t appeal to your patriotism or faith. But I will — as we are all economists in this room — appeal to your rational self-interest. If we back away now, others will follow, and who will be there to defend the pillars of our society? We are holding the line here today. Act in a way that will make this country proud. And don’t abandon your commitment to freedom in the name of safety and security.”

There is so much to be proud of about how the country pulled together and how our government responded to the tragic events of September 11. But as a professional in the cybersecurity and data privacy field, I also reflect on Larry’s advice, and on many of the critical lessons learned in the years that followed — especially when it comes to defending the pillars of our society.

Even though our collective memories of that day still feel fresh, 20 years have passed, and we now understand the vital role that data played in the months leading up to the 9/11 terrorist attacks. Unfortunately, by holding intelligence data too closely in disparate locations, we failed to connect the dots that could have saved thousands of lives. These data silos obscured the patterns that would have been clear if only a framework had been in place to share information securely.

So, we told ourselves, “Never again,” and government officials set out to increase the amount of intelligence they could gather — without thinking through the significant consequences not only for our civil liberties but also for the security of our data. And so the Patriot Act came into effect, with 20 years of surveillance requests from intelligence and law enforcement agencies crammed into the bill. Having been in the room for the Patriot Act negotiations with the Department of Justice, I can confidently say that, while the intentions may have been understandable — to prevent another terrorist attack and protect our people — the downstream negative consequences were sweeping and undeniable.

Domestic wiretapping and mass surveillance became the norm, chipping away at personal privacy, data security and public trust. This level of surveillance set a dangerous precedent for data privacy, meanwhile yielding marginal results in the fight against terrorism.

Unfortunately, the federal privacy bill that we had hoped to bring to Capitol Hill the very week of 9/11 — the bill that would have solidified individual privacy protections — was mothballed.

Over the subsequent years, it became easier and cheaper to collect and store massive amounts of surveillance data. As a result, tech and cloud giants quickly scaled up and dominated the internet. As more data was collected (both by the public and the private sectors), more and more people gained visibility into individuals’ private data — but no meaningful privacy protections were put in place to accompany that expanded access.

Now, 20 years later, we find ourselves with a glut of unfettered data collection and access, with behemoth tech companies and IoT devices collecting data points on our movements, conversations, friends, families and bodies. Massive and costly data leaks — whether from ransomware or simply misconfiguring a cloud bucket — have become so common that they barely make the front page. As a result, public trust has eroded. While privacy should be a human right, it’s not one that’s being protected — and everyone knows it.

This is evident in the humanitarian crisis we have seen in Afghanistan. Just one example: Tragically, the Taliban have seized U.S. military devices that contain biometric data on Afghan citizens who supported coalition forces — data that would make it easy for the Taliban to identify and track down those individuals and their families. This is a worst-case scenario of sensitive, private data falling into the wrong hands, and we did not do enough to protect it.

This is unacceptable. Twenty years later, we are once again telling ourselves, “Never again.” 9/11 should have been a reckoning of how we manage, share and safeguard intelligence data, but we still have not gotten it right. And in both cases — in 2001 and 2021 — the way we manage data has a life-or-death impact.

This is not to say we aren’t making progress: The White House and U.S. Department of Defense have turned a spotlight on cybersecurity and Zero Trust data protection this year, with an executive order to spur action toward fortifying federal data systems. The good news is that we have the technology we need to safeguard this sensitive data while still making it shareable. In addition, we can put contingency plans in place to protect data if it falls into the wrong hands. But, unfortunately, we just aren’t moving fast enough — and the slower we solve this problem of secure data management, the more innocent lives will be lost along the way.

Looking ahead to the next 20 years, we have an opportunity to rebuild trust and transform the way we manage data privacy. First and foremost, we have to put some guardrails in place. We need a privacy framework that gives individuals autonomy over their own data by default.

This, of course, means that public- and private-sector organizations have to do the technical, behind-the-scenes work to make this data ownership and control possible, tying identity to data and granting ownership back to the individual. This is not a quick or simple fix, but it’s achievable — and necessary — to protect our people, whether U.S. citizens, residents or allies worldwide.

To accelerate the adoption of such data protection, we need an ecosystem of free, accessible and open source solutions that are interoperable and flexible. By layering data protection and privacy in with existing processes and solutions, government entities can securely collect and aggregate data in a way that reveals the big picture without compromising individuals’ privacy. We have these capabilities today, and now is the time to leverage them.

Because the truth is, with the sheer volume of data that’s being gathered and stored, there are far more opportunities for American data to fall into the wrong hands. The devices seized by the Taliban are just a tiny fraction of the data that’s currently at stake. As we’ve seen so far this year, nation-state cyberattacks are escalating. This threat to human life is not going away.

Larry’s words from September 12, 2001, still resonate: If we back away now, who will be there to defend the pillars of our society? It’s up to us — public- and private-sector technology leaders — to protect and defend the privacy of our people without compromising their freedoms.

It’s not too late for us to rebuild public trust, starting with data. But, 20 years from now, will we look back on this decade as a turning point in protecting and upholding individuals’ right to privacy, or will we still be saying, “Never again,” again and again?

#column, #counter-terrorism, #department-of-justice, #digital-rights, #mass-surveillance, #national-security, #opinion, #policy, #privacy, #taliban, #zero-trust

Have ‘The Privacy Talk’ with your business partners

As a parent of teenagers, I’m used to having tough, sometimes even awkward, conversations about topics that are complex but important. Most parents will likely agree with me when I say those types of conversations never get easier, but over time, you tend to develop a roadmap of how to approach the subject, how to make sure you’re being clear, and how to answer hard questions.

And like many parents, I quickly learned that my children have just as much to teach me as I can teach them. I’ve learned that tough conversations build trust.

I’ve applied this lesson about trust-building conversations to an extremely important aspect of my role as the chief legal officer at Foursquare: Conducting “The Privacy Talk.”

The discussion should convey an understanding of how the legislative and regulatory environment is going to affect product offerings, including what’s being done to get ahead of that change.

What exactly is ‘The Privacy Talk’?

It’s the conversation that goes beyond the written, publicly posted privacy policy and dives deep into a customer, vendor, supplier or partner’s approach to ethics. This conversation seeks to convey and align the expectations that two companies must have at the beginning of a new engagement.

RFIs may ask a lot of questions about privacy compliance, information security and data ethics. But they’re no match for asking your prospective partner to hop on a Zoom call and walk you through their broader approach. Unless you hear it first-hand, it can be hard to discern whether a partner is thinking strategically about privacy, whether they are truly committed to data ethics, and how compliance is woven into their organization’s culture.

#column, #digital-advertising, #digital-rights, #ec-column, #ec-how-to, #foursquare, #identity-management, #lawyers, #privacy, #security, #startups, #terms-of-service, #verified-experts

UK names John Edwards as its choice for next data protection chief as gov’t eyes watering down privacy standards

The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.

Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.

He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social network — accusing Facebook of not complying with the country’s privacy laws.

An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.

If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.

Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.

But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.

For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.

Dowden has said that the emergency data-sharing waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.

A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.

The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.

Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.

Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.

“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.

The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it plans to reduce the UK’s currently world-class privacy standards and replace them with weaker protections for people’s data.

If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.

It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.

We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.

Data protection experts are already warning of a regulatory stooge.

The Telegraph, meanwhile, suggests Edwards is seen by the government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.

In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.

All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.

In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”

The lurking iceberg for the government is of course that if it wades in and rips up a carefully balanced, gold-standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its (or the market’s) choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.

You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…

UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.

Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.

The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.

This is because its current data adequacy deal with the bloc — which allows for EU citizens’ data to continue flowing freely to the UK — was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR. So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy. Meaning the deal could topple.

Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”.

Add to that, the adequacy deal is also the first with a baked-in sunset clause — meaning it will automatically expire in four years. So even if the Commission avoids taking proactive action over slipping privacy standards in the UK, there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.

The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.

Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships”, saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre and Colombia.

Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.

“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.

As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).

So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.

Everyone hates cookie banners, sure, but that’s a case for strengthening, not weakening, people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns, so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on citizens’ data.

#artificial-intelligence, #australia, #brazil, #colombia, #data-mining, #data-protection, #data-security, #digital-rights, #elizabeth-denham, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #human-rights, #india, #indonesia, #john-edwards, #kenya, #korea, #matt-hancock, #new-zealand, #nhs, #oliver-dowden, #privacy, #singapore, #social-issues, #social-media, #uk-government, #united-kingdom, #united-states

Senators challenge TikTok’s ‘alarming’ plan to collect users’ voice and face biometrics

TikTok’s plans to collect biometric identifiers from its users have prompted concern among U.S. lawmakers, who are demanding the company reveal exactly what information it collects and what it plans to do with that data.

In a letter sent earlier this month to TikTok CEO Shou Zi Chew, Sens. Amy Klobuchar (D-MN) and John Thune (R-SD) say they are “alarmed” by the recent change to TikTok’s privacy policy, which allows the company to “automatically collect biometric data, including certain physical and behavioral characteristics from video content posted by its users.”

TechCrunch first reported details of the new privacy policy back in June, when TikTok said it would seek “required permissions” to collect “faceprints and voiceprints” where required by law, but failed to elaborate on whether it was considering federal law, state laws, or both. (Only a handful of U.S. states have biometric privacy laws, including Illinois, Washington, California, Texas and New York.)

Klobuchar and Thune’s letter asks TikTok to explain explicitly what constitutes a “faceprint” and a “voiceprint”, as well as how this data will be used and how long it will be retained. The senators also quizzed TikTok on whether any data is gathered on users under the age of 18; whether it makes any inferences about its users based on the biometric data it collects; and asked it to provide a list of all third parties that have access to the data.

“The coronavirus pandemic led to an increase in online activity, which has magnified the need to protect consumers’ privacy,” the letter reads. “This is especially true for children and teenagers, who comprise more than 32% of TikTok’s active users and have relied on online applications such as TikTok for entertainment and for interaction with their friends and loved ones.”

TikTok has been given until August 25 to respond to the lawmakers’ questions. A TikTok spokesperson did not immediately comment.

This isn’t the first time TikTok’s excessive data collection plans have come under scrutiny. Earlier this year, the company paid out $92 million to settle a class-action lawsuit claiming it unlawfully collected users’ biometric data and shared it with third parties. This came after the FTC in 2019 slapped TikTok with a $5.7 million fine for violating the Children’s Online Privacy Protection Act (COPPA), which requires apps to receive parental permission before collecting a minor’s data.

#amy-klobuchar, #digital-rights, #human-rights, #privacy, #privacy-policy, #security, #tiktok

Stop using Zoom, Hamburg’s DPA warns state government

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR), since user data is transferred to the US for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the US (Privacy Shield), finding US surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of US-based digital services because of the data transfer issue, and in some instances publicly warning against the use of mainstream US tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from US giants Amazon and Microsoft over the same data transfer concern.

At the same time, negotiations between the European Commission and the Biden administration to seek a replacement data transfer deal remain ongoing. However, EU lawmakers have repeatedly warned against any quick fix — saying reform of US surveillance law is likely required before there can be a revived Privacy Shield. And as the legal limbo continues, a growing number of public bodies in Europe are facing pressure to ditch US-based services in favor of compliant local alternatives.

In the Hamburg case, the DPA says it took the step of issuing the Senate Chancellery with a public warning after the body did not provide an adequate response to concerns raised earlier.

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure earlier, via a hearing, on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool. Nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of issuing a formal warning, under Article 58(2)(a) of the GDPR.

In a statement, Ulrich Kühn, the acting Hamburg commissioner for data protection and freedom of information, dubbed it “incomprehensible” that the regional body was continuing to flout EU law in order to use Zoom — pointing out that a local alternative, provided by the German company Dataport (which supplies software to a number of state, regional and local government bodies), is readily available.

In the statement [translated with Google Translate], Kühn said: “Public bodies are particularly bound to comply with the law. It is therefore more than regrettable that such a formal step had to be taken. At the [Senate Chancellery of the Free and Hanseatic City of Hamburg], all employees have access to a tried and tested video conference tool that is unproblematic with regard to third-country transmission. As the central service provider, Dataport also provides additional video conference systems in its own data centers. These are used successfully in other regions such as Schleswig-Holstein. It is therefore incomprehensible why the Senate Chancellery insists on an additional and legally highly problematic system.”

We’ve reached out to the Hamburg DPA and Senate Chancellory with questions.

Zoom has also been contacted for comment.

#data-protection, #data-security, #dataport, #digital-rights, #eu-us-privacy-shield, #europe, #european-commission, #european-union, #general-data-protection-regulation, #government, #hamburg, #personal-data, #privacy, #schrems-ii, #surveillance-law, #united-states, #video-conferencing, #zoom

With liberty and privacy for some: Widening inequality on the digital frontier

Privacy is emotional — we often value privacy the most when we feel vulnerable or powerless when confronted with creepy data practices. But in the eyes of the court, emotions don’t always constitute harm or a reason for structural change in how privacy is legally codified.

It might take a material perspective on widening privacy disparities — and their implication in broader social inequality — to catalyze the privacy improvements the U.S. desperately needs.

Apple’s leaders announced their plans for the App Tracking Transparency (ATT) update in 2020. In short, iOS users can refuse an app’s ability to track their activity on other apps and websites. The ATT update has led to a sweeping three-quarters of iOS users opting out of cross-app tracking.
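
For developers, the opt-in gate is a single system prompt. Below is a minimal Swift sketch (an illustration, not code from the article) of how an app requests tracking permission under ATT; the wording shown to the user comes from the app’s NSUserTrackingUsageDescription string in its Info.plist:

```swift
import AppTrackingTransparency
import AdSupport

// Ask for cross-app tracking permission (iOS 14.5+). The app must declare
// NSUserTrackingUsageDescription in Info.plist before calling this.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // User opted in: the advertising identifier (IDFA) is available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // User opted out, the choice roughly three-quarters of users make.
            // The IDFA is returned as all zeroes, so it can't key a profile.
            print("Tracking not authorized")
        @unknown default:
            print("Tracking not authorized")
        }
    }
}
```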

With less data available to advertisers looking to develop individual profiles for targeted advertising, targeted ads for iOS users look less effective and less appealing to ad agencies. As a result, new findings show that advertisers have cut their advertising spending on iOS devices by one-third.

They are redirecting that capital into advertising on Android systems, which account for 42.06% of the U.S. mobile OS market share, compared to iOS at 57.62%.

Beyond a vague sense of creepiness, privacy disparities increasingly pose risks of material harm: emotional, reputational, economic and otherwise. If privacy belongs to all of us, as many tech companies say, then why does it cost so much? Whenever one user base gears up with privacy protections, companies simply redirect their data practices along the path of least resistance, toward the populations with fewer resources, legal or technical, to control their data.

More than just ads

As more money goes into Android ads, we could expect advertising techniques to become more sophisticated, or at least more aggressive. It is not illegal for companies to engage in targeted advertising, so long as it is done in compliance with users’ legal rights to opt out under relevant laws like CCPA in California.

This raises two immediate issues. First, residents of every state except California currently lack such opt-out rights. Second, granting some users the right to opt out of targeted advertising strongly implies that there are harms, or at least risks, to targeted advertising. And indeed, there can be.

Targeted advertising involves third parties building and maintaining behind-the-scenes profiles of users based on their behavior. Gathering data on app activity, such as fitness habits or shopping patterns, could lead to further inferences about sensitive aspects of a user’s life.
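
To make the mechanics concrete, here is a toy Swift sketch (all names and rules are hypothetical, not any vendor’s actual pipeline) of how a behind-the-scenes profile can accumulate sensitive inferences from innocuous-looking app events:

```swift
// Toy model of behind-the-scenes profile building. Every identifier here is
// made up for illustration; real ad-tech pipelines are far larger, but the
// principle (events in, unconsented inferences out) is the same.
struct AppEvent {
    let app: String      // e.g. "FitnessTracker"
    let action: String   // e.g. "logged_run"
}

struct AdProfile {
    var inferredTraits: Set<String> = []

    // Each observed event can trigger an inference, correct or not,
    // about a sensitive aspect of the user's life.
    mutating func ingest(_ event: AppEvent) {
        switch (event.app, event.action) {
        case ("FitnessTracker", _):
            inferredTraits.insert("health-conscious")
        case ("PharmacyShop", "searched_prenatal_vitamins"):
            inferredTraits.insert("possibly-pregnant") // sensitive inference
        default:
            break
        }
    }
}

var profile = AdProfile()
profile.ingest(AppEvent(app: "PharmacyShop", action: "searched_prenatal_vitamins"))
print(profile.inferredTraits) // ["possibly-pregnant"]
```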

At this point, a representation of a user exists in an under-regulated data system containing data — whether correctly or incorrectly inferred — that the user did not consent to sharing. (Unless the user lives in California; but let’s suppose they live anywhere else in the U.S.)

Further, research finds that targeted advertising, in building detailed profiles of users, can enable discrimination in housing and employment opportunities, sometimes in violation of federal law. And targeted advertising can impede individuals’ autonomy, preemptively narrowing their window of purchasing options whether they want it narrowed or not. On the other hand, targeted advertising can support niche or grassroots organizations by connecting them directly with interested audiences. Regardless of one’s stance on targeted advertising, the underlying problem is that users have no say in whether they are subject to it.

Targeted advertising is a massive and booming practice, but it is only one practice within a broader web of business activities that do not prioritize respect for users’ data. And these practices are not illegal in much of the U.S. Instead of the law, your pocketbook can keep you clear of data disrespect.

Privacy as a luxury

Prominent tech companies, particularly Apple, declare privacy a human right, which makes complete sense from a business standpoint. In the absence of the U.S. federal government codifying privacy rights for all consumers, a bold privacy commitment from a private company sounds pretty appealing.

If the government isn’t going to set a privacy standard, at least my phone manufacturer will. Even though only 6% of Americans claim to understand how companies use their data, it is companies that are making the broad privacy moves.

But if those declaring privacy as a human right only make products affordable to some, what does that say about our human rights? Apple products skew toward wealthier, more educated consumers compared to competitors’ products. This projects a troubling future of increasingly exacerbated privacy disparities between the haves and the have-nots, where a feedback loop is established: Those with fewer resources to acquire privacy protections may have fewer resources to navigate the technical and legal challenges that come with a practice as convoluted as targeted advertising.

Don’t take this as me siding with Facebook in its feud with Apple about privacy versus affordability (see: systemic access control issues recently coming to light). In my view, neither side of that battle is winning.

We deserve meaningful privacy protections that everyone can afford. In fact, to turn the phrase on its head, we deserve meaningful privacy protections that no company can afford to omit from their products. We deserve a both/and approach: privacy that is both meaningful and widely available.

Our next steps forward

Looking ahead, there are two key areas for privacy progress: privacy legislation and privacy tooling for developers. I again invoke the both/and approach. We need lawmakers, rather than tech companies, setting reliable privacy standards for consumers. And we need widely available developer tools that give developers no reason — financially, logistically or otherwise — not to implement privacy at the product level.

On privacy legislation, I believe that policy professionals are already raising some excellent points, so I’ll direct you to some of my favorite recent writing from them.

Stacey Gray and her team at the Future of Privacy Forum have begun an excellent blog series on how a federal privacy law could interact with the emerging patchwork of state laws.

Joe Jerome published an outstanding recap of the 2021 state-level privacy landscape and the routes toward widespread privacy protections for all Americans. A key takeaway: The effectiveness of privacy regulation hinges on how well it harmonizes among individuals and businesses. That’s not to say that regulation should be business-friendly, but rather that businesses should be able to reference clear privacy standards so they can confidently and respectfully handle everyday folks’ data.

On privacy tooling, if we make privacy tools readily accessible and affordable for all developers, we really leave tech with zero excuses not to meet privacy standards. Take the issue of access control, for instance. Engineers attempt to build manual controls over which personnel and end users can access various data in a complex data ecosystem already populated with sensitive personal information.

The challenge is twofold. First, the horse has already bolted. Technical debt accumulates rapidly, while privacy has remained outside of software development. Engineers need tools that enable them to build privacy features like nuanced access control prior to production.
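
As one sketch of what “nuanced access control prior to production” could look like, here is a minimal, hypothetical Swift example (no real library API, just illustrative types) in which a read is permitted only when the requester’s role and declared purpose both match a policy entry for that data category:

```swift
// Hypothetical purpose-based access control: a read is allowed only if the
// requester's role AND declared purpose appear in the policy for that
// data category. All types below are invented for illustration.
enum DataCategory { case contactInfo, healthData, usageAnalytics }
enum Role { case supportAgent, dataScientist, billing }
enum Purpose { case customerSupport, productAnalytics, invoicing }

struct AccessPolicy {
    let allowed: [DataCategory: [(Role, Purpose)]]

    func permits(role: Role, purpose: Purpose, category: DataCategory) -> Bool {
        allowed[category]?.contains { $0 == (role, purpose) } ?? false
    }
}

let policy = AccessPolicy(allowed: [
    .contactInfo:    [(.supportAgent, .customerSupport), (.billing, .invoicing)],
    .usageAnalytics: [(.dataScientist, .productAnalytics)],
    .healthData:     []  // unreadable until a deliberate policy entry is added
])

// A support agent can read contact info for support purposes, nothing more:
print(policy.permits(role: .supportAgent, purpose: .customerSupport, category: .contactInfo)) // true
print(policy.permits(role: .supportAgent, purpose: .customerSupport, category: .healthData))  // false
```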

This leads into the second aspect of the challenge: Even if the engineers overcame all of the technical debt and could make structural privacy improvements at the code level, what standards and widely available tools exist for them to use?

As a June 2021 report from the Future of Privacy Forum makes clear, privacy technology is in dire need of consistent definitions, which are required for widespread adoption of trustworthy privacy tools. With more consistent definitions and widely available developer tools for privacy, these technical transformations translate into material improvements in how tech at large — not just tech of Brand XYZ — gives users control over their data.

We need privacy rules set by an institution that is not itself playing the game. Regulation alone cannot save us from modern privacy perils, but it is a vital ingredient in any viable solution.

Alongside regulation, every software engineering team should have privacy tools immediately available. When civil engineers are building a bridge, they cannot make it safe for a subset of the population; it must work for all who cross it. The same must hold for our data infrastructure, lest we exacerbate disparities within and beyond the digital realm.

#android, #apple, #california, #column, #developer-tools, #digital-rights, #facebook, #human-rights, #ios-devices, #opinion, #policy, #privacy, #software-development, #tc, #united-states

Kill the standard privacy notice

Privacy is a word on everyone’s mind nowadays — even Big Tech is getting in on it. Most recently, Apple joined the user privacy movement with its App Tracking Transparency feature, a cornerstone of the iOS 14.5 software update. Earlier this year, Tim Cook even mentioned privacy in the same breath as the climate crisis and labeled it one of the top issues of the 21st century.

Apple’s solution is a strong move in the right direction and sends a powerful message, but is it enough? Ostensibly, it relies on users to get informed about how apps track them and, if they wish to, regulate or turn off the tracking. In the words of Soviet satirists Ilf and Petrov, “The cause of helping the drowning is in the drowning’s own hands.” It’s a system that, historically speaking, has not produced great results.

Today’s online consumer is drowning indeed — in the deluge of privacy policies, cookie pop-ups, and various web and app tracking permissions. New regulations just pile more privacy disclosures on, and businesses are mostly happy to oblige. They pass the information burden to the end user, whose only rational move is to accept blindly because reading through the heaps of information does not make sense rationally, economically or subjectively. To save that overburdened consumer, we have only one option: We have to kill the standard privacy notice.

A notice that goes unnoticed

Studies show that online consumers often struggle with standard-form notices. A majority of online users expect that if a company has published a document with the title “privacy notice” or “privacy policy” on its website, then it will not collect, analyze or share their personal information with third parties. At the same time, a similar majority of consumers have serious concerns about being tracked and targeted for intrusive advertising.

It’s a privacy double whammy. To get on the platform, users have to accept the privacy notice. By accepting it, they allow tracking and intrusive ads. If they actually read the privacy notice before accepting, that costs them valuable time and can be challenging and frustrating. If Facebook’s privacy policy is as hard to comprehend as German philosopher Immanuel Kant’s “Critique of Pure Reason,” we have a problem. In the end, the option to decline is merely a formality; not accepting the privacy policy means not getting access to the platform.

So, what use is the privacy notice in its current form? For companies, on the one hand, it legitimizes their data-processing practices. It’s usually a document created by lawyers, for lawyers, without a second’s thought for the interests of the real users. Safe in the knowledge that nobody reads such disclosures, some businesses not only deliberately fail to make the text understandable, they pack it with all kinds of silly or refreshingly honest content.

One company even claimed its users’ immortal souls and their right to eternal life. For consumers, on the other hand, the obligatory checkmark next to the privacy notice can be a nuisance — or it can lull them into a false sense of data security.

On the unlikely occasion that a privacy notice is so blatantly disagreeable that it pushes users away from one platform and toward an alternative, this is often not a real solution, either. Monetizing data has become the dominant business model online, and personal data ultimately flows toward the same Big Tech giants. Even if you’re not directly on their platforms, many of the platforms you are on work with Big Tech through plugins, buttons, cookies and the like. Resistance seems futile.

A regulatory framework from another time

If companies are deliberately producing opaque privacy notices that nobody reads, maybe lawmakers and regulators could intervene and help improve users’ data privacy? Historically, this has not been the case. In pre-digital times, lawmakers were responsible for a multitude of pre-contractual disclosure mandates that resulted in the heaps of paperwork that accompany leasing an apartment, buying a car, opening a bank account or taking out a mortgage.

When it comes to the digital realm, legislation has been reactive, not proactive, and it lags behind technological development considerably. It took the EU about two decades of Google and one decade of Facebook to come up with the General Data Protection Regulation, a comprehensive piece of legislation that still does not rein in rampant data collection practices. This is just a symptom of a larger problem: Today’s politicians and legislators do not understand the internet. How do you regulate something if you don’t know how it works?

Many lawmakers on both sides of the Atlantic often do not understand how tech companies operate and how they make their money with user data — or pretend not to understand for various reasons. Instead of tackling the issue themselves, legislators ask companies to inform the users directly, in whatever “clear and comprehensible” language they see fit. It’s part laissez-faire, part “I don’t care.”

Thanks to this attitude, we are fighting 21st-century challenges — such as online data privacy, profiling and digital identity theft — with the legal logic of Ancient Rome: consent. Not to knock Roman law, but Marcus Aurelius never had to read the iTunes Privacy Policy in full.

Online businesses and major platforms, therefore, gear their privacy notices and other relevant data disclosures toward obtaining consent, not toward educating and explaining. It keeps the data flowing and it makes for great PR when the opportunity for a token privacy gesture appears. Still, a growing number of users are waking up to the setup. It is time for a change.

A call to companies to do the right thing

We have seen that it’s difficult for users to understand all the “legalese,” and they have nowhere to go even if they did. We have also noted lawmakers’ inadequate knowledge and motivation to regulate tech properly. It is up to digital businesses themselves to act, now that growing numbers of online users are stating their discontent and frustration. If data privacy is one of our time’s greatest challenges, it requires concerted action. Just like countries around the world pledged to lower their carbon emissions, enterprises must also band together and commit to protecting their users’ privacy.

So, here’s a plea to tech companies large and small: Kill your standard privacy notices! Don’t write texts that almost no user understands to protect yourselves against potential legal claims so that you can continue collecting private user data. Instead, use privacy notices that are addressed to your users and that everybody can understand.

And don’t stop there — don’t only talk the talk but walk the walk: Develop products that do not rely on the collection and processing of personal data. Return to the internet’s open-source, protocol roots, and deliver value to your community, not to Big Tech and their advertisers. It is possible, it is profitable and it is rewarding.

#apple, #column, #data-protection, #data-security, #digital-rights, #european-union, #facebook, #general-data-protection-regulation, #google, #human-rights, #opinion, #privacy, #privacy-policy, #tc, #terms-of-service

Dutch court will hear another Facebook privacy lawsuit

Privacy litigation that’s being brought against Facebook by two not-for-profits in the Netherlands can go ahead, an Amsterdam court has ruled. The case will be heard in October.

Since 2019, the Amsterdam-based Data Privacy Foundation (DPS) has been seeking to bring a case against Facebook over its rampant collection of Internet users’ data — arguing the company does not have a proper legal basis for the processing.

It has been joined in the action by the Dutch consumer protection not-for-profit, Consumentenbond.

The pair are seeking redress for Facebook users in the Netherlands for alleged violations of their privacy rights — both by suing for compensation for individuals and by calling for Facebook to end its privacy-hostile practices.

European Union law allows for collective redress across a number of areas, including data protection rights, enabling qualified entities to bring representative actions on behalf of rights holders. And the provision looks like an increasingly important tool for furthering privacy enforcement in the bloc, given how European data protection regulators have continued to lack uniform vigor in upholding rights set out in legislation such as the General Data Protection Regulation (which, despite coming into application in 2018, has yet to be seriously applied against platform giants like Facebook).

Returning to the Dutch litigation, Facebook denies any abuse and claims it respects user privacy and provides people with “meaningful control” over how their data gets exploited.

But it has fought the litigation by seeking to block it on procedural grounds — arguing for the suit to be tossed by claiming the DPS does not fit the criteria for bringing a privacy claim on behalf of others and that the Amsterdam court has no jurisdiction as its European business is subject to Irish, rather than Dutch, law.

However the Amsterdam District Court rejected its arguments, clearing the way for the litigation to proceed.

Contacted for comment on the ruling, a Facebook spokesperson told us:

“We are currently reviewing the Court’s decision. The ruling was about the procedural part of the case, not a finding on the merits of the action, and we will continue to defend our position in court. We care about our users in the Netherlands and protecting their privacy is important to us. We build products to help people connect with people and content they care about while honoring their privacy choices. Users have meaningful control over the data that they share on Facebook and we provide transparency around how their data is used. We also offer people tools to access, download, and delete their information and we are committed to the principles of GDPR.”

In a statement today, the Consumentenbond‘s director, Sandra Molenaar, described the ruling as “a big boost for the more than 10 million victims” of Facebook’s practices in the country.

“Facebook has tried to throw up all kinds of legal hurdles and to delay this case as much as possible but fortunately the company has not succeeded. Now we can really get to work and ensure that consumers get what they are entitled to,” she added in the written remarks (translated from Dutch with Google Translate).

In another supporting statement, Dick Bouma, chairman of DPS, added: “This is a nice and important first step for the court. The ruling shows that it pays to take a collective stand against tech giants that violate privacy rights.”

The two not-for-profits are urging Facebook users in the Netherlands to sign up to be part of the representative action (and potentially receive compensation) — saying more than 185,000 people have registered so far.

The suit argues that Facebook users are ‘paying’ for the ‘free’ service with their data — contending the tech giant does not have a valid legal basis to process people’s information because it has not provided users with comprehensive information about the data it is gathering from and on them, nor what it does with it.

So — in essence — the argument is that Facebook’s tracking and targeting is in breach of EU privacy law.

The legal challenge follows an earlier investigation (back in 2014) of Facebook’s business by the Dutch data protection authority which identified problems with its privacy policy and — in a 2017 report — found the company to be processing users’ data without their knowledge or consent.

However, since 2018, Europe’s GDPR has been in application and a ‘one-stop-shop’ mechanism baked into the regulation — to streamline the handling of cross-border cases — has meant complaints against Facebook have been funnelled through Ireland’s Data Protection Commission. The Irish DPC has yet to issue a single decision against Facebook despite receiving scores of complaints. (And it’s notable that ‘forced consent’ complaints were filed against Facebook the day the GDPR began being applied — yet they still remain undecided by Ireland.)

The GDPR’s enforcement bottleneck makes collective redress actions, such as this one in the Netherlands, a potentially important route for Europeans to get rights relief against powerful platforms which seek to shrink the risk of regulatory enforcement via forum shopping.

Although national rules — and courts’ interpretations of them — can vary. So the chance of litigation succeeding is not uniform.

In this case, the Amsterdam court allowed the suit to proceed on the grounds that the Facebook data subjects in question reside in the Netherlands.

It also took the view that a local Facebook corporate entity in the Netherlands is an establishment of Facebook Ireland, among other reasons for rejecting Facebook’s arguments.

How Facebook will seek to press a case against the substance of the Dutch privacy litigation remains to be seen. It may well have other procedural strategies up its sleeve.

The tech giant has used similar stalling tactics against far longer-running privacy litigation in Austria, for example.

In that case, brought by privacy campaigner Max Schrems and his not-for-profit noyb, Facebook has sought to claim that the GDPR’s consent requirements do not apply to its advertising business because it now includes “personalized advertising” in its T&Cs — and therefore has a ‘duty’ to provide privacy-hostile ads to users — seeking to bypass the GDPR by claiming it must process users’ data because it’s “necessary for the performance of a contract”, as noyb explains here.

A court in Vienna accepted this “GDPR consent bypass” sleight-of-hand, dealing a blow to European privacy campaigners.

But an appeal reached the Austrian Supreme Court in March — and a referral could be made to Europe’s top court.

If that happens, it would then be up to the CJEU to weigh in on whether such a massive loophole in the EU’s flagship data protection framework should really be allowed to stand. But that process could still take a year or longer.

In the short term, the result is yet more delay for Europeans trying to exercise their rights against platform giants and their in-house armies of lawyers.

In a more positive development for privacy rights, a recent ruling by the CJEU bolstered the case for data protection agencies across the EU to bring actions against tech giants if they see an urgent threat to users — and believe a lead supervisor is failing to act.

That ruling could help unblock some GDPR enforcement against the most powerful tech companies at the regulatory level, potentially easing enforcement bottlenecks such as Ireland’s.

Facebook’s EU-to-US data flows are also now facing the possibility of a suspension order in a matter of months — related to another piece of litigation brought by Schrems which hinges on the conflict between EU fundamental rights and US surveillance law.

The CJEU weighed in on that last summer with a judgement that requires regulators like Ireland to act when user data is at risk. (And Germany’s federal data protection commissioner, for instance, has warned government bodies to shut their official Facebook pages ahead of planned enforcement action at the start of next year.)

So while Facebook has been spectacularly successful at kicking Europe’s privacy rights claims down the road for well over a decade, its strategy of legal delay tactics to shield a privacy-hostile business model could finally hit a geopolitical brick wall.

The tech giant has sought to lobby against this threat to its business by suggesting it might switch off its service in Europe if the regulator follows through on the preliminary suspension order it issued last year.

But it has also publicly denied it would actually follow through and close service in Europe.

How might Facebook actually comply if ordered to cut off EU data flows? Schrems has argued it may need to federate its service and store European users’ data inside the EU in order to comply with the eponymous Schrems II CJEU ruling.
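
What such federation could look like in practice is, of course, speculative. Below is a purely illustrative sketch of the general idea of residency-based data routing; all names are hypothetical and nothing here reflects Facebook’s actual architecture.

```python
# Purely illustrative sketch of residency-based data routing, i.e., one way
# a service could be "federated" so EU users' records stay on EU
# infrastructure. All endpoints are hypothetical; this does not reflect
# Facebook's actual systems.

REGION_STORES = {
    "EU": "https://eu-store.example.com",  # hypothetical EU-resident cluster
    "US": "https://us-store.example.com",  # hypothetical US cluster
}

def store_for(user_residency: str) -> str:
    # Pin EU residents' data to EU infrastructure; route everyone else to US.
    return REGION_STORES["EU"] if user_residency == "EU" else REGION_STORES["US"]

# EU profile writes would only ever target the EU cluster:
assert store_for("EU") == "https://eu-store.example.com"
assert store_for("US") == "https://us-store.example.com"
```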

That said, Facebook has certainly shown itself adept at exploiting the gaps between Europeans’ on-paper rights, national case law and the various EU and Member State institutions involved in oversight and enforcement as a tactic to defend its commercial priorities — playing the various players off against each other and pushing agendas to further its business interests. So whether any single piece of EU privacy litigation will prove to be the silver bullet that forces a reboot of its privacy-hostile business model very much remains to be seen.

A perhaps more likely scenario is that each of these cases further erodes user trust in Facebook’s services — reducing people’s appetite to use its apps and expanding opportunities for rights-respecting competitors to poach custom by offering something better. 

 

#amsterdam, #austria, #data-protection, #data-protection-commission, #digital-rights, #europe, #european-union, #facebook, #general-data-protection-regulation, #germany, #human-rights, #ireland, #lawsuit, #max-schrems, #netherlands, #noyb, #privacy, #surveillance-law, #vienna

German government bodies urged to remove their Facebook Pages before next year

Germany’s federal information commissioner has run out of patience with Facebook.

Last month, Ulrich Kelber wrote to government agencies “strongly recommend[ing]” that they close down their official Facebook Pages because of ongoing data protection compliance problems and the tech giant’s failure to fix the issue.

In the letter, Kelber warns the government bodies that he intends to start taking enforcement action from January 2022 — essentially giving them a deadline of next year to pull their pages from Facebook.

So expect the official Facebook Pages of German government bodies to start disappearing in the coming months.

While Kelber’s own agency, the BfDi, does not appear to have a Facebook Page (although Facebook’s algorithms appear to generate this artificial stub if you try searching for one) plenty of other German federal bodies do — such as the Ministry of Health, whose public page has more than 760,000 followers.

The only alternative to such pages vanishing from Facebook’s platform by Christmas — or else being ordered to be taken down early next year by Kelber — seems to be for the tech giant to make more substantial changes to how its platform operates than it has offered so far, allowing the Pages to be run in Germany in a way that complies with EU law.

However Facebook has a long history of ignoring privacy expectations and data protection laws.

It has also, very recently, shown itself more than willing to reduce the quality of information available to users — if doing so furthers its business interests (such as to lobby against a media code law, as users in Australia can attest).

So it looks rather more likely that German government agencies will be the ones having to quietly bow out of the platform soon…

Kelber says he’s avoided taking action over the ministries’ Facebook Pages until now on account of the public bodies arguing that their Facebook Pages are an important way for them to reach citizens.

However his letter points out that government bodies must be “role models” in matters of legal compliance — and therefore have “a particular duty” to comply with data protection law. (The EDPS is taking a similar tack by reviewing EU institutions’ use of US cloud services giants.)

Per his assessment, an “addendum” provided by Facebook in 2019 does not rectify the compliance problem and he concludes that Facebook has made no changes to its data processing operations to enable Page operators to comply with requirements set out in the EU’s General Data Protection Regulation.

A ruling by Europe’s top court, back in June 2018, is especially relevant here — as it held that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of the data of visitors to the page.

That means that the operators of such pages also face data protection compliance obligations, and cannot simply assume that Facebook’s T&Cs provide them with legal cover for the data processing the tech giant undertakes.

The problem, in a nutshell, is that Facebook does not provide Page operators with enough information or assurances about how it processes users’ data — meaning they’re unable to comply with GDPR principles of accountability and transparency because, for example, they’re unable to adequately inform followers of their Facebook Page what is being done with their data.

There is also no way for Facebook Page operators to switch off (or otherwise block) Facebook’s wider processing of their Page followers — even if they don’t make use of any of the analytics features Facebook provides to Page operators.

The processing still happens.

This is because Facebook operates a take-it-or-leave-it ‘data maximizing’ model — to feed its ad-targeting engines.

But it’s an approach that could backfire if it ends up permanently reducing the quality of the information available on its network because key services migrate off its platform en masse — if, say, every government agency in the EU deleted its Facebook Page.

A related blog post on the BfDi’s website also holds out the hope that “data protection-compliant social networks” might develop in the Facebook compliance vacuum.

Certainly there could be a competitive opportunity for alternative platforms that seek to sell services based on respecting users’ rights.

The German Federal Ministry of Health’s verified Facebook Page (Screengrab: TechCrunch/Natasha Lomas)

Discussing the BfDi’s intervention, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, told TechCrunch: “This development is strictly connected to recent CJEU case law on joint controllership. In particular, it takes into account the Wirtschaftsakademie ruling, which found that the administrator of a Facebook page should be considered a joint controller with Facebook in respect of processing the personal data of the visitors of the page.

“This does not mean that the page administrator and Facebook share equal responsibility for all stages of the data processing activities linked to the use of the Facebook page. However, they must have an agreement in place with a clear allocation of roles and responsibilities. According to the German Federal Commissioner for Data Protection and Freedom of Information, Facebook’s current data protection ‘Addendum’ would not seem to be sufficient to meet the latter requirement.”

“It is worth noting that, in its Fashion ID ruling, the CJEU has taken the view that the GDPR’s obligations for joint controllers are commensurate with those data processing stages in which they actually exercise control,” Tosoni added. “This means that the data protection obligations [of] a Facebook page administrator would normally tend to be quite limited.”

Warnings for other social media services

This particular compliance issue affects Facebook in Germany — and potentially any other EU market. But other social media services may face similar problems too.

For example, Kelber’s letter flags an ongoing audit of Instagram, TikTok and Clubhouse — warning of “deficits” in the level of data protection they offer too.

He goes on to recommend that agencies avoid using the three apps on business devices.  

In an earlier, 2019 assessment of government bodies’ use of social media services, the BfDi suggested usage of Twitter could — by contrast — be compliant with data protection rules, at least if privacy settings were fully enabled and analytics disabled, for example.

At the time the BfDi also warned that Facebook-owned Instagram faced similar compliance problems to Facebook, being subject to the same “abusive” approach to consent he said was taken by the whole group.

Reached for comment on Kelber’s latest recommendations to government agencies, Facebook did not engage with our specific questions — sending us this generic statement instead:

“At the end of 2019, we updated the Page Insights addendum and clarified the responsibilities of Facebook and Page administrators, for which we took questions regarding transparency of data processing into account. It is important to us that also federal agencies can use Facebook Pages to communicate with people on our platform in a privacy-compliant manner.”

An additional complication for Facebook has arisen in the wake of the legal uncertainty following last summer’s Schrems II ruling by the CJEU.

Europe’s top court invalidated the EU-US Privacy Shield arrangement, which had allowed companies to self-certify an adequate level of data protection, removing the easiest route for transferring EU users’ personal data over to the US. And while the court did not outlaw international transfers of EU users’ personal data altogether, it made it clear that data protection agencies must intervene and suspend data flows if they suspect information is being moved to a place, and in such a way, that it’s put at risk.

Following Schrems II, transfers to the US are clearly problematic where the data is being processed by a US company that’s subject to FISA 702, as is the case with Facebook.

Indeed, Facebook’s EU-to-US data transfers were the original target of the complaint in the Schrems II case (brought by the eponymous Max Schrems). And a decision remains pending on whether the tech giant’s lead EU data supervisor will follow through on a preliminary order it issued last year that Facebook should suspend its EU data flows — due in the coming months.

Even ahead of that long-anticipated reckoning in Ireland, other EU DPAs are now stepping in to take action — and Kelber’s letter references the Schrems II ruling as another issue of concern.

Tosoni agrees that GDPR enforcement is finally stepping up a gear. But he also suggested that compliance with the Schrems II ruling comes with plenty of nuance, given that each data flow must be assessed on a case-by-case basis — with a range of supplementary measures that controllers may be able to apply.

“This development also shows that European data protection authorities are getting serious about enforcing the GDPR data transfer requirements as interpreted by the CJEU in Schrems II, as the German Federal Commissioner for Data Protection and Freedom flagged this as another pain point,” he said.

“However, the German Federal Commissioner sent out his letter on the use of Facebook pages a few days before the EDPB adopted the final version [of] its recommendations on supplementary measures for international data transfers following the CJEU Schrems II ruling. Therefore, it remains to be seen how German data protection authorities will take these new recommendations into account in the context of their future assessment of the GDPR compliance of the use of Facebook pages by German public authorities.

“Such recommendations do not establish a blanket ban on data transfers to the US but impose the adoption of stringent safeguards, which will need to be followed to keep on transferring the data of German visitors of Facebook pages to the US.”

Another recent judgment by the CJEU reaffirmed that EU data protection agencies can, in certain circumstances, take action when they are not the lead data supervisor for a specific company under the GDPR’s one-stop-shop mechanism — expanding the possibility for litigation by watchdogs in Member States if a local agency believes there’s an urgent need to act.

Although, in the case of the German government bodies’ use of Facebook Pages, the earlier CJEU ruling on joint controllership means the BfDi already has clear jurisdiction to target these agencies’ Facebook Pages itself.

 

#advertising-tech, #australia, #cjeu, #data-processing, #data-protection, #data-security, #digital-rights, #eu-us-privacy-shield, #europe, #european-union, #facebook, #facebook-pages, #general-data-protection-regulation, #germany, #instagram, #ireland, #law, #max-schrems, #policy, #privacy, #twitter, #united-states

Perspectives on tackling Big Tech’s market power

The need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power was the impetus for a couple of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR), which were livestreamed yesterday but are available to view on-demand here.

The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when fresh blood has just been injected to chair the FTC so regulatory change is very much in the air (at least around tech antitrust).

CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.

That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out interesting new competition (which can’t get the same access to people’s data so operates at a baked-in disadvantage).

A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.

The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).

The UK’s CMA now sits outside the EU, of course — giving the national authority a higher profile role in global merger and acquisition decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).

The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.

Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.

Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions against large digital platforms with major competitive significance — with open procedures now against Amazon, Facebook and Google.

The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long-running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single, high-dimension per-user profile — is now headed to Europe’s top court (so likely has more years to run).

But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)

“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case and in a certain sense this entwinement of competition law and data protection law is written into the theory of harm in the German competition law.

“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”

“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.

“So Facebook has always been a big case — it might get even bigger in a certain sense.”

France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.

Including over a competition complaint against Apple’s pro-user privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there, too, of respective oversight bodies seeking to bridge legal silos in order to crack the code of how to effectively regulate tech giants — whose market power, panellists agreed, is predicated on earlier failures of competition law enforcement that allowed platforms to buy up rivals and sew up access to user data, entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.

The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).

Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.

The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest control back from tech giants to ensure that consumers and competitors alike aren’t left trampled into the dust by data-mining giants.

Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.

“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.

“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”

During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.

“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”

“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.

“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”

Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”

Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine it hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, the chief deputy AG of the New York Attorney General, which is leading a major states antitrust case against Facebook.

Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.

She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition where companies have, historically, been given — at best — what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure. And be prepared to litigate more.

As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.

“That is what is most galling to me — that even where we take action, in our best faith, good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking aren’t working. We need different action to keep us from having the same conversation again and again.”

Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.

“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.

“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.

“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement but I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”

As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.

So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling round the edges of digital market failure, and it’s break-ups of US tech giants that are really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)

The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.

The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to wave through another Google digital merger (its acquisition of fitness wearable Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.

Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).

“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.

“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.

“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”

The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenges of policing “very complicated” situations involving conflicts of interests is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.

(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had Brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)

Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”

“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it, when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO, we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.

“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.

Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.

It’s true that the Commission is not sitting on its hands in the face of tech giant market power.

At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)

Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.

“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”

“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”

One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.

#advertising-tech, #amazon, #andreas-mundt, #competition-and-markets-authority, #competition-law, #congress, #data-processing, #data-protection, #data-protection-law, #data-security, #digital-markets-act, #digital-rights, #doubleclick, #elizabeth-denham, #europe, #european-commission, #european-court-of-justice, #european-union, #facebook, #federal-trade-commission, #financial-services, #fitbit, #france, #general-data-protection-regulation, #germany, #human-rights, #margrethe-vestager, #policy, #privacy, #uk-government, #united-kingdom, #united-states, #yale-university

Facebook loses last ditch attempt to derail DPC decision on its EU-US data flows

Facebook has failed in its bid to prevent its lead EU data protection regulator from pushing ahead with a decision on whether to order suspension of its EU-US data flows.

The Irish High Court has just issued a ruling dismissing the company’s challenge to the Irish Data Protection Commission’s (DPC) procedures.

The case has huge potential operational significance for Facebook which may be forced to store European users’ data locally if it’s ordered to stop taking their information to the U.S. for processing.

Last September the Irish data watchdog made a preliminary order warning Facebook it may have to suspend EU-US data flows. Facebook responded by filing for a judicial review and obtaining a stay on the DPC’s procedure. That stay is now set to be lifted.

We understand the involved parties have been given a few days to read the High Court judgement ahead of another hearing on Thursday — when the court is expected to formally lift Facebook’s stay on the DPC’s investigation (and settle the matter of case costs).

The DPC declined to comment on today’s ruling in any detail — or on the timeline for making a decision on Facebook’s EU-US data flows — but deputy commissioner Graham Doyle told us it “welcomes today’s judgment”.

Its preliminary suspension order last fall followed a landmark judgement by Europe’s top court in the summer — when the CJEU struck down a flagship transatlantic agreement on data flows, on the grounds that US mass surveillance is incompatible with the EU’s data protection regime.

The fall-out from the CJEU’s invalidation of Privacy Shield (as well as an earlier ruling striking down its predecessor Safe Harbor) has been ongoing for years — as companies that rely on shifting EU users’ data to the US for processing have had to scramble to find valid legal alternatives.

While the CJEU did not outright ban data transfers out of the EU, it made it crystal clear that data protection agencies must step in and suspend international data flows if they suspect EU data is at risk. And EU to US data flows were signalled as at clear risk given the court simultaneously struck down Privacy Shield.

The problem for some businesses is that there may simply not be a valid legal alternative. And that’s where things look particularly sticky for Facebook, since its service falls under NSA surveillance via Section 702 of the FISA (which is used to authorize mass surveillance programs like Prism).

So what happens now for Facebook, following the Irish High Court ruling?

As ever in this complex legal saga — which has been going on in various forms since an original 2013 complaint made by European privacy campaigner Max Schrems — there’s still some track left to run.

After this unblocking the DPC will have two enquiries in train: both the original one, related to Schrems’ complaint, and an own-volition enquiry it decided to open last year — when it said it was pausing investigation of Schrems’ original complaint.

Schrems, via his privacy not-for-profit noyb, filed for his own judicial review of the DPC’s proceedings. And the DPC quickly agreed to settle — agreeing in January that it would ‘swiftly’ finalize Schrems’ original complaint. So things were already moving.

The tl;dr of all that is this: The last of the bungs that have been used to delay regulatory action in Ireland over Facebook’s EU-US data flows is finally being extracted — and the DPC must decide on the complaint.

Or, to put it another way, the clock is ticking for Facebook’s EU-US data flows. So expect another wordy blog post from Nick Clegg very soon.

Schrems previously told TechCrunch he expects the DPC to issue a suspension order against Facebook within months — perhaps as soon as this summer (and failing that by fall).

In a statement reacting to the Court ruling today he reiterated that position, saying: “After eight years, the DPC is now required to stop Facebook’s EU-US data transfers, likely before summer. Now we simply have two procedures instead of one.”

When Ireland (finally) decides, it won’t mark the end of the regulatory procedures, though.

A decision by the DPC on Facebook’s transfers would need to go to the other EU DPAs for review — and if there’s disagreement there (as seems highly likely, given what’s happened with draft DPC GDPR decisions) it will trigger a further delay (weeks to months) as the European Data Protection Board seeks consensus.

If a majority of EU DPAs can’t agree the Board may itself have to cast a deciding vote. So that could extend the timeline around any suspension order. But an end to the process is, at long last, in sight.

And, well, if a critical mass of domestic pressure is ever going to build for pro-privacy reform of U.S. surveillance laws, now looks like a really good time…

“We now expect the DPC to issue a decision to stop Facebook’s data transfers before summer,” added Schrems. “This would require Facebook to store most data from Europe locally, to ensure that Facebook USA does not have access to European data. The other option would be for the US to change its surveillance laws.”

Facebook has been contacted for comment on the Irish High Court ruling.

Update: The company has now sent us this statement:

“Today’s ruling was about the process the IDPC followed. The larger issue of how data can move around the world remains of significant importance to thousands of European and American businesses that connect customers, friends, family and employees across the Atlantic. Like other companies, we have followed European rules and rely on Standard Contractual Clauses, and appropriate data safeguards, to provide a global service and connect people, businesses and charities. We look forward to defending our compliance to the IDPC, as their preliminary decision could be damaging not only to Facebook, but also to users and other businesses.”

#data-protection, #data-security, #digital-rights, #dpc, #eu-us-privacy-shield, #europe, #european-data-protection-board, #european-union, #facebook, #human-rights, #ireland, #lawsuit, #max-schrems, #nick-clegg, #noyb, #policy, #privacy, #safe-harbor, #united-states

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
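
For anyone who wants to script that check rather than use the website, here is a minimal sketch against the haveibeenpwned v3 API, which requires an API key for account searches; the key value, user-agent string and email address below are placeholders.

```python
# Minimal sketch: check an email address against the haveibeenpwned.com v3
# API. An API key is required for account searches; the values below are
# placeholders, and error handling is deliberately simple.
import requests

HIBP_API_KEY = "your-api-key"  # placeholder

def breaches_for(email: str) -> list:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "breach-check-script"},
    )
    if resp.status_code == 404:
        # 404 means the address was not found in any indexed breach.
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]

print(breaches_for("user@example.com") or "No breaches found")
```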

Information leaked via the breach includes Facebook IDs, location, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU-US data transfers. However, that complaint long predates the GDPR, and Facebook immediately filed to block the order via the courts. A resolution is expected later this year, after the litigant filed his own judicial review of the DPC’s processes.)

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered — and claims it fixed by September 2019 — which has now led to the leak of 533M accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However even if Facebook ends up with a more substantial GDPR penalty for this breach the watchdog’s caseload backlog and plodding procedural pace makes it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in, in Europe, and take a punt on suing for data-related compensation damages — with a number of other mass actions announced last year.

In the case of DRI, its focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE that it believes compensation claims which force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

#data-protection, #data-protection-commission, #data-security, #digital-rights, #digital-rights-ireland, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #ireland, #lawsuit, #litigation, #personal-data, #privacy, #social, #social-media, #tc, #twitter

How startups can ensure CCPA and GDPR compliance in 2021

Data is the most valuable asset for any business in 2021. If your business is online and collecting customer personal information, your business is dealing in data, which means data privacy compliance regulations will apply to everyone — no matter the company’s size.

Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.

For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of annual global revenue, whichever is greater. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed during a data breach.

If the data of 1,000 customers is compromised in a cybersecurity incident, that would add up to $7.5 million. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business costs.
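
To make that math concrete, here is a back-of-the-envelope sketch of the exposure figures cited above; the statutory numbers come from the two laws as described, while the revenue and record counts are hypothetical inputs.

```python
# Back-of-the-envelope sketch of the regulatory exposure described above.
# Statutory figures per the GDPR and CCPA; revenue and record counts are
# hypothetical illustration inputs.

def gdpr_max_fine(annual_global_revenue_eur: float) -> float:
    # GDPR: up to EUR 20M or 4% of annual global revenue, whichever is greater.
    return max(20_000_000, 0.04 * annual_global_revenue_eur)

def ccpa_exposure(records: int, per_record_usd: float = 7_500) -> float:
    # CCPA: $2,500 to $7,500 per person whose data is exposed in a breach;
    # this uses the top of that range.
    return records * per_record_usd

print(f"{gdpr_max_fine(100_000_000):,.0f}")  # EUR 20,000,000 (the 20M floor dominates)
print(f"{ccpa_exposure(1_000):,.0f}")        # USD 7,500,000 (the article's example)
```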

It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.

Challenges of data compliance for startups

Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes. For example, Vodafone Spain was recently fined $9.72 million under the GDPR for data protection failures, and enforcement trackers show schools, associations, municipalities, homeowners associations and more are also receiving fines.

GDPR regulators have issued $332.4 million in fines since the law was enacted almost two years ago and are being more aggressive with enforcement. While California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information of residents in California, a major hub of U.S. startups.

That is why in this age, data privacy compliance is key to a successful business. Unfortunately, many startups are at a disadvantage for many reasons, including:

Identiq, a privacy-friendly fraud prevention startup, secures $47M at Series A

Israeli fraud prevention startup Identiq has raised $47 million at Series A as the company eyes international growth, driven in large part by the spike in online spending during the pandemic.

The round was led by Insight Partners and Entrée Capital, with participation from Amdocs, Sony Innovation Fund by IGV, as well as existing investors Vertex Ventures Israel, Oryzn Capital, and Slow Ventures.

Fraud prevention is big business — one slated to be worth $145 billion by 2026, ballooning eightfold in size compared to 2018. But it’s a data-hungry industry, fraught with security and privacy risks, since it has traditionally relied on sharing enormous sets of consumer data in order to learn who legitimate customers are and weed out the fraudsters.

Identiq takes a different, more privacy-friendly approach to fraud prevention — one that doesn’t require sharing a customer’s data with a third party.

“Before now, the only way companies could solve this problem was by exposing the data they were given by the user to a third party data provider for validation, creating huge privacy problems,” Identiq’s chief executive Itay Levy told TechCrunch. “We solved this by allowing these companies to validate that the data they’ve been given matches the data of other companies that already know and trust the user, without sharing any sensitive information at all.”

When an Identiq customer — such as an online store — sees a new customer for the first time, the store can ask other stores in Identiq’s network if they know or trust that new customer. This peer-to-peer network uses cryptography to help online stores anonymously vet new customers to help weed out bad actors, like fraudsters and scammers, without needing to collect private user data.
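
Identiq hasn’t published its protocol details, but the general idea of validating data without disclosing it can be illustrated with a well-known building block: comparing keyed hashes of an identifier rather than the identifier itself. The toy sketch below assumes the two stores share a per-query key; real systems rely on far more sophisticated cryptography, such as private set intersection.

```python
# Toy illustration of matching an identifier across two parties without
# exchanging the raw value: both sides apply the same keyed hash (HMAC)
# and compare digests. This is NOT Identiq's actual protocol; it only
# sketches the general "validate without disclosing" idea.
import hashlib
import hmac
import secrets

def blind(identifier: str, session_key: bytes) -> str:
    # Normalize, then hash under the shared key; only digests are exchanged.
    normalized = identifier.strip().lower().encode()
    return hmac.new(session_key, normalized, hashlib.sha256).hexdigest()

# Hypothetical per-query key the two stores agree on (e.g., via key exchange).
session_key = secrets.token_bytes(32)

asking_store = blind("New.Customer@example.com", session_key)
peer_store = blind("new.customer@example.com", session_key)

# A match tells the asking store its peer already knows this customer,
# without either side revealing the raw email address on the wire.
print(asking_store == peer_store)  # True
```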

So far, the company says it already counts Fortune 500 companies as customers.

Identiq said it plans to use the $47 million raise to hire and grow the company’s workforce, and aims to scale up its support for its international customers.

#articles, #cryptography, #customer-data, #digital-rights, #entree-capital, #human-rights, #identity-management, #insight-partners, #marketing, #online-shopping, #online-stores, #peer-to-peer, #privacy, #security, #slow-ventures, #sony, #sony-innovation-fund, #startups, #terms-of-service, #vertex-ventures

Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping photos from social media sites, also without permission, prompting Facebook, LinkedIn and Twitter to send cease and desist letters demanding it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

That argument is also being tested in court, where the company faces a class action suit citing Illinois’ biometric protection law — the same law that last year dinged Facebook to the tune of $550 million.

The Canadian privacy watchdog rejected Clearview’s arguments, and said it would “pursue other actions” if the company does not follow its recommendations, which included stopping the collection of Canadians’ data and deleting all previously collected images. Clearview said in July that it had stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service were using the startup’s technology.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.

#articles, #canada, #clearview-ai, #digital-rights, #facebook, #facial-recognition, #facial-recognition-software, #human-rights, #illinois, #law-enforcement, #mass-surveillance, #new-york, #privacy, #security, #social-issues, #spokesperson, #terms-of-service

Inadequate federal privacy regulations leave US startups lagging behind Europe

“A new law to follow” seems unlikely to have featured on many business wishlists this holiday season, particularly if that law concerned data privacy. Digital privacy management is an area that takes considerable resources to whip into shape, and most SMBs just aren’t equipped for it.

But for 2021, I believe startups in the United States should be demanding that legislators deliver a federal privacy law. Yes, they should demand to be regulated.

For every day that goes by without agreed-upon federal standards for data, these companies lose competitive edge to the rest of the world. Soon there may be no coming back.

Businesses should not view privacy and trust infrastructure requirements as burdensome. They should view them as keys that can unlock the full power of the data they possess. They should stop thinking about privacy as compliance and begin thinking of it as a harmonization of the customer relationship. The rewards flowing to each party from such harmonization are bountiful. The U.S. federal government is in a unique position to help realize those rewards.

To understand what I mean, cast your eyes to Europe, where it’s become clear that the GDPR was nowhere near the final destination of EU data policy. Indeed it was just the launchpad. Europe’s data regime can frustrate (endless cookie banners anyone?), but it has set an agreed-upon standard of protection for citizens and elevated their trust in internet infrastructure.

For example, a Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. With a baseline standard established — seatbelts in every car — Europe is now squarely focused on raising the speed limit.

EU lawmakers recently unveiled plans for “A Europe fit for the Digital Age.” In the words of Internal Market Commissioner Thierry Breton, it’s a plan to make Europe “the most data-empowered continent in the world.”

Here are some pillars of the plan. While reading, imagine that you are a U.S.-based health tech startup, and imagine the disadvantage you would face against a similar European company if these initiatives came to fruition:

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing.
  • A push to make public-sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation.
  • Support for cloud infrastructure, platforms and systems to support the data reuse goals, with investments in European high-impact projects on European data spaces and trustworthy, energy-efficient cloud infrastructures.
  • Sector-specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the European Green Deal, mobility or health.

There are many ways governments can help businesses maximize their data leverage while improving society. But the American public currently has no appetite for that. They don’t trust the internet.

They want to see Mark Zuckerberg and Jeff Bezos sweating it out under Senate Committee questioning. Until we trust our leaders to protect basic online rights, widespread data empowerment initiatives will not be politically viable.

In Europe, the equation is totally different. GDPR was the foundation of a European data strategy, not the capstone.

While the EU powers forward, America’s ability to enact federal privacy reform is stymied by two quintessentially American privacy sticking points:

  • Can I personally sue a business that violates my privacy rights?
  • Can individual states build additional privacy protections on top of a federal law, or will it act as a nationwide “ceiling”?

These are important questions that must be answered as a function of our country’s unique cultural and political history. But currently they’re the roadblocks that stall American industry while the EU, seatbelts secure, begins speeding down the data autobahn.

If you want a visceral example of how this gap is already impacting American businesses, look no further than the fallout of the ECJ’s Schrems II decision in the middle of last summer. Europe’s highest court invalidated a key agreement used to transfer EU data back to the U.S., essentially because there’s no federal law to ensure EU citizens’ data would be protected once it lands in America.

The legal wrangling continues, but the impact of this decision was so considerable that Facebook genuinely threatened to stop operating in Europe if the Schrems II ruling was enforced.

While issues generated for smaller businesses don’t grab as many headlines, rest assured that on the front lines of this issue I’ve seen many SMBs’ data operations thrown into total chaos. In other words, the geopolitical battle for a data-driven business edge is already well underway. We are losing.

To sum it up, the United States increasingly finds itself in a position that’s unprecedented since the dawn of the internet era: laggard. American tech companies still innovate at a fantastic rate, but America’s inability to marshal private sector practices to reflect evolving public sentiment threatens to become a yoke around the economy’s neck.

America’s catastrophic response to the COVID-19 pandemic fell far short of other nations’ efforts. Our handling of data privacy protection costs far less in human terms, but it grows astronomically more expensive in dollar terms with every passing day.

The technology exists to treat users respectfully in a cost-effective manner. The public will is there. The business will is there. The legislative capability is there.

That’s why I believe America’s startup community should demand federal lawmakers follow the recent example of Europe, India, New Zealand, Brazil, South Africa and Canada. They need to introduce federally guaranteed modern data privacy protections as soon as possible.

#column, #data-security, #digital-rights, #europe, #general-data-protection-regulation, #opinion, #policy, #privacy, #startups

Understanding Europe’s big push to rewrite the digital rulebook

European Union lawmakers have set out the biggest update of digital regulations for around two decades — likening it to the introduction of traffic lights to roads to bring order to the chaos wrought by increased mobility. Just switch cars for packets of data.

The proposals for a Digital Services Act (DSA) to standardize safety rules for online business, and a Digital Markets Act (DMA), which will put limits on tech giants aimed at boosting competition in the digital markets they dominate, are intended to shape the future of online business for the next two decades — both in Europe and beyond.

The bloc is far ahead of the U.S. on internet regulation. So while the tech giants of today are (mostly) made in the USA, rules that determine how they can and can’t operate in the future are being shaped in Brussels.

The latter part of this year has seen Ursula von der Leyen’s European Commission, which took up its five-year mandate last December, unleash a flotilla of digital proposals — and tease more coming in 2021. The Commission has proposed a Data Governance Act to encourage reuse of industrial (and other) data, with a further data regulation and a proposal on political ads transparency slated for next year. European-flavored guardrails for the use of AI will also be presented next year.

But it’s the DSA and DMA that are core to understanding how the EU executive body hopes to reshape internet business practices to increase accountability and fairness — and in so doing promote the region’s interests for years to come.

These are themes being seen elsewhere in the world at a national level. The U.K., for example, plans to introduce an “Online Safety Bill” next year in response to public concern about the societal impacts of big tech, while rising interest in tech antitrust has led to Google and Facebook facing charges of abusive business practices on home turf.

What will come faster, a U.S. breakup of a tech empire or effective enforcement of EU rules on internet gatekeepers is an interesting question to ponder. Both are now live possibilities — so entrepreneurs can dare to dream of a different, freer and fairer digital playground. One that’s not ruled over by a handful of abusive giants. Though we’re certainly not there yet.

With the DSA and DMA the EU is proposing an e-commerce and digital markets framework that, once adopted, will apply across its 27 Member States — and to the ~445 million people who live there — both exerting a sizable regional pull and seeking to punch up and out at global internet giants.

While there are many challenges ahead to turn the planned framework into pan-EU law, it looks a savvy move by the Commission to separate the DSA and DMA — making it harder for big tech to co-opt the wider industry to lobby against measures that will only affect them in the 160+ pages of proposed legislation now on the table.

It’s also notable that the DSA contains a sliding scale of requirements, with audits, risk assessments and the deepest algorithmic accountability provisions reserved for larger players.

Tech sovereignty — by scaling up Europe’s tech capacity and businesses — is a strategic priority for the Commission. And rule-setting is a key part of how it intends to get there — building on data protection rules that have already been updated, with the GDPR being applied from 2018.

Though what the two new major policy packages will mean for tech companies, startup-sized or market-dominating, won’t be clear for months — or even years. The DSA and DMA have to go through the EU’s typically bruising co-legislative process, looping in representatives of Member States’ governments and directly elected MEPs in the European parliament (which often come to the process with different policy priorities and agendas).

The draft presented this month is thus a starting point. Plenty could shift — or even change radically — through the coming debates and amendments. Which means the lobbying starts in earnest now. The coming months will be crucial to determining who will be the future winners and losers under the new regime so startups will need to work hard to make their voices heard.

While tech giants have been pouring increasing amounts of money into Brussels “whispering” for years, the EU is keen to champion homegrown tech — and most of big tech isn’t that.

A fight is almost certainly brewing to influence the world’s most ambitious digital rulebook — including in key areas like the surveillance-based adtech business models that currently dominate the web (to the detriment of individual rights and pro-privacy innovation). So for those dreaming of a better web there’s plenty to play for.

Early responses to the DSA and DMA show the two warring sides, with U.S.-based tech lobbies blasting the plan to expand internet regulation as “anti-innovation” (and anti-U.S.), while EU rights groups are making positive noises over the draft — albeit, with an ambition to go further and ensure stronger protections for web users.

On the startup side, there’s early relief that key tenets of the EU’s existing e-commerce framework look set to remain untouched, mingled with concern that plans to rein in tech giants may have knock-on impacts — such as on startup exits (and valuations). European founders, whose ability to scale is being directly throttled by big tech’s market muscle, have other reasons to be cheerful about the direction of policy travel.

In short, major shifts are coming and businesses and entrepreneurs would do well to prepare for changing requirements — and to seize new opportunities.

Read on for a breakdown of the key aims and requirements of the DSA and the DMA, and additional discussion on how the policy plan could shape the future of the startup business.

Digital Services Act

The DSA aims to standardize rules for digital services that act as intermediaries by connecting consumers to goods, services and content. It will apply to various types of digital services, including network infrastructure providers (like ISPs); hosting services (like cloud storage providers); and online platforms (like social media and marketplaces) — applying to all that offer services in the EU, regardless of where they’re based.

The existing EU e-Commerce Directive was adopted in the year 2000 so revisiting it to see if core principles are still fit for purpose is important. And the Commission has essentially decided that they are. But it also wants to improve consumer protections and dial up transparency and accountability on services businesses by setting new due diligence obligations — responding to a smorgasbord of concerns around the impact of what’s now being hawked and monetized online (whether hateful content or dangerous/illegal products).

Some EU Member States have also been drafting their own laws (in areas like hate speech) that threaten regulatory fragmentation of the bloc’s single market, giving lawmakers added impetus to come forward with harmonized pan-EU rules (hence the DSA being a regulation, not a directive).

The package will introduce obligations aimed at setting rules for how internet businesses respond to illegal stuff (content, services, goods and so on) — including standardized notice and response procedures for swiftly tackling illegal content (an area that’s been managed by a voluntary EU code of conduct on illegal hate speech until now); and a “Know Your Customer” principle for online marketplaces (already a familiar feature in more heavily regulated sectors like fintech) that’s aimed at making it harder for sellers of illegal products to simply respawn within a marketplace under a new name.

There’s also a big push around transparency obligations — with requirements in the proposal for platforms to disclose the “meaningful” criteria used to target ads (Article 24) and to explain the “main parameters” of recommender algorithms (Article 29), as well as requirements to foreground user controls (including at least one “nonprofiling” option).

Here the overarching aim is to increase accountability by ensuring European users can get the information needed to be able to exercise their rights.

#digital-markets-act, #digital-rights, #digital-services-act, #eu, #europe, #platform-regulation, #policy, #tc

Privacy is the new competitive battleground

In November, Californians voted to pass Proposition 24, a ballot measure that imposes new regulations on the collection of data by businesses. As part of the California Privacy Rights Act (CPRA), individuals will now have the right to opt out of the sharing and sale of their personal information, while companies must “reasonably” minimize data collection to protect user privacy.

For companies like Apple, Facebook, Uber and Google, all of which are headquartered in California, these new requirements may seem like a limitation on their existing data collection capabilities.

Looking more closely, it’s a nuanced story: By not only meeting the demands of these new regulations but exceeding them, companies have an opportunity to differentiate themselves from competitors to grow their bottom line, thanks to new technologies that put data privacy in the hands of consumers.

Take Apple, the world’s most valuable tech company, as an example. When Google and Facebook — two of Apple’s largest competitors — were under fire for exploiting customer data, CEO Tim Cook saw an opportunity to turn privacy into a competitive advantage.

The tech giant rolled out a suite of new privacy-maximizing features, including a new Sign In With Apple feature that allows users to securely log in to apps without sharing personal information with the apps’ developers. More recently, the company updated its privacy page to better showcase how its flagship apps are designed with privacy in mind.

This doubling down on privacy took center stage in the company’s marketing campaigns, too, with “Privacy Matters” becoming the central message of its prime-time ad spots and its 10,000+ billboards around the world.

And of course, the company could hardly resist taking the occasional jab at its data-hungry competitors:

“The truth is, we could make a ton of money if we monetized our customer — if our customer was our product,” said Cook in an interview with MSNBC. “We’ve elected not to do that.”

Apple’s commitment to privacy not only puts them in a stronger position to comply with new CPRA regulations. It also sends a strong message to an industry that has profited off of customer data, and an even stronger message to consumers: It’s time to respect personal data.

The growing demand for privacy

The prioritization of consumer data privacy comes out of a need to address growing consumer concerns, which have consistently made headlines in recent years. Attention-grabbing stories such as the Cambridge Analytica data privacy scandal, as well as major breaches at companies such as Equifax, have left consumers wondering whom they can trust and how they can protect themselves. And the research is pretty conclusive — consumers want more out of their businesses and governments:

  • Only 52% of consumers feel like they can trust businesses, and only 41% worldwide trust their governments (Edelman).
  • 85% of consumers believe businesses should be doing more to actively protect their data (IBM).
  • 61% of consumers say their fears of having personal data compromised have increased in the last two years (Salesforce).

It’s hard to say exactly how this trust crisis will manifest in the global economy, but we’ve already seen several large boycotts, like the #DeleteFacebook movement, and a staggering 75% of consumers who say they won’t purchase from a company they don’t trust with their data.

And it’s not just Big Tech. From loyalty programs and inventory planning to smart cities and election advertising, it’s hard to overestimate the appetite — and effect — of using data to optimize processes and drive behavioral change.

As we look toward a new data-driven decade, however, we’re starting to realize the cost of this big data arms race: Consumers have lost trust in both the private and public sectors.

Private sector initiatives like Apple’s strengthened commitment to privacy, alongside public policy legislation like the CPRA, have the potential to not only build back consumer trust but to go even further beyond the minimum requirements. Thanks to new technologies like self-sovereign identity, companies can transform their data privacy policies, while cutting costs, reducing fraud and improving customer experiences.

The value of SSI

Self-sovereign identity (or SSI) leverages a thin layer of distributed ledger technology and a dose of very advanced cryptography to enable companies to prove the identities of their customers, without putting privacy at risk.

At its simplest, SSI is a way of giving consumers more control over their personal information. It offers a way for consumers to digitally store and manage personal information in the form of verifiable credentials, which are issued and signed by a trusted authority (like a government, bank or university) in a way that can never be altered, embellished or manipulated. Consumers can then share this information when, where and with whom they wish as a way of proving things about themselves.
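
To make that flow concrete, here is a minimal Python sketch of the issue, hold and verify cycle, using Ed25519 signatures from the third-party "cryptography" package. The function names and attribute fields are illustrative assumptions, not any particular SSI vendor's API:

    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer (say, a government registry) holds a signing key and
    # publishes the matching public key for anyone to verify against.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_credential(attributes):
        # Serialize deterministically and sign; any later tampering with
        # the attributes will invalidate the signature.
        payload = json.dumps(attributes, sort_keys=True).encode()
        return {"attributes": attributes, "signature": issuer_key.sign(payload)}

    def verify_credential(credential):
        # The verifier checks the signature offline against the issuer's
        # published key, without contacting the issuer.
        payload = json.dumps(credential["attributes"], sort_keys=True).encode()
        try:
            issuer_pub.verify(credential["signature"], payload)
            return True
        except InvalidSignature:
            return False

    # The holder keeps the credential in their wallet and presents it at will.
    credential = issue_credential({"name": "A. Holder", "over_18": True})
    assert verify_credential(credential)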

While sharing digital records online is nothing new, SSI changes the game in two fundamental ways:

  1. Organizations can capture the required data, without overcollection. Unlike the physical credentials we carry in our wallets, like driver’s licenses and insurance cards, a digital verifiable credential can be divided into individual attributes, which can be shared separately.

The classic example is walking into a bar and showing the bouncer your driver’s license to verify that you are of legal age. The card reveals the necessary data, but it also includes information that the bar has no business knowing — such as your name and address. With verifiable credentials, we can share proof of age without revealing anything else.

For sensitive cases, self-sovereign identity even allows us to cryptographically prove something about ourselves without revealing the actual data. In this case, we could provide a yes/no answer to whether we are of legal age, without revealing our date of birth. (A rough sketch of this selective disclosure pattern follows after this list.)

For individuals, data minimization represents a great stride forward in privacy. For organizations, it’s a way of avoiding the massive liability of storing and securing excess personally identifiable information.

  2. Correlation becomes much, much harder. While there are those who say privacy is a myth and our data will all be correlated anyway, self-sovereign identity protects us against many of the leading concerns with other digital identity solutions.

For example, if we look at other tools that give us some level of data portability, like single sign-on, there is always a concern that a single player in the middle can track what we do online. There’s a reason those Facebook ads are eerily relevant: They know every site and app we have signed into using our Facebook profile.

With SSI, there’s no one player or centralized registry in the middle. Verifiers (those requesting an identity verification) can verify the authenticity cryptographically, meaning they don’t have to “phone home” to the original credential issuer and the credential issuer has no way of knowing when, where or to whom a credential was shared. No correlatable signatures are shared, and your digital identity is truly under your control and for your eyes only.
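
To illustrate both points, here is a simplified Python sketch of selective disclosure built on salted hash commitments, in the spirit of SD-JWT-style schemes. It is a toy under stated assumptions: production SSI stacks generally use richer signature schemes (such as BBS+ or CL signatures), and the over_18 flag below is an issuer-asserted attribute standing in for a true zero-knowledge predicate proof. All names are illustrative:

    import hashlib
    import json
    import secrets

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def commit(name, value, salt):
        # Hash one attribute with a random salt so it can be revealed alone.
        return hashlib.sha256(json.dumps([name, value, salt]).encode()).hexdigest()

    # Issuer: sign the per-attribute commitments, never the raw values.
    # Sorting the digests hides which commitment belongs to which attribute.
    attributes = {"name": "A. Holder", "address": "12 Example St", "over_18": True}
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = sorted(commit(k, v, salts[k]) for k, v in attributes.items())
    signature = issuer_key.sign(json.dumps(digests).encode())

    # Holder: present only the over_18 attribute to the bouncer,
    # keeping name, address and everything else private.
    disclosed = {"over_18": (attributes["over_18"], salts["over_18"])}

    # Verifier: check the issuer's signature offline, then check that each
    # disclosed attribute hashes to one of the signed commitments.
    issuer_pub.verify(signature, json.dumps(digests).encode())  # raises if forged
    for name, (value, salt) in disclosed.items():
        assert commit(name, value, salt) in digests
    print("over-18 check passed; other attributes stay private")

Because the verifier checks the signature against the issuer's published public key, it never has to contact the issuer, and the issuer never learns when, where or to whom the credential was shown.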

As a result, the consumer benefits from better privacy and security, while businesses benefit from:

  • Reduced fraud, with better, more accurate data verification at the time of account creation.
  • Reduced friction, with a dramatically faster sign-up process.
  • Reduced costs, both from time savings and from smarter KYC compliance (which normally costs large banks $500 million+ each year).
  • Increased efficiency, with less back-and-forth verifying third-party data.
  • Better customer experiences, with the ability to create a personalized, omnichannel customer experience without data harvesting.

And it’s not science fiction, either. Several major governments, businesses and NGOs have already launched self-sovereign solutions. These include financial institutions like UNIFY, Desert Financial and TruWest, healthcare organizations like Providence Health and the NHS, and telecom and travel giants like LG and the International Air Transport Association.

It’s not clear how soon the technology will become ubiquitous, but it is clear that privacy is quickly emerging as the next competitive battleground. Newly passed regulations like CPRA codify the measures companies need to take, but it’s consumer expectations that will drive long-term shifts within the companies themselves.

For those ahead of the curve, there will be significant cost savings and growth — especially as customers start to shift their loyalty toward those businesses that respect and protect their privacy. For everyone else, it will be a major wake-up call as consumers demand to take back their data.

#column, #cryptography, #digital-identity, #digital-rights, #identity-management, #personal-data, #privacy