FTC says health apps must notify consumers about data breaches — or face fines

The U.S. Federal Trade Commission (FTC) has warned that apps and devices collecting personal health information must notify consumers if their data is breached or shared with third parties without their permission.

In a 3-2 vote on Wednesday, the FTC agreed on a new policy statement to clarify the decade-old Health Breach Notification Rule of 2009, which requires companies handling health records to notify consumers if their data is accessed without permission, such as through a breach. The rule has now been extended to apply to health apps and devices — specifically calling out apps that track fertility data, fitness, and blood glucose — which “too often fail to invest in adequate privacy and data security,” according to FTC chair Lina Khan.

“Digital apps are routinely caught playing fast and loose with user data, leaving users’ sensitive health information susceptible to hacks and breaches,” said Khan in a statement, pointing to a study published this year in the British Medical Journal that found health apps suffer from “serious problems” ranging from the insecure transmission of user data to the unauthorized sharing of data with advertisers.

There have also been a number of high-profile breaches involving health apps in recent years. Babylon Health, a U.K. AI chatbot and telehealth startup, suffered a data breach last year after a “software error” allowed users to access other patients’ video consultations, while period tracking app Flo was recently found to be sharing users’ health data with third-party analytics and marketing services.

Under the new rule, any company offering health apps or connected fitness devices that collect personal health data must notify consumers if their data has been compromised. However, the rule doesn’t define a “data breach” as just a cybersecurity intrusion; unauthorized access to personal data, including the sharing of information without an individual’s permission, can also trigger notification obligations.

“While this rule imposes some measure of accountability on tech firms that abuse our personal information, a more fundamental problem is the commodification of sensitive health information, where companies can use this data to feed behavioral ads or power user analytics,” Khan said.

If companies don’t comply with the rule, the FTC said it will “vigorously” enforce fines of $43,792 per violation per day.

The FTC has been cracking down on privacy violations in recent weeks. Earlier this month, the agency unanimously voted to ban spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry for harvesting mobile data on thousands of people and leaving it on the open internet.

#articles, #artificial-intelligence, #babylon-health, #chair, #data-breach, #digital-rights, #flo, #government, #identity-management, #lina-khan, #open-internet, #security, #security-breaches, #social-issues, #spyfone, #terms-of-service

Have ‘The Privacy Talk’ with your business partners

As a parent of teenagers, I’m used to having tough, sometimes even awkward, conversations about topics that are complex but important. Most parents will likely agree with me when I say those types of conversations never get easier, but over time, you tend to develop a roadmap of how to approach the subject, how to make sure you’re being clear, and how to answer hard questions.

And like many parents, I quickly learned that my children have just as much to teach me as I can teach them. I’ve learned that tough conversations build trust.

I’ve applied this lesson about trust-building conversations to an extremely important aspect of my role as the chief legal officer at Foursquare: Conducting “The Privacy Talk.”

The discussion should convey an understanding of how the legislative and regulatory environment is going to affect product offerings, including what’s being done to get ahead of that change.

What exactly is ‘The Privacy Talk’?

It’s the conversation that goes beyond the written, publicly-posted privacy policy, and dives deep into a customer, vendor, supplier or partner’s approach to ethics. This conversation seeks to convey and align the expectations that two companies must have at the beginning of a new engagement.

RFIs may ask a lot of questions about privacy compliance, information security, and data ethics. But they’re no match for asking your prospective partner to hop on a Zoom to walk you through their broader approach. Unless you hear it first-hand, it can be hard to discern whether a partner is thinking strategically about privacy, whether they are truly committed to data ethics, and how compliance is woven into their organization’s culture.

#column, #digital-advertising, #digital-rights, #ec-column, #ec-how-to, #foursquare, #identity-management, #lawyers, #privacy, #security, #startups, #terms-of-service, #verified-experts

Privacy-oriented search app Xayn raises $12M from Japanese backers to go into devices

Back in December 2020 we covered the launch of a new kind of smartphone app-based search engine, Xayn.

“A search engine?!” I hear you say? Well, yes, because despite the convenience of modern search engines’ ability to tailor their search results to the individual, this user-tracking comes at the expense of privacy. This mass surveillance might be what improves Google’s search engine and Facebook’s ad targeting, to name just two examples, but it’s not very good for our privacy.

Internet users are admittedly able to switch to the US-based DuckDuckGo, or perhaps France’s Qwant, but what they gain in privacy, they often lose in user experience and the relevance of search results, through this lack of tailoring.

What Berlin-based Xayn has come up with is a personalized but privacy-safe web search for smartphones, which replaces the cloud-based AI employed by Google et al. with the AI built into modern smartphones. The result is that no data about you is uploaded to Xayn’s servers.
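
As a rough illustration of the on-device approach (not Xayn’s actual implementation — the data types and scoring function here are assumptions), a client can fetch generic, unpersonalized results from the server and re-rank them locally against an interest model that never leaves the phone:

```kotlin
// Hypothetical sketch of on-device re-ranking: the server returns generic results,
// and a locally stored interest vector reorders them on the phone. Nothing about the
// user's preferences is sent back to the server.

data class SearchResult(val url: String, val title: String, val embedding: List<Float>)

class OnDeviceRanker(private val userInterest: List<Float>) {

    // Cosine-style similarity between a result's embedding and the local interest vector.
    private fun score(result: SearchResult): Float {
        var dot = 0f
        var normA = 0f
        var normB = 0f
        for (i in userInterest.indices) {
            dot += userInterest[i] * result.embedding[i]
            normA += userInterest[i] * userInterest[i]
            normB += result.embedding[i] * result.embedding[i]
        }
        return if (normA == 0f || normB == 0f) 0f
               else dot / (kotlin.math.sqrt(normA) * kotlin.math.sqrt(normB))
    }

    // Re-rank the generic results locally; the interest vector never leaves the device.
    fun rerank(genericResults: List<SearchResult>): List<SearchResult> =
        genericResults.sortedByDescending { score(it) }
}
```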

And this approach is not just for ‘privacy freaks’. Businesses that need search but don’t want to rely on Google’s dominant market position are increasingly attracted to this model.

And the evidence comes today with the news that Xayn has raised almost $12 million in Series A funding led by Japanese investors Global Brain and KDDI (a telecommunications operator), with participation from previous backers, including Berlin-based Earlybird VC. Xayn’s total financing to date comes to more than $23 million.

It would appear that Xayn’s fusion of a search engine, a discovery feed, and a mobile browser has appealed to these Asian market players, particularly because Xayn can be built into OEM devices.

The result of the investment is that Xayn will now also focus on the Asian market, starting with Japan, as well as Europe.

Leif-Nissen Lundbæk, co-founder and CEO of Xayn, said: “We proved with Xayn that you can have it all: great results through personalization, privacy by design through advanced technology, and a convenient user experience through clean design.”

He added: “In an industry in which selling data and delivering ads en masse are the norm, we choose to lead with privacy instead and put user satisfaction front and center.”

The funding comes as legislation such as the EU’s GDPR and California’s CCPA has raised public awareness about personal data online.

Since its launch, Xayn says its app has been downloaded around 215,000 times worldwide, and a web version of its app is expected soon.

Over a call, Lundbæk expanded on the KDDI aspect of the fund-raising: “The partnership with KDDI means we will give users access to Xayn for free, while the corporate – such as KDDI – is the actual customer but gives our search engine away for free.”

The core features of Xayn include personalized search results; a personalized feed of the entire Internet that learns from users’ Tinder-like swipes, without collecting or sharing personal data; and an ad-free experience.

Naoki Kamimeada, partner at Global Brain Corporation, said: “The market for private online search is growing, but Xayn is head and shoulders above everyone else because of the way they’re re-thinking how finding information online should be.”

Kazuhiko Chuman, Head of KDDI Open Innovation Fund, said: “This European discovery engine uniquely combines efficient AI with a privacy-protecting focus and a smooth user experience. At KDDI, we’re constantly on the lookout for companies that can shape the future with their expertise and technology. That’s why it was a perfect match for us.”

In addition to the three co-founders Leif-Nissen Lundbæk (Chief Executive Officer), Professor Michael Huth (Chief Research Officer), and Felix Hahmann (Chief Operations Officer), Dr Daniel von Heyl will come on board as Chief Financial Officer, Frank Pepermans will take on the role of Chief Technology Officer, and Michael Briggs will join as Chief Growth Officer.

#artificial-intelligence, #berlin, #california, #chief-executive-officer, #chief-financial-officer, #chief-technology-officer, #computing, #duckduckgo, #europe, #european-union, #facebook, #france, #global-brain-corporation, #google, #head, #japan, #kddi, #online-search, #partner, #privacy, #qwant, #search-engine, #search-engines, #search-results, #smartphone, #smartphones, #tc, #terms-of-service, #websites, #world-wide-web, #xayn

Opioid addiction treatment apps found sharing sensitive data with third parties

Several widely used opioid treatment recovery apps are accessing and sharing sensitive user data with third parties, a new investigation has found.

As a result of the COVID-19 pandemic and efforts to reduce transmission in the U.S., telehealth services and apps offering opioid addiction treatment have surged in popularity. This rise of app-based services comes as addiction treatment facilities face budget cuts and closures, which has seen both investor and government interest turn to telehealth as a tool to combat the growing addiction crisis.

While people accessing these services may have a reasonable expectation of privacy of their healthcare data, a new report from ExpressVPN’s Digital Security Lab, compiled in conjunction with the Opioid Policy Institute and the Defensive Lab Agency, found that some of these apps collect and share sensitive information with third parties, raising questions about their privacy and security practices.

The report studied 10 opioid treatment apps available on Android: Bicycle Health, Boulder Care, Confidant Health, DynamiCare Health, Kaden Health, Loosid, Pear Reset-O, PursueCare, Sober Grid, and Workit Health. These apps have been installed at least 180,000 times and have received more than $300 million in funding from investment groups and the federal government.

Despite the vast reach and sensitive nature of these services, the research found that the majority of the apps accessed unique identifiers about the user’s device and, in some cases, shared that data with third parties.

Of the 10 apps studied, seven access the Android Advertising ID (AAID), a user-resettable identifier that can be linked to other information to provide insights into identifiable individuals. Five of the apps also access the device’s phone number; three access the device’s unique IMEI and IMSI numbers, which can also be used to uniquely identify a person’s device; and two access a user’s list of installed apps, which the researchers say can be used to build a “fingerprint” of a user to track their activities.
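
For context on how little friction is involved, the sketch below shows roughly how an Android app — or an SDK bundled inside it — typically reads the Advertising ID via Google Play services. It is illustrative only and not taken from any of the apps in the report; it assumes the play-services-ads-identifier library is available and that the call runs off the main thread.

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// Illustrative only: roughly how an app or bundled SDK reads the Android Advertising
// ID (AAID). Requires the play-services-ads-identifier dependency and must be called
// off the main thread, because the call blocks on IPC to Google Play services.
fun readAdvertisingId(context: Context): String? {
    return try {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        // Respecting the user's "limit ad tracking" setting is left to the caller.
        if (info.isLimitAdTrackingEnabled) null else info.id
    } catch (e: Exception) {
        null // Play services unavailable or the lookup failed.
    }
}
```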

Many of the apps examined also obtain location information in some form, which, when correlated with these unique identifiers, strengthens the capability for surveilling an individual person, as well as their daily habits, behaviors, and who they interact with. One way the apps do this is through Bluetooth: seven of the apps request permission to make Bluetooth connections, which the researchers say is particularly worrying because it can be used to track users in real-world locations.

“Bluetooth can do what I call proximity tracking, so if you’re in the grocery store, it knows how long you’re in a certain aisle, or how close you are to someone else,” Sean O’Brien, principal researcher at ExpressVPN’s Digital Security Lab who led the investigation, told TechCrunch. “Bluetooth is an area that I’m pretty concerned about.”

Another major area of concern is the use of tracker SDKs in these apps, which O’Brien previously warned about in a recent investigation that revealed hundreds of Android apps were sending granular user location data to X-Mode, a data broker known to sell location data to U.S. military contractors, and now banned from both Apple’s and Google’s app stores. SDKs, or software development kits, are bundles of code included with apps to provide functionality, such as collecting location data. Often, SDKs are provided for free in exchange for sending back the data that the apps collect.

“Confidentiality continues to be one of the major concerns that people cite for not entering treatment… existing privacy laws are totally not up to speed.” Jacqueline Seitz, Legal Action Center

While the researchers were keen to point out that they do not categorize all use of trackers as malicious, particularly as many developers may not even be aware of their existence within their apps, they found a high prevalence of tracker SDKs in seven out of the 10 apps, revealing potential data-sharing activity. Some SDKs are designed specifically to collect and aggregate user data; others do so even when data collection is not their core functionality.

But the researchers explain that an app that provides navigation to a recovery center, for example, may also be tracking a user’s movements throughout the day and sending that data back to the app’s developers and third parties.

In the case of Kaden Health, Stripe — which is used for payment services within the app — can read the list of installed apps on a user’s phone, their location, phone number, and carrier name, as well as their AAID, IP address, IMEI, IMSI, and SIM serial number.

“An entity as large as Stripe having an app share that information directly is pretty alarming. It’s worrisome to me because I know that information could be very useful for law enforcement,” O’Brien tells TechCrunch. “I also worry that people having information about who has been in treatment will eventually make its way into decisions about health insurance and people getting jobs.”

The data-sharing practices of these apps are likely a consequence of these services being developed in an environment of unclear U.S. federal guidance regarding the handling and disclosure of patient information, the researchers say, though O’Brien tells TechCrunch that the actions could be in breach of 42 CFR Part 2, a law that outlines strong controls over disclosure of patient information related to treatment for addiction.

Jacqueline Seitz, a senior staff attorney for health privacy at Legal Action Center, however, said this 40-year-old law hasn’t yet been updated to recognize apps.

“Confidentiality continues to be one of the major concerns that people cite for not entering treatment,” Seitz told TechCrunch. “While 42 CFR Part 2 recognizes the very sensitive nature of substance use disorder treatment, it doesn’t mention apps at all. Existing privacy laws are totally not up to speed.

“It would be great to see some leadership from the tech community to establish some basic standards and recognize that they’re collecting super-sensitive information so that patients aren’t left in the middle of a health crisis trying to navigate privacy policies,” said Seitz.

Another likely reason for these practices is a lack of security and data privacy staff, according to Jonathan Stoltman, director at Opioid Policy Institute, which contributed to the research. “If you look at a hospital’s website, you’ll see a chief information officer, a chief privacy officer, or a chief security officer that’s in charge of physical security and data security,” he tells TechCrunch. “None of these startups have that.”

“There’s no way you’re thinking about privacy if you’re collecting the AAID, and almost all of these apps are doing that from the get-go,” Stoltman added.

Google is aware of ExpressVPN’s findings but has yet to comment. However, the report has been released as the tech giant prepares to start limiting developer access to the Android Advertising ID, mirroring Apple’s recent efforts to enable users to opt out of ad tracking.

While ExpressVPN is keen to make patients aware that these apps may violate expectations of privacy, it also stresses the central role that addiction treatment and recovery apps may play in the lives of those with opioid addiction. It recommends that if you or a family member have used one of these services and find the disclosure of this data problematic, you contact the Office for Civil Rights at the Department of Health and Human Services to file a formal complaint.

“The bottom line is this is a general problem with the app economy, and we’re watching telehealth become part of that, so we need to be very careful and cautious,” said O’Brien. “There needs to be disclosure, users need to be aware, and they need to demand better.”

Recovery from addiction is possible. For help, please call the free and confidential treatment referral hotline (1-800-662-HELP) or visit findtreatment.gov.

#android, #app-developers, #app-store, #apple, #apps, #artificial-intelligence, #bluetooth, #broker, #computing, #director, #federal-government, #google, #google-play, #governor, #health, #health-insurance, #healthcare-data, #imessage, #law-enforcement, #mobile-app, #operating-systems, #privacy, #read, #security, #software, #stripe, #terms-of-service, #united-states

Kill the standard privacy notice

Privacy is a word on everyone’s mind nowadays — even Big Tech is getting in on it. Most recently, Apple joined the user privacy movement with its App Tracking Transparency feature, a cornerstone of the iOS 14.5 software update. Earlier this year, Tim Cook even mentioned privacy in the same breath as the climate crisis and labeled it one of the top issues of the 21st century.

Apple’s solution is a strong move in the right direction and sends a powerful message, but is it enough? Ostensibly, it relies on users to get informed about how apps track them and, if they wish to, regulate or turn off the tracking. In the words of Soviet satirists Ilf and Petrov, “The cause of helping the drowning is in the drowning’s own hands.” It’s a system that, historically speaking, has not produced great results.

Today’s online consumer is drowning indeed — in the deluge of privacy policies, cookie pop-ups, and various web and app tracking permissions. New regulations just pile more privacy disclosures on, and businesses are mostly happy to oblige. They pass the information burden to the end user, whose only rational move is to accept blindly because reading through the heaps of information does not make sense rationally, economically or subjectively. To save that overburdened consumer, we have only one option: We have to kill the standard privacy notice.

A notice that goes unnoticed

Studies show that online consumers often struggle with standard-form notices. A majority of online users expect that if a company has published a document with the title “privacy notice” or “privacy policy” on its website, then it will not collect, analyze or share their personal information with third parties. At the same time, a similar majority of consumers have serious concerns about being tracked and targeted for intrusive advertising.

It’s a privacy double whammy. To get on the platform, users have to accept the privacy notice. By accepting it, they allow tracking and intrusive ads. If they actually read the privacy notice before accepting, that costs them valuable time and can be challenging and frustrating. If Facebook’s privacy policy is as hard to comprehend as German philosopher Immanuel Kant’s “Critique of Pure Reason,” we have a problem. In the end, the option to decline is merely a formality; not accepting the privacy policy means not getting access to the platform.

So, what use is the privacy notice in its current form? For companies, on the one hand, it legitimizes their data-processing practices. It’s usually a document created by lawyers, for lawyers, without a second thought for the interests of the real users. Safe in the knowledge that nobody reads such disclosures, some businesses not only deliberately fail to make the text understandable, they pack it with all kinds of silly or refreshingly honest content.

One company even claimed its users’ immortal souls and their right to eternal life. For consumers, on the other hand, the obligatory checkmark next to the privacy notice can be a nuisance — or it can lull them into a false sense of data security.

On the unlikely occasion that a privacy notice is so blatantly disagreeable that it pushes users away from one platform and toward an alternative, this is often not a real solution, either. Monetizing data has become the dominant business model online, and personal data ultimately flows toward the same Big Tech giants. Even if you’re not directly on their platforms, many of the platforms you are on work with Big Tech through plugins, buttons, cookies and the like. Resistance seems futile.

A regulatory framework from another time

If companies are deliberately producing opaque privacy notices that nobody reads, maybe lawmakers and regulators could intervene and help improve users’ data privacy? Historically, this has not been the case. In pre-digital times, lawmakers were responsible for a multitude of pre-contractual disclosure mandates that resulted in the heaps of paperwork that accompany leasing an apartment, buying a car, opening a bank account or taking out a mortgage.

When it comes to the digital realm, legislation has been reactive, not proactive, and it lags behind technological development considerably. It took the EU about two decades of Google and one decade of Facebook to come up with the General Data Protection Regulation, a comprehensive piece of legislation that still does not rein in rampant data collection practices. This is just a symptom of a larger problem: Today’s politicians and legislators do not understand the internet. How do you regulate something if you don’t know how it works?

Many lawmakers on both sides of the Atlantic often do not understand how tech companies operate and how they make their money with user data — or pretend not to understand for various reasons. Instead of tackling the issue themselves, legislators ask companies to inform the users directly, in whatever “clear and comprehensible” language they see fit. It’s part laissez-faire, part “I don’t care.”

Thanks to this attitude, we are fighting 21st-century challenges — such as online data privacy, profiling and digital identity theft — with the legal logic of Ancient Rome: consent. Not to knock Roman law, but Marcus Aurelius never had to read the iTunes Privacy Policy in full.

Online businesses and major platforms, therefore, gear their privacy notices and other relevant data disclosures toward obtaining consent, not toward educating and explaining. It keeps the data flowing and it makes for great PR when the opportunity for a token privacy gesture appears. Still, a growing number of users are waking up to the setup. It is time for a change.

A call to companies to do the right thing

We have seen that it’s difficult for users to understand all the “legalese,” and they have nowhere to go even if they did. We have also noted lawmakers’ inadequate knowledge and motivation to regulate tech properly. It is up to digital businesses themselves to act, now that growing numbers of online users are stating their discontent and frustration. If data privacy is one of our time’s greatest challenges, it requires concerted action. Just like countries around the world pledged to lower their carbon emissions, enterprises must also band together and commit to protecting their users’ privacy.

So, here’s a plea to tech companies large and small: Kill your standard privacy notices! Don’t write texts that almost no user understands to protect yourselves against potential legal claims so that you can continue collecting private user data. Instead, use privacy notices that are addressed to your users and that everybody can understand.

And don’t stop there — don’t only talk the talk but walk the walk: Develop products that do not rely on the collection and processing of personal data. Return to the internet’s open-source, protocol roots, and deliver value to your community, not to Big Tech and their advertisers. It is possible, it is profitable and it is rewarding.

#apple, #column, #data-protection, #data-security, #digital-rights, #european-union, #facebook, #general-data-protection-regulation, #google, #human-rights, #opinion, #privacy, #privacy-policy, #tc, #terms-of-service

A Senate proposal for a new US agency to protect Americans’ data is back

Democratic Senator Kirsten Gillibrand has revived a bill that would establish a new U.S. federal agency to shield Americans from the invasive practices of tech companies operating in their own backyard.

Last year, Gillibrand (D-NY) introduced the Data Protection Act, a legislative proposal that would create an independent agency designed to address modern concerns around privacy and tech that existing government regulators have proven ill-equipped to handle.

“The U.S. needs a new approach to privacy and data protection and it’s Congress’ duty to step forward and seek answers that will give Americans meaningful protection from private companies that value profits over people,” Sen. Gillibrand said.

The revamped bill, which retains its core promise of a new “Data Protection Agency,” is co-sponsored by Ohio Democrat Sherrod Brown and returns to the new Democratic Senate with a few modifications.

In the spirit of all of the tech antitrust regulation chatter going on right now, the 2021 version of the bill would also empower the Data Protection Agency to review any major tech merger involving a data aggregator or other deals that would see the user data of 50,000 people change hands.

Other additions to the bill would establish an office of civil rights to “advance data justice” and allow the agency to evaluate and penalize high-risk data practices, like the use of algorithms, biometric data and harvesting data from children and other vulnerable groups.

Gillibrand calls the notion of updating regulation to address modern tech concerns “critical” — and she’s not alone. Democrats and Republicans seldom find common ground in 2021, but a raft of new bipartisan antitrust bills show that Congress has at last grasped how important it is to rein in tech’s most powerful companies lest they lose the opportunity altogether.

The Data Protection Act lacks the bipartisan sponsorship enjoyed by the set of new House tech bills, but with interest in taking on big tech at an all-time high, it could attract more support. Of all of the bills targeting the tech industry in the works right now, this one isn’t likely to go anywhere without more bipartisan interest, but that doesn’t mean its ideas aren’t worth considering.

Like some other proposals wending their way through Congress, this bill recognizes that the FTC has failed to meaningfully punish big tech companies for their bad behavior. In Gillibrand’s vision, the Data Protection Agency could rise to modern regulatory challenges where the FTC has failed. In other proposals, the FTC would be bolstered with new enforcement powers or infused with cash that could help the agency’s bite match its bark.

It’s possible that modernizing the tools that federal agencies have at hand won’t be sufficient. Cutting back more than a decade of overgrowth from tech’s data giants won’t be easy, particularly because the stockpile of Americans’ data that made those companies so wealthy is already out in the wild.

A new agency dedicated to wresting control of that data from powerful tech companies could bridge the gap between Europe’s own robust data protections and the absence of federal regulation we’ve seen in the U.S. But until something does, Silicon Valley’s data hoarders will eagerly fill the power vacuum themselves.

#congress, #data-security, #europe, #federal-trade-commission, #policy, #regulation, #senate, #tc, #terms-of-service, #the-battle-over-big-tech, #united-states

Adtech ‘data breach’ GDPR complaint is headed to court in EU

New York-based IAB Tech Lab, a standards body for the digital advertising industry, is being taken to court in Germany by the Irish Council for Civil Liberties (ICCL) in a piece of privacy litigation targeted at the high-speed online ad auction process known as real-time bidding (RTB).

While that may sound pretty obscure, the case essentially loops in the entire ‘data industrial complex’ of adtech players, large and small, which make money by profiling Internet users and selling access to their attention — from giants like Google and Facebook to other household names (the ICCL’s PR also name-checks Amazon, AT&T, Twitter and Verizon, the latter being the parent company of TechCrunch — presumably because all participate in online ad auctions that can use RTB), as well as the smaller (typically non-household-name) adtech entities and data brokers that are also involved in handling people’s data to run high-velocity background auctions that target behavioral ads at web users.

The driving force behind the lawsuit is Dr Johnny Ryan, a former adtech insider turned whistleblower who’s now a senior fellow at the ICCL — and who has dubbed RTB the biggest data breach of all time.

He points to the IAB Tech Lab’s audience taxonomy documents which provide codes for what can be extremely sensitive information that’s being gathered about Internet users, based on their browsing activity, such as political affiliation, medical conditions, household income, or even whether they may be a parent to a special needs child.
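
To make concrete why campaigners characterize RTB as a broadcast of personal data, the sketch below models — in heavily simplified, illustrative form, not the actual IAB/OpenRTB schema — the kind of payload a bid request can carry, with a pseudonymous user ID, location and audience segment codes traveling together to every bidder in the auction. All field names and values here are hypothetical.

```kotlin
// Heavily simplified, illustrative model of what a real-time bidding request can carry.
// Field names and values are examples only, not the actual IAB/OpenRTB specification.

data class Geo(val lat: Double, val lon: Double, val city: String)

data class Device(val ip: String, val userAgent: String, val geo: Geo)

data class User(
    val id: String,              // pseudonymous ID that can be matched across sites
    val segments: List<String>   // audience taxonomy codes, e.g. interests or conditions
)

data class BidRequest(
    val auctionId: String,
    val pageUrl: String,
    val device: Device,
    val user: User
)

fun main() {
    // A single page load can fan a request like this out to hundreds of bidders.
    val request = BidRequest(
        auctionId = "b7f3a90c",
        pageUrl = "https://example-health-site.test/article",
        device = Device("203.0.113.7", "Mozilla/5.0 (Android)", Geo(53.55, 9.99, "Hamburg")),
        user = User(id = "u-18c2d4", segments = listOf("interest-politics", "condition-xyz"))
    )
    println(request)
}
```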

The lawsuit contends that other industry documents vis-a-vis the ad auction system confirm there are no technical measures to limit what companies can do with people’s data, nor who they might pass it on to.

The lack of security inherent to the RTB process also means other entities not directly involved in the adtech bidding chain could potentially intercept people’s information — when it should, on the contrary, be protected from unauthorized access, per EU law…

Ryan and others have been filing formal complaints about the RTB security issue for years, arguing the system breaches a core principle of Europe’s General Data Protection Regulation (GDPR) — which requires that personal data be “processed in a manner that ensures appropriate security… including protection against unauthorised or unlawful processing and against accidental loss” — a standard which, they contend, simply cannot be met given how RTB functions.

The problem is that Europe’s data protection agencies have failed to act. Which is why Ryan, via the ICCL, has decided to take the more direct route of filing a lawsuit.

“There aren’t many DPAs around the union that haven’t received evidence of what I think is the biggest data breach of all time but it started with the UK and Ireland — neither of which took, I think it’s fair to say, any action. They both said they were doing things but nothing has changed,” he tells TechCrunch, explaining why he’s decided to take the step of litigating.

“I want to take the most efficient route to protecting people’s rights around data,” he adds.

Per Ryan, the Irish Data Protection Commission (DPC) has still not sent a statement of issues relating to the RTB complaint he lodged with it back in 2018 — years later. In May 2019 the DPC did announce it was opening a formal investigation into Google’s adtech, following the RTB complaints, but the case remains open and unresolved. (We’ve contacted the DPC with questions about its progress on the investigation and will update with any response.)

Since the GDPR came into application in Europe in May 2018 there has been growth in privacy lawsuits — including class action-style suits — so litigation funders may be spying an opportunity to cash in on the growing enforcement gap left by resource-strapped and, well, risk-averse data protection regulators.

A similar complaint about RTB lodged with the UK’s Information Commissioner’s Office (ICO) also led to a lawsuit being filed last year — albeit in that case it was against the watchdog itself for failing to take any action. (The ICO’s last missive to the adtech industry told it to — uhhhh — expect audits.)

“The GDPR was supposed to create a situation where the average person does not need to wear a tin-foil hat, they do not need to be paranoid or take action to become well informed. Instead, supervisory authorities protect them. And these supervisory authorities — paid for by the tax payer — have very strong powers. They can gain admission to any documents and any premises. It’s not about fines I don’t think, just. They can tell the biggest most powerful companies in the world to stop doing what they’re doing with our data. That’s the ultimate power,” says Ryan. “So GDPR sets up these guardians — these potentially very empowered guardians — but they’ve not used those powers… That’s why we’re acting.”

“I do wish that I’d litigated years ago,” he adds. “There’s lots of reasons why I didn’t do that — I do wish, though, that this litigation was unnecessary because supervisory authorities protected me and you. But they didn’t. So now, as Irish politics like to say in the middle of a crisis, we are where we are. But this is — hopefully — several nails in the coffin [of RTB’s use of personal data].”

The lawsuit has been filed in Germany as Ryan says they’ve been able to establish that IAB Tech Lab — which is NY-based and has no official establishment in Europe — has representation (a consultancy it hired) that’s based in the country. Hence they believe there is a clear route to litigate the case at the Landgericht Hamburg.

While Ryan has been indefatigably sounding the alarm about RTB for years, he’s prepared to clock up more mileage going direct through the courts to see the matter through.

And to keep hammering home his message to the adtech industry that it must clean up its act and that recent attempts to maintain the privacy-hostile status quo — by trying to rebrand and repackage the same old data shuffle under shiny new claims of ‘privacy’ and ‘responsibility’ — simply won’t wash. So the message is really: Reform or die.

“This may very well end up at the ECJ [European Court of Justice]. And that would take a few years but long before this ends up at the ECJ I think it’ll be clear to the industry now that it’s time to reform,” he adds.

IAB Tech Lab has been contacted for comment on the ICCL’s lawsuit.

Ryan is by no means the only person sounding the alarm over adtech. Last year the European Parliament called for tighter controls on behavioral ads to be baked into reforms of the region’s digital rules — calling for regulation to favor less intrusive, contextual forms of advertising which do not rely on mass surveillance of Internet users.

Even Google has said it wants to deprecate support for tracking cookies in favor of a new stack of technology proposals that it dubs ‘Privacy Sandbox’ (although its proposed alternative — targeting groups of Internet users based on interests derived from tracking their browsing habits — has been criticized as potentially amplifying problems of predatory and exploitative ad targeting, so may not represent a truly clean break with the rights-hostile adtech status quo).

The IAB is also facing another major privacy law challenge in Europe — where complaints against a widely used framework it designed for websites to obtain Internet users’ consent to being tracked for ads online led to scrutiny by Belgium’s data protection agency.

Last year its investigatory division found that the IAB Europe’s Transparency and Consent Framework (TCF) fails to meet the required standards of data protection under the GDPR.

The case went in front of the litigation chamber last week. A verdict — and any enforcement action by the Belgian DPA over the IAB Europe’s TCF — remains pending.

#adtech, #advertising-tech, #amazon, #articles, #att, #computing, #data-protection, #europe, #european-court-of-justice, #european-union, #facebook, #general-data-protection-regulation, #germany, #hamburg, #information-commissioners-office, #ireland, #johnny-ryan, #new-york, #online-advertising, #privacy, #real-time-bidding, #techcrunch, #terms-of-service, #twitter, #united-kingdom, #verizon, #world-wide-web

Ring won’t say how many users had footage obtained by police

Ring gets a lot of criticism, not just for its massive surveillance network of home video doorbells and its problematic privacy and security practices, but also for giving that doorbell footage to law enforcement. While Ring is making moves towards transparency, the company refuses to disclose how many users had their data given to police.

The video doorbell maker, acquired by Amazon in 2018, has partnerships with at least 1,800 U.S. police departments (and growing) that can request camera footage from Ring doorbells. Prior to a change this week, any police department that Ring partnered with could privately request doorbell camera footage from Ring customers for an active investigation. Ring will now let its police partners publicly request video footage from users through its Neighbors app.

The change ostensibly gives Ring users more control when police can access their doorbell footage, but ignores privacy concerns that police can access users’ footage without a warrant.

Civil liberties advocates and lawmakers have long warned that police can obtain camera footage from Ring users through a legal back door because Ring’s sprawling network of doorbell cameras is owned by private users. Police can still serve Ring with a legal demand, such as a subpoena for basic user information, or a search warrant or court order for video content, assuming there is evidence of a crime.

Ring received over 1,800 legal demands during 2020, more than double the number from a year earlier, according to a transparency report that Ring published quietly in January. Ring does not disclose sales figures but says it has “millions” of customers. The report, however, leaves out context that most transparency reports include: how many users or accounts had footage given to police when Ring was served with a legal demand?

When reached, Ring declined to say how many users had footage obtained by police.

That number of users or accounts subject to searches is not inherently secret; whether it is published is simply a side effect of how companies decide — if at all — to disclose when the government demands user data. Though they are not obligated to, most tech companies publish transparency reports once or twice a year to show how often user data is obtained by the government.

Transparency reports were a way for companies subject to data requests to push back against damning allegations of intrusive bulk government surveillance by showing that only a fraction of a company’s users are subject to government demands.

But context is everything. Facebook, Apple, Microsoft, Google, and Twitter all reveal how many legal demands they receive, but also specify how many users or accounts had data given. In some cases, the number of users or accounts affected can be double or more than triple the number of demands they received.

Ring’s parent, Amazon, is a rare exception among the big tech giants in that it does not break out the specific number of users whose information was turned over to law enforcement.

“Ring is ostensibly a security camera company that makes devices you can put on your own homes, but it is increasingly also a tool of the state to conduct criminal investigations and surveillance,” Matthew Guariglia, policy analyst at the Electronic Frontier Foundation, told TechCrunch.

Guariglia added that Ring could release not only the number of users subject to legal demands, but also how many users have previously responded to police requests through the app.

Ring users can opt out of receiving requests from police, but this option would not stop law enforcement from obtaining a legal order from a judge for your data. Users can also switch on end-to-end encryption to prevent anyone other than the user, including Ring, from accessing their videos.

#amazon, #apple, #articles, #electronic-frontier-foundation, #encryption, #facebook, #google, #hardware, #judge, #law-enforcement, #microsoft, #neighbors, #operating-systems, #privacy, #ring, #security, #smart-doorbell, #software, #terms-of-service, #transparency-report

Facebook ordered not to apply controversial WhatsApp T&Cs in Germany

The Hamburg data protection agency has banned Facebook from processing the additional WhatsApp user data that the tech giant is granting itself access to under a mandatory update to WhatsApp’s terms of service.

The controversial WhatsApp privacy policy update has caused widespread confusion around the world since being announced — and has already been delayed by Facebook for several months after a major user backlash saw rival messaging apps benefiting from an influx of angry users.

The Indian government has also sought to block the changes to WhatsApp’s T&Cs in court — and the country’s antitrust authority is investigating.

Globally, WhatsApp users have until May 15 to accept the new terms (after which the requirement to accept the T&Cs update will become persistent, per a WhatsApp FAQ).

The majority of users who have had the terms pushed on them have already accepted them, according to Facebook, although it hasn’t disclosed what proportion of users that is.

But the intervention by Hamburg’s DPA could further delay Facebook’s rollout of the T&Cs — at least in Germany — as the agency has used an urgency procedure, allowed for under the European Union’s General Data Protection Regulation (GDPR), to order the tech giant not to share the data for three months.

A WhatsApp spokesperson disputed the legal validity of Hamburg’s order — calling it “a fundamental misunderstanding of the purpose and effect of WhatsApp’s update” and arguing that it “therefore has no legitimate basis”.

“Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. As the Hamburg DPA’s claims are wrong, the order will not impact the continued roll-out of the update. We remain fully committed to delivering secure and private communications for everyone,” the spokesperson added, suggesting that Facebook-owned WhatsApp may be intending to ignore the order.

We understand that Facebook is considering its options to appeal Hamburg’s procedure.

The emergency powers Hamburg is using can’t extend beyond three months but the agency is also applying pressure to the European Data Protection Board (EDPB) to step in and make what it calls “a binding decision” for the 27 Member State bloc.

We’ve reached out to the EDPB to ask what action, if any, it could take in response to the Hamburg DPA’s call.

The body is not usually involved in making binding GDPR decisions related to specific complaints — unless EU DPAs cannot agree over a draft GDPR decision brought to them for review by a lead supervisory authority under the one-stop-shop mechanism for handling cross-border cases.

In such a scenario the EDPB can cast a deciding vote — but it’s not clear that an urgency procedure would qualify.

In taking the emergency action, the German DPA is not only attacking Facebook for continuing to thumb its nose at EU data protection rules, but throwing shade at its lead data supervisor in the region, Ireland’s Data Protection Commission (DPC) — accusing the latter of failing to investigate the very widespread concerns attached to the incoming WhatsApp T&Cs.

(“Our request to the lead supervisory authority for an investigation into the actual practice of data sharing was not honoured so far,” is the polite framing of this shade in Hamburg’s press release).

We’ve reached out to the DPC for a response and will update this report if we get one.

Ireland’s data watchdog is no stranger to criticism that it indulges in creative regulatory inaction when it comes to enforcing the GDPR — with critics charging commissioner Helen Dixon and her team with failing to investigate scores of complaints and, in the instances when it has opened probes, taking years to investigate — and opting for weak enforcement at the last.

The only GDPR decision the DPC has issued to date against a tech giant (against Twitter, in relation to a data breach) was disputed by other EU DPAs — which wanted a far tougher penalty than the $550k fine eventually handed down by Ireland.

GDPR investigations into Facebook and WhatsApp remain on the DPC’s desk. A draft decision in one WhatsApp data-sharing transparency case was sent to other EU DPAs in January for review, but a resolution has yet to see the light of day almost three years after the regulation began being applied.

In short, frustrations about the lack of GDPR enforcement against the biggest tech giants are riding high among other EU DPAs — some of whom are now resorting to creative regulatory actions to try to sidestep the bottleneck created by the one-stop-shop (OSS) mechanism which funnels so many complaints through Ireland.

The Italian DPA also issued a warning over the WhatsApp T&Cs change, back in January — saying it had contacted the EDPB to raise concerns about a lack of clear information over what’s changing.

At that point the EDPB emphasized that its role is to promote cooperation between supervisory authorities. It added that it will continue to facilitate exchanges between DPAs “in order to ensure a consistent application of data protection law across the EU in accordance with its mandate”. But the always fragile consensus between EU DPAs is becoming increasingly fraught over enforcement bottlenecks and the perception that the regulation is failing to be upheld because of OSS forum shopping.

That will increase pressure on the EDPB to find some way to resolve the impasse and avoid a wider breakdown of the regulation — i.e. if more and more Member State agencies resort to unilateral ‘emergency’ action.

The Hamburg DPA writes that the update to WhatsApp’s terms grants the messaging platform “far-reaching powers to share data with Facebook” for the company’s own purposes (including for advertising and marketing) — such as by passing WhatsApp users’ location data to Facebook and allowing for the communication data of WhatsApp users to be transferred to third parties if businesses make use of Facebook’s hosting services.

Its assessment is that Facebook cannot rely on legitimate interests as a legal base for the expanded data sharing under EU law.

And if the tech giant is intending to rely on user consent it’s not meeting the bar either because the changes are not clearly explained nor are users offered a free choice to consent or not (which is the required standard under GDPR).

“The investigation of the new provisions has shown that they aim to further expand the close connection between the two companies in order for Facebook to be able to use the data of WhatsApp users for their own purposes at any time,” Hamburg goes on. “For the areas of product improvement and advertising, WhatsApp reserves the right to pass on data to Facebook companies without requiring any further consent from data subjects. In other areas, use for the company’s own purposes in accordance to the privacy policy can already be assumed at present.

“The privacy policy submitted by WhatsApp and the FAQ describe, for example, that WhatsApp users’ data, such as phone numbers and device identifiers, are already being exchanged between the companies for joint purposes such as network security and to prevent spam from being sent.”

DPAs like Hamburg may be feeling buoyed to take matters into their own hands on GDPR enforcement by a recent opinion by an advisor to the EU’s top court, as we suggested in our coverage at the time. Advocate General Bobek took the view that EU law allows agencies to bring their own proceedings in certain situations, including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case.”

The CJEU ruling on that case is still pending — but the court tends to align with the position of its advisors.

 

#data-protection, #data-protection-commission, #data-protection-law, #europe, #european-data-protection-board, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #germany, #hamburg, #helen-dixon, #ireland, #privacy, #privacy-policy, #social, #social-media, #terms-of-service, #whatsapp

State AGs tell Facebook to scrap Instagram for kids plans

In a new letter, attorneys general representing 44 U.S. states and territories are pressuring Facebook to walk away from new plans to open Instagram to children. The company is working on an age-gated version of Instagram for kids under the age of 13 that would lure in young users who are currently not permitted to use the app, which was designed for adults.

“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account,” the coalition of attorneys general wrote, warning that an Instagram for kids would be “harmful for myriad reasons.”

The state attorneys general call for Facebook to abandon its plans, citing concerns around developmental health, privacy and Facebook’s track record of prioritizing growth over the well-being of children on its platforms. In the letter, they delve into specific worries about cyberbullying, online grooming by sexual predators and algorithms that showed dieting ads to users with eating disorders.

Concerns about social media and mental health in kids and teens are something we’ve been hearing more about this year, as some Republicans join Democrats in coalescing around those issues, moving away from the claims of anti-conservative bias that defined politics in tech during the Trump years.

Leaders from both parties have been openly voicing fears over how social platforms are shaping young minds in recent months amidst calls to regulate Facebook and other social media companies. In April, a group of Congressional Democrats wrote Facebook with similar warnings over its new plans for children, pressing the company for details on how it plans to protect the privacy of young users.

In light of all the bad press and attention from lawmakers, it’s possible that the company may walk back its brazen plans to boost business by bringing more underage users into the fold. Facebook is already in the hot seat with state and federal regulators in just about every way imaginable. Deep worries over the company’s future failures to protect yet another vulnerable set of users could be enough to keep these plans on the company’s back burner.

#computing, #cyberbullying, #facebook, #instagram, #online-grooming, #social, #social-media, #software, #tc, #terms-of-service, #united-states

Identiq, a privacy-friendly fraud prevention startup, secures $47M at Series A

Israeli fraud prevention startup Identiq has raised $47 million at Series A as the company eyes international growth, driven in large part by the spike in online spending during the pandemic.

The round was led by Insight Partners and Entrée Capital, with participation from Amdocs, Sony Innovation Fund by IGV, as well as existing investors Vertex Ventures Israel, Oryzn Capital, and Slow Ventures.

Fraud prevention is big business, slated to be worth $145 billion by 2026 — ballooning eightfold in size compared to 2018. But it’s a data-hungry industry, fraught with security and privacy risks, that has had to rely on sharing enormous sets of consumer data in order to learn who legitimate customers are and weed out the fraudsters.

Identiq takes a different, more privacy-friendly approach to fraud prevention that doesn’t require sharing a customer’s data with a third party.

“Before now, the only way companies could solve this problem was by exposing the data they were given by the user to a third party data provider for validation, creating huge privacy problems,” Identiq’s chief executive Itay Levy told TechCrunch. “We solved this by allowing these companies to validate that the data they’ve been given matches the data of other companies that already know and trust the user, without sharing any sensitive information at all.”

When an Identiq customer — such as an online store — sees a new customer for the first time, the store can ask other stores in Identiq’s network if they know or trust that new customer. This peer-to-peer network uses cryptography to help online stores anonymously vet new customers to help weed out bad actors, like fraudsters and scammers, without needing to collect private user data.
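
The article doesn’t detail Identiq’s protocol, but the general idea of validating a data point without revealing it can be illustrated with a toy scheme: both parties hash the attribute with an agreed per-query salt and compare only the hashes, so a match confirms agreement without either side exposing the raw value. The sketch below is that simplified illustration only — real private set intersection protocols, and presumably Identiq’s own cryptography, are considerably more involved.

```kotlin
import java.security.MessageDigest

// Toy illustration of "validate without sharing": two companies check whether they hold
// the same email address for a customer by exchanging salted hashes instead of the raw
// value. This is not Identiq's protocol; real schemes are far more sophisticated.

fun saltedHash(value: String, salt: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
    val bytes = digest.digest((salt + value.lowercase().trim()).toByteArray())
    return bytes.joinToString("") { "%02x".format(it) }
}

fun main() {
    val perQuerySalt = "query-3f9c"            // hypothetical salt agreed per query, never reused

    // The store asking the question hashes the value the new customer gave it...
    val askingStore = saltedHash("jane@example.com", perQuerySalt)

    // ...and a peer that already trusts this customer hashes its own record.
    val trustedPeer = saltedHash("jane@example.com", perQuerySalt)

    // Comparing hashes confirms the match; neither side sent the raw email address.
    println(if (askingStore == trustedPeer) "Peer vouches for this value" else "No match")
}
```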

So far, the company says it already counts Fortune 500 companies as customers.

Identiq said it plans to use the $47 million raise to hire and grow the company’s workforce, and aims to scale up its support for its international customers.

#articles, #cryptography, #customer-data, #digital-rights, #entree-capital, #human-rights, #identity-management, #insight-partners, #marketing, #online-shopping, #online-stores, #peer-to-peer, #privacy, #security, #slow-ventures, #sony, #sony-innovation-fund, #startups, #terms-of-service, #vertex-ventures

Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping social media sites without permission, prompting Facebook, LinkedIn and Twitter to send cease and desist letters demanding it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

That’s an argument the company continues to make in court, where it faces a class action suit citing Illinois’ biometric protection law — the same law that last year dinged Facebook to the tune of $550 million.

The Canadian privacy watchdog rejected Clearview’s arguments, and said it would “pursue other actions” if the company does not follow its recommendations, which included stopping the collection of Canadians’ data and deleting all previously collected images. Clearview said in July that it stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service were using the startup’s technology.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.

#articles, #canada, #clearview-ai, #digital-rights, #facebook, #facial-recognition, #facial-recognition-software, #human-rights, #illinois, #law-enforcement, #mass-surveillance, #new-york, #privacy, #security, #social-issues, #spokesperson, #terms-of-service

Data privacy startup eXate raises £2.3M Seed led by Outward VC

Accessing and sharing data in a secure, compliant way is a complicated issue for many large businesses. eXate, a London-based data software firm, is attacking this problem and has now raised a £2.3 million seed round led by Outward VC, with additional backing from ING Ventures and Triple Point Ventures.

eXate competes with companies that tend to focus on more specific types of data privacy, such as Hazy, Privitar and Very Good Security. By contrast, eXate says it aggregates multiple types of privacy protection into one solution and provides central governance and control.

eXate was founded by Peter Lancos and Sonal Rattan, former digital business leaders at HSBC. Its clients include ING, its new investor.

Peter Lancos, CEO, eXate, said in a statement: “Organisations that store and process large volumes of data experience many challenges when it comes to data sharing. We see the biggest obstacles arise from the lack of joined-up thinking. The use of expensive multiple single-point solutions, coupled with monitoring complicated country-by-country policies adds time and budget to data initiatives.”

The funding will enable eXate to capitalize on DataSecOps demand by growing its team, accelerating platform development and expanding into new geographies and verticals.

Andi Kazeroonian, Investor at Outward VC, commented on the investment: “Ensuring the protection of sensitive data is a mission-critical challenge for companies that wish to utilise data to deliver value to its stakeholders. eXate’s unique platform provides companies with the tools it requires to ensure data privacy and protection by design.”

#articles, #ceo, #europe, #law, #london, #privacy, #social-issues, #tc, #terms-of-service

Facebook trails expanding portability tools ahead of FTC hearing

Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.

In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.

It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.

Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.

It suggests expanding portability in such ways could help content creators build their brands on other platforms, or help event organizers by enabling them to track Facebook events using calendar-based tools.

However, there are no firm commitments from Facebook to any specific portability product launches or expansions of what it currently offers.

For now the tech giant only lets Facebook users directly send copies of their photos to Google’s eponymous photo storage service — a transfer tool it switched on for all users this June.

“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.

On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”

Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.

But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.

The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.

The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.

In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.

Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — called the Access Act, which is sponsored by Democratic senators Richard Blumenthal and Mark Warner and Republican senator Josh Hawley — and which would require large tech platforms to let their users easily move their data to other services.

Madhani dubs it a good first step, though, adding that the company will continue to engage with the lawmakers on shaping its contents.

“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.

Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.

In recent months Mark Zuckerberg has also put in facetime to lobby EU lawmakers on the topic, as they work on updating regulations around digital services.

The Facebook founder pushed the European Commission to narrow the types of data that should fall under portability rules. In the public discussion with commissioner Thierry Breton, in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.

Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.

Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.

“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.

Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.

On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.

“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.

Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”

In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.

Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.

“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.

However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.

#apps, #congress, #data-portability, #data-protection, #digital-media, #digital-rights, #europe, #european-commission, #european-union, #facebook, #federal-trade-commission, #ftc, #gdpr, #general-data-protection-regulation, #google, #josh-hawley, #mark-warner, #mark-zuckerberg, #policy, #richard-blumenthal, #social, #terms-of-service, #thierry-breton, #united-states

Let’s close the gap and finally pass a federal data privacy law

My college economics professor, Dr. Charles Britton, often said, “There’s no such thing as a free lunch.” The common principle known as TINSTAFL implies that even if something appears to be free, there is always a cost to someone, even if it is not the individual receiving the benefit.

For decades, the ad-supported ecosystem enjoyed much more than a proverbial free lunch. Brands, technology providers, publishers and platforms successfully transformed data provided by individuals into massive revenue gains, creating some of the world’s most profitable corporations. So if TINSTAFL is correct, what is the true cost of monetizing this data? Consumer trust, as it turns out.

Studies overwhelmingly demonstrate that the majority of people believe data collection and data use lack the necessary transparency and control. After a few highly publicized data breaches brought a spotlight on the lack of appropriate governance and regulation, people began to voice concerns that companies had operated with too little oversight for far too long, and unfairly benefited from the data individuals provided.

With increased attention, momentum and legislative activity in multiple individual states, we have never been in a better position to pass a federal data privacy law that can rebalance the system and set standards that rebuild trust with the people providing the data.

Over the last two decades, we’ve seen that individuals benefit from regulated use of data. The competitiveness of the banking markets is partly a result of laws around the collection and use of data for credit decisions. In exchange for data collection and use, individuals now have the ability to go online and get a home loan or buy a car with instant credit. A federal law would strengthen the value exchange and provide rules for companies around the collection and utilization of data, as well as establish consistency and uniformity, which can create a truly national market.

In order to close the gap and pass a law that properly balances the interests of people, society and commerce, the business sector must first unify on the need and the current political reality. Most already agree that a federal law should be preemptive of state laws, and many voices with legitimate differences of opinion have come a long way toward a consensus. Further unification on the following three assertions could help achieve bipartisan support:

A federal law must recognize that one size does not fit all. While some common sense privacy accountability requirements should be universal, a blanket approach for accountability practices is unrealistic. Larger enterprises with significant amounts of data on hand should have stricter requirements than other entities and be required to appoint a Data Ethics Officer and document privacy compliance processes and privacy reviews.

They should be required to regularly perform internal and external audits of data collection and use. These audits should be officer-certified and filed with a regulator. While larger companies are equipped to absorb this burden, smaller businesses should not be forced, by the imposition of the same standards, to forgo using the data they need to innovate and thrive. Instead, requirements for accountability should be “right-sized,” based on the amount and type of data collected and its intended use.

A federal law must properly empower the designated regulatory authority. The stated mission of the Federal Trade Commission is to protect American consumers. As the government agency of record for data privacy regulation and enforcement, the FTC has already imposed billions of dollars in penalties for privacy violations. However, in a modern world where every company collects and uses data, the FTC cannot credibly monitor or enforce federal regulation without substantially increasing funding and staffing.

With increased authority, equipped with skilled teams to diligently monitor those companies with the most consumer data, the FTC — with state attorneys general designated as back-ups — can hold them accountable by imposing meaningful remedial actions and fines.

A federal law must acknowledge that a properly crafted private right of action is appropriate and necessary. The earlier points build an effective foundation for the protection of people’s privacy rights, but there will still be situations where a person should have access to the judicial system to seek redress. Certainly, if a business does not honor the data rights of an individual as defined by federal law, people should have the right to bring an action for equitable relief. If a person has suffered actual physical or significant economic harm directly caused by a violation of a federal data privacy law, they should be able to bring suit if, after giving notice, the FTC declines to pursue.

Too many leaders have been unwilling to venture toward possible common ground, but public opinion dictates that more must be done; otherwise, states, counties, parishes and cities will inevitably continue to act if Congress does not. It is just as certain that those data privacy laws will be inconsistent, creating a patchwork of rules based on geography and leading to unnecessary friction and complexity. Consider how much time is spent sorting through the 50 discrete data breach laws that exist today, an expense that could easily be mitigated with a single national standard.

It is clear that responsible availability of data is critical to fostering innovation. American technology has led the world into this new data-driven era, and it’s time for our laws to catch up.

To drive economic growth and benefit all Americans, we need to properly balance the interests of people, society at-large and business, and pass a data law that levels the playing field and allows American enterprise to continue thinking with data. It should ensure that transparency and accountability are fostered and enforced and help rebuild trust in the system.

Coming together to support the passage of a comprehensive and preemptive federal data privacy law is increasingly important. If not, we are conceding that we’re okay with Americans remaining distrustful of the industry, and that the rest of the world should set the standards for us.

#column, #digital-rights, #federal-trade-commission, #government, #human-rights, #opinion, #policy, #privacy, #social-issues, #terms-of-service

Data brokers track everywhere you go, but their days may be numbered

Everywhere you go, you are being followed. Not by some creep in a raincoat, but by the advertisers wanting to sell you things.

The more advertisers know about you — where you go, which shops you visit, and what purchases you make — the more they can profile you, understand your tastes, your hobbies and interests, and use that information to target you with ads. You can thank the phone in your pocket — the apps on it, to be more accurate — that invisibly spits out gobs of data about you as you go about your day.

Your location, chief among the data, is by far the most revealing.

Apps, just like websites, are filled with trackers that send your real-time location to data brokers. In return, these data brokers sell on that data to advertisers, while the app maker gets a cut of the money. If you let your weather app know your location to serve you the forecast, you’re also giving your location to data brokers.
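To make that concrete, here is a purely hypothetical illustration of the kind of record a tracker bundled into an app might send upstream. The endpoint, field names and values are all invented for this sketch, not taken from any particular SDK or data broker:

```python
# Hypothetical illustration of a location "ping" a tracking SDK might send to a
# data broker; the endpoint and every field name here are invented for the sketch.
import json
import time
import urllib.request

ping = {
    "ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable advertising ID
    "lat": 40.7484, "lon": -73.9857,                   # real-time coordinates
    "accuracy_m": 12,
    "timestamp": int(time.time()),
    "app": "com.example.weather",                      # the app bundling the tracker
}

req = urllib.request.Request(
    "https://collector.example-broker.com/v1/locations",  # fictional broker endpoint
    data=json.dumps(ping).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # a real SDK would batch and send pings like this
```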

Don’t be too surprised. It’s all explained in the privacy policy that you didn’t read.

By collecting your location data, these data brokers have access to intensely personal aspects of your life and can easily build a map of everywhere you go. This data isn’t just for advertising. Immigration authorities have bought access to users’ location data to help catch the undocumented. In one case, a marketing firm used location data harvested from phones to predict the race, age, and gender of Black Lives Matter protesters. It’s an enormous industry, said to be worth at least $200 billion.

It’s only in recent years that it has become possible to learn what these data brokers know about us. But the law is slowly catching up. Anyone in Europe can request a copy of their data, or its deletion, under GDPR rules. California’s new consumer privacy law grants California residents access to their data.

But because so many data brokers collect and resell that data, the data marketplace is a fragmented mess, making it impossible to know which companies have your data. That can make requesting it a nightmare.

Jordan Wright, a senior security architect at Duo Security, requested his data from some of the biggest data brokers in the industry, citing California’s new consumer privacy law. Not all went to plan. Because he is an out-of-state resident, only one of the 14 data brokers approved his request and sent him his data.

What came back was a year’s worth of location data.

Wright works in cybersecurity and knows better than most how much data spills out of his phone. But he takes precautions, and is careful about the apps he puts on his phone. Yet the data he got back knew where he lives, where he works, and where he took his family on holiday before the pandemic hit.

“It’s frustrating not fully knowing what data has been collected or shared and by whom,” he wrote in a blog post. “The reality is that dozens of companies are monitoring the location of hundreds of millions of unsuspecting people every single day.”

Avoiding this invasive tracking is nearly impossible. Just like with web ad tracking, you have little choice but to accept the app’s terms. Allow the tracking, or don’t use the app.

But the winds are changing and there is an increasing appetite to rein in the data brokers and advertising giants by kneecapping their data collection efforts. As privacy became a more prominent selling point for phone consumers, the two largest smartphone makers, Apple and Google, in recent years began to curb the growing power of data brokers.

Both iPhones and Android devices now let you opt-out of ad tracking, a move that doesn’t reduce the ads that appear but prevents advertisers from tracking you across the web or between apps.

Apple threw down the gauntlet last month when it said its next software update, iOS 14, would let users opt out of app tracking altogether, dealing a severe blow to data brokers and advertisers by reducing the amount of data that these ad giants collect on millions without their explicit and direct consent. That prompted an angry letter from the Interactive Advertising Bureau, an industry trade group that represents online advertisers, which expressed its “strong concerns” and effectively asked Apple to back down from the plans.

Google also plans to roll out new app controls for location data in its next Android release.

It’s not the only effort taking on data brokers, but it’s been the most effective — so far. Lawmakers are scrambling to find bipartisan support for a proposed federal data protection agency before the end of the year, when Congress resets and a new legislative session begins.

Shy of an unlikely fix by Washington, it’s up to the tech giants to keep pushing back.

#articles, #california, #congress, #europe, #general-data-protection-regulation, #information, #interactive-advertising-bureau, #marketing, #online-advertising, #privacy, #security, #smartphone, #terms-of-service, #washington

Social media platforms must protect democracy, even from the president

It began with a simple blue label: “Get the facts about mail-in ballots.”

Last month, President Donald Trump tweeted allegations — shown time and again to be unfounded — that voting by mail leads to fraud. When Twitter, in accordance with its policies on civic integrity and coronavirus misinformation, fact-checked and labeled the false claims, Trump threatened to shut social media companies down.

Twitter subsequently hid one of the president’s tweets about ongoing protests against police brutality behind an interstitial warning on the grounds that it was glorifying violence. Trump then issued a muddled and largely unenforceable executive order to muzzle social media companies. By Monday, Facebook had been drawn into the fray, with many employees staging a virtual walkout to protest the company’s inaction on Trump’s posts.

Trump’s social media posts are but the latest installment in a long, ugly history of voter suppression and violence against protestors, much of it targeting Black communities in the United States. Put together, the events of the past week bring into stark relief how social media has become a front in such attacks on democracy — and show how much more must be done to address digital disinformation.

A lot has been made of Twitter’s decision to hide one of the president’s tweets on the grounds that it glorifies violence. The tweet, which read, in part, “when the looting starts, the shooting starts,” referenced a phrase coined by a Miami police chief known for his aggressive, racist policing policies in Black neighborhoods in the 1960s. Yet when Trump also tweeted that protestors were “professionally managed” and “ANTIFA led anarchists” — spreading rumors that looting and rioting was being organized by antifa activists — neither post was labeled, hidden or removed. Facebook, meanwhile, chose not to take action on any of the posts, which were also placed on its network.

Similarly, Twitter’s labeling of Trump’s “ballot fraud” disinformation is also a very new development. Last Tuesday’s tweets marked the first time Twitter has fact-checked Trump — but it was far from the first time the president had peddled such claims. Just a week before, he tweeted false information that the secretaries of state of Michigan and Nevada were engaging in illegal fraud when they tried to expand access to mail-in ballots, threatening to cut funding to those states. He also posted on Facebook that voting by mail would lead to “massive fraud and abuse” as well as “the end of our great Republican party,” despite there being no link between voting by mail and fraud, nor any evidence that mail-in ballots benefit either political party. At the time, neither Twitter nor Facebook took action.

Trump’s attempts to use digital disinformation to discredit voting by mail in the midst of a pandemic are especially concerning given his campaign’s history with voter suppression. In the lead-up to the 2016 election, a senior Trump campaign official was quoted as saying the organization had “three major voter suppression operations under way.” As part of this, the campaign used “dark posts” on Facebook — posts only visible to certain users — to target Black voters in particular, encouraging them to stay home on Election Day (a tactic eerily echoed by Russian interference efforts on social media). Going into the 2020 election, the Trump campaign and the Republican party are planning a massive campaign to limit voting by mail; spreading disinformation about voter fraud in order to decrease trust in political processes is part and parcel of this strategy.

Twitter and Facebook’s policies on violence and civic participation go some way toward addressing these issues, on social media at least. Platforms ban the glorification and incitement of violence, and both platforms ban communications that contain incorrect information about when, where and how to vote, as well as paid advertisements that discourage voting. However, these policies have typically been unevenly applied. While neither company had previously moderated posts by the president, Facebook in particular has drawn ire for explicitly exempting content by politicians from fact-checking. Its complete inaction on Trump’s latest dangerous posts shows the instability of such policies, which led to the Monday walkout by Facebook employees and condemnation from civil rights leaders.

Twitter and Facebook enacted their policies on civic engagement and violence in response to overwhelming public outcry over the effects of digital disinformation. No one, not even the President of the United States, should be exempt from them. Twitter took a small step toward acknowledging this by fact-checking and hiding the president’s harmful tweets. In the future, however, both Twitter and Facebook need to consistently administer their policies, even — and perhaps, especially — when they apply to figures in power.

#column, #content-moderation, #digital-media, #donald-trump, #facebook, #opinion, #president, #social, #tc, #terms-of-service, #twitter

UK’s NHS COVID-19 app lacks robust legal safeguards against data misuse, warns committee

A UK parliamentary committee that focuses on human rights issues has called for primary legislation to be put in place to ensure that legal protections wrap around the national coronavirus contact tracing app.

The app, called NHS COVID-19, is being fast-tracked for public use — with a test ongoing this week on the Isle of Wight. It’s set to use Bluetooth Low Energy signals to log social interactions between users, in an attempt to automate some contacts tracing based on an algorithmic assessment of users’ infection risk.
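The report doesn’t spell out the scoring algorithm, but a generic sketch of how a BLE contact tracing app might turn logged interactions into an infection-risk decision could look like the following. Every threshold and weight here is an invented placeholder, not the NHS COVID-19 app’s actual logic:

```python
# Generic sketch of BLE contact risk scoring; thresholds and weights are invented
# placeholders, not the NHS COVID-19 app's algorithm.
from dataclasses import dataclass

@dataclass
class Contact:
    duration_min: float      # how long the two devices were near each other
    avg_distance_m: float    # estimated from Bluetooth signal strength

def contact_risk(c: Contact) -> float:
    """Weight long, close contacts more heavily than brief, distant ones."""
    if c.avg_distance_m > 4 or c.duration_min < 5:
        return 0.0
    proximity = max(0.0, 1.0 - c.avg_distance_m / 4)
    return proximity * min(c.duration_min / 15, 1.0)

def should_notify(contacts: list, threshold: float = 0.8) -> bool:
    """Notify the user if their accumulated exposure crosses a threshold."""
    return sum(contact_risk(c) for c in contacts) >= threshold

print(should_notify([Contact(30, 0.5), Contact(4, 0.5)]))  # -> True
```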

The NHSX has said the app could be ready for launch within a matter of weeks but the committee says key choices related to the system architecture create huge risks for people’s rights that demand the safeguard of primary legislation.

“Assurances from Ministers about privacy are not enough. The Government has given assurances about protection of privacy so they should have no objection to those assurances being enshrined in law,” said committee chair, Harriet Harman MP, in a statement.

“The contact tracing app involves unprecedented data gathering. There must be robust legal protection for individuals about what that data will be used for, who will have access to it and how it will be safeguarded from hacking.

“Parliament was able quickly to agree to give the Government sweeping powers. It is perfectly possible for parliament to do the same for legislation to protect privacy.”

The NHSX, a digital arm of the country’s National Health Service, is in the process of testing the app — which it’s said could be launched nationally within a few weeks.

The government has opted for a system design that will centralize large amounts of social graph data when users experiencing COVID-19 symptoms (or who have had a formal diagnosis) choose to upload their proximity logs.

Earlier this week we reported on one of the committee hearings — when it took testimony from NHSX CEO Matthew Gould and the UK’s information commissioner, Elizabeth Denham, among other witnesses.

Warning now over a lack of parliamentary scrutiny — around what it describes as an unprecedented expansion of state surveillance — the committee report calls for primary legislation to ensure “necessary legal clarity and certainty as to how data gathered could be used, stored and disposed of”.

The committee also wants to see an independent body set up to carry out oversight monitoring and guard against ‘mission creep’ — a concern that’s also been raised by a number of UK privacy and security experts in an open letter late last month.

“A Digital Contact Tracing Human Rights Commissioner should be responsible for oversight and they should be able to deal with complaints from the Public and report to Parliament,” the committee suggests.

Prior to publishing its report, the committee wrote to health minister Matt Hancock, raising a full spectrum of concerns — receiving a letter in response.

In this letter, dated May 4, Hancock told it: “We do not consider that legislation is necessary in order to build and deliver the contact tracing app. It is consistent with the powers of, and duties imposed on, the Secretary of State at a time of national crisis in the interests of protecting public health.”

The committee’s view is that Hancock’s ‘letter of assurance’ is not enough given the huge risks attached to the state tracking citizens’ social graph data.

“The current data protection framework is contained in a number of different documents and it is nearly impossible for the public to understand what it means for their data which may be collected by the digital contact tracing system. Government’s assurances around data protection and privacy standards will not carry any weight unless the Government is prepared to enshrine these assurances in legislation,” it writes in the report, calling for a bill that it says must include a number of “provisions and protections”.

Among the protections the committee is calling for are limits on who has access to data and for what purpose.

“Data held centrally may not be accessed or processed without specific statutory authorisation, for the purpose of combatting Covid-19 and provided adequate security protections are in place for any systems on which this data may be processed,” it urges.

It also wants legal protections against data reconstruction — by different pieces of data being combined “to reconstruct information about an individual”.

The report takes a very strong line — warning that no app should be released without “strong protections and guarantees” on “efficacy and proportionality”.

“Without clear efficacy and benefits of the app, the level of data being collected will not be justifiable and it will therefore fall foul of data protection law and human rights protections,” says the committee.

The report also calls for regular reviews of the app — looking at efficacy; data safety; and “how privacy is being protected in the use of any such data”.

It also makes a blanket call for transparency, with the committee writing that the government and health authorities “must at all times be transparent about how the app, and data collected through it, is being used”.

A lack of transparency around the project was another of the concerns raised by the 177 academics who signed the open letter last month.

The government has committed to publishing data protection impact assessments for the app. But the ICO’s Denham still hadn’t had sight of this document as of this Monday.

Another call by the committee is for a time-limit to be attached to any data gathered by or generated via the app. “Any digital contact tracing (and data associated with it) must be permanently deleted when no longer required and in any event may not be kept beyond the duration of the public health emergency,” it writes.

We’ve reached out to the Department of Health and NHSX for comment on the human rights committee’s report.

There’s another element to this fast-moving story: yesterday the Financial Times reported that the NHSX has inked a new contract with an IT supplier, which suggests it might be looking to change the app architecture by moving away from a centralized database to a decentralized system for contacts tracing, although NHSX has not confirmed any such switch at this point.

Some other countries have reversed course in their choice of app architecture after running into technical challenges related to Bluetooth. The need to ensure public trust in the system was also cited by Germany for switching to a decentralized model.

The human rights committee report highlights a specific app efficacy issue of relevance to the UK, which it points out is also linked to these system architecture choices, noting that: “The Republic of Ireland has elected to use a decentralised app and if a centralised app is in use in Northern Ireland, there are risks that the two systems will not be interoperable which would be most unfortunate.”

#apps, #bluetooth, #data-protection-law, #digital-rights, #elizabeth-denham, #europe, #germany, #health, #human-rights, #identity-management, #ireland, #law, #matt-hancock, #mobile, #national-health-service, #nhs, #nhs-covid-19, #nhsx, #northern-ireland, #privacy, #privacy-policy, #terms-of-service, #united-kingdom

Hundreds of academics back privacy-friendly coronavirus contact tracing apps

Hundreds of academics across the world have welcomed efforts to introduce privacy-friendly contact tracing systems to help understand the spread of coronavirus.

A letter, signed by nearly 300 academics and published Monday, praised recent announcements from Apple and Google to build an opt-in and decentralized way of allowing individuals to know if they have come into contact with someone confirmed to be infected with COVID-19.

The academics said that contact tracing apps that use automated Bluetooth tracing are far more privacy preserving than apps that collect location data in a central store.

“Contact tracing is a well-understood tool to tackle epidemics, and has traditionally been done manually. In some situations, so-called ‘contact tracing apps’ on peoples’ smartphones may improve the effectiveness of the manual contact tracing technique,” the letter reads. “Though the effectiveness of contact tracing apps is controversial, we need to ensure that those implemented preserve the privacy of their users, thus safeguarding against many other issues, noting that such apps can otherwise be repurposed to enable unwarranted discrimination and surveillance.”

The academic endorsement couldn’t come at a more critical time. There are competing methods to trace individuals’ contact with coronavirus. Decentralized systems are far more privacy-conscious because no single entity stores the tracing data. But the academics say that centralized stores of data which “allow reconstructing invasive information about the population should be rejected without further discussion,” and instead urged all countries to “rely on systems that are subject to public scrutiny and that are privacy preserving by design.”

“It is vital that, in coming out of the current crisis, we do not create a tool that enables large scale data collection on the population, either now or at a later time,” the letter reads.

The letter lands just days after some of the same academics pulled their support for a similar contact tracing project, known as PEPP-PT, which is said to have seven unnamed governments signed up so far. Two of those, Spain and Switzerland, have called for a decentralized contact tracing solution. But after PEPP-PT published details of its centralized proprietary protocol, several academics associated with the project disavowed it, saying it was neither open nor transparent enough, and lent their support instead to decentralized systems, such as the privacy-friendly DP-3T protocol, or systems like Apple and Google’s cross-platform solution.

Alan Woodward, a professor at the University of Surrey who also signed onto the letter, told TechCrunch that the letter serves as what the academic community thinks is the “correct approach” to contact tracing.

“I’ve never seen anything like it in this field,” Woodward said. “It shows that it’s not just the few but many who share the concern. I really hope governments listen before they do something that will be very difficult to undo.”

#bluetooth, #contact-tracing, #health, #learning, #prevention, #security, #smartphones, #surveillance, #terms-of-service

EU lawmakers set out guidance for coronavirus contacts tracing apps

The European Commission has published detailed guidance for Member States on developing coronavirus contacts tracing and warning apps.

The toolbox, which has been developed by the e-Health Network with the support of the Commission, is intended as a practical guide to implementing digital tools that track close contacts between device carriers as a proxy for infection risk. It seeks to steer Member States in a common, privacy-sensitive direction as they configure their digital responses to the COVID-19 pandemic.

Commenting in a statement, Thierry Breton — the EU commissioner for Internal Market — said: “Contact tracing apps to limit the spread of coronavirus can be useful, especially as part of Member States’ exit strategies. However, strong privacy safeguards are a pre-requisite for the uptake of these apps, and therefore their usefulness. While we should be innovative and make the best use of technology in fighting the pandemic, we will not compromise on our values and privacy requirements.”

“Digital tools will be crucial to protect our citizens as we gradually lift confinement measures,” added Stella Kyriakides, commissioner for health and food safety, in another supporting statement. “Mobile apps can warn us of infection risks and support health authorities with contact tracing, which is essential to break transmission chains. We need to be diligent, creative, and flexible in our approaches to opening up our societies again. We need to continue to flatten the curve – and keep it down. Without safe and compliant digital technologies, our approach will not be efficient.”

The Commission’s top-line “essential requirements” for national contacts tracing apps are that they’re:

  • voluntary;
  • approved by the national health authority;
  • privacy-preserving (“personal data is securely encrypted”); and
  • dismantled as soon as no longer needed

In the document the Commission writes that the requirements on how to record contacts and notify individuals are “anchored in accepted epidemiological guidance, and reflect best practice on cybersecurity, and accessibility”.

“They cover how to prevent the appearance of potentially harmful unapproved apps, success criteria and collectively monitoring the effectiveness of the apps, and the outline of a communications strategy to engage with stakeholders and the people affected by these initiatives,” it adds.

Yesterday, setting out a wider roadmap to encourage a co-ordinated lifting of the coronavirus lockdown, the Commission suggested digital tools for contacts tracing will play a key role in easing quarantine measures.

Today’s toolbox clearly emphasizes the need to use manual contact tracing in parallel with digital contact tracing, though, with such apps and tools envisaged as a support for health authorities — if widely rolled out — by enabling limited resources to be focused more on manual contacts tracing.

“Manual contact tracing will continue to play an important role, in particular for those, such as elderly or disabled persons, who could be more vulnerable to infection but less likely to have a mobile phone or have access to these applications,” the Commission writes. “Rolling-out mobile applications on a large-scale will significantly contribute to contact tracing efforts also allowing health authorities to carry manual tracing in a more focussed manner.”

“Mobile apps will not reach all citizens given that they rely on the possession and active use of a smart phone. Evidence from Singapore and a study by Oxford University indicate that 60-75% of a population need to have the app for it to be efficient,” it adds in a section on accessibility and inclusiveness. “However, non-users will benefit from any increased population disease control the widespread use of such an app may bring.”

The toolbox also reiterates a clear message from the Commission in recent days that “appropriate safeguards” must be embedded into digital contacts tracing systems. Though it’s less clear whether all Member States are listening to memos about respecting EU rights and freedoms, as they scrambled for tech and data to beat back COVID-19.

“This digital technology, if deployed correctly, could contribute substantively to containing and reversing its spread. Deployed without appropriate safeguards, however, it could have a significant negative effect on privacy and individual rights and freedoms,” the Commission writes, further warning that: “A fragmented and uncoordinated approach to contact tracing apps risks hampering the effectiveness of measures aimed at combating the COVID-19 crisis, whilst also causing adverse effects to the single market and to fundamental rights and freedoms.”

On safeguards the Commission has a clear warning for EU Member States, writing: “Any contact tracing and warning app officially recognised by Member States’ relevant authorities should present all guarantees for respect of fundamental rights, and in particular privacy and data protection, the prevention of surveillance and stigmatization.”

Its list of key safeguards notably includes avoiding the collection of any location data.

“Location data is not necessary nor recommended for the purpose of contact tracing apps, as their goal is not to follow the movements of individuals or to enforce prescriptions,” it says. “Collecting an individual’s movements in the context of contact tracing apps would violate the principle of data minimisation and would create major security and privacy issues.”

The toolbox also emphasizes that such contacts tracing/warning systems be temporary and voluntary in nature — with “automated/gentle self-dismantling, including deletion of all remaining personal data and proximity information, as soon as the crisis is over”.

“The apps’ installation should be consent-based, while providing users with complete and clear information on intended use and processing,” is another key recommendation. 

The toolbox leans towards suggesting a decentralized approach, in line with earlier Commission missives, with a push for: “Safeguards to ensure the storing of proximity data on the device and data encryption.”
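As a minimal sketch of what that safeguard could look like in practice, an app might encrypt its proximity log before it ever touches disk. The snippet below uses the third-party Python cryptography package and invented field names; the toolbox itself doesn’t prescribe any particular library or data format:

```python
# Minimal sketch of the toolbox's "store proximity data on the device, encrypted"
# safeguard, using the third-party `cryptography` package; field names are invented.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real app this would live in the OS keystore
box = Fernet(key)

proximity_log = [
    {"rolling_id": "c3f1a9", "seen_at": 1586870000, "rssi": -60},
]

# Encrypt before writing to disk, so the log is unreadable without the key.
ciphertext = box.encrypt(json.dumps(proximity_log).encode())
with open("proximity.log.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only when the user chooses to share it (or the app checks for matches).
restored = json.loads(box.decrypt(ciphertext))
```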

Though the document also includes some discussion of alternative centralized models which involve uploading arbitrary identifiers to a backend server held by public health authorities. 

“Users cannot be directly identified through these data. Only the arbitrary identifiers generated by the app are stored on the server. The advantage is that the data stored in the server can be anonymised by aggregation and further used by public authorities as a source of important aggregated information on the intensity of contacts in the population, on the effectiveness of the app in tracing and alerting contacts and on the aggregated number of people that could potentially develop symptoms,” it writes.

“None of the two options [decentralized vs centralized] includes storing of unnecessary personal information,” it adds, leaving the door open to states that might want their public health authorities to be responsible for centralized data processing.

However the Commission draws a clear distinction between centralized approaches that use arbitrary identifiers and those that store directly-identifiable data on every user — with the latter definitely not recommended.

They would have a “major disadvantage”, per the toolbox, because they “would not keep personal data processing to the absolute minimum, and so people may be less willing to install and use the app”.

“Centralised storage of mobile phone numbers could also create risks of data breaches and cyberattacks,” the Commission further warns.

Discussing cross-border interoperability requirements, the toolbox highlights the necessity for a grab-bag of EU contacts tracing apps to be interoperable, in order to successfully break cross-border transmission chains, which requires national health authorities to be technically able to exchange available information about individuals infected with and/or exposed to COVID-19.

“Tracing and warning apps should therefore follow common EU interoperability protocols so that the previous functionalities can be performed, and particularly safeguarding rights to privacy and data protection, regardless of where a device is in the EU,” it suggests.

On preventing the spread of harmful or unlawful apps, the document suggests Member States consider setting up a national system for the evaluation, accreditation and endorsement of national apps, perhaps based on a common set of criteria (that would need to be defined).

“A close cooperation between health and digital authorities should be sought whenever possible for the evaluation/endorsement of the apps,” it writes. 

The Commission also says “close cooperation with app stores will be needed to promote national apps and promote uptake while delisting harmful apps” — putting Apple and Google squarely in the frame.

Earlier this week the pair announced their own collaboration on coronavirus contacts tracing — unveiling a plan to offer an API and, later, opt-in system-level contacts tracing, based on a decentralized tracking architecture with ephemeral IDs processed locally on devices, rather than being uploaded and held on a central server.

Given the dominance of the two tech giants their decision to collaborate on a decentralized system may effectively deprive national health authorities of the option to gain buy in for systems that would give those publicly funded bodies access to anonymized and aggregated data for coronavirus modelling and/or tracking purposes. Which should, in the middle of a pandemic, give more than a little pause for thought.

A note in the toolbox mentions Apple and Google — with the Commission writing that: “By the end of April 2020, Member States with the Commission will seek clarifications on the solution proposed by Google and Apple with regard to contact tracing functionality on Android and iOS in order to ensure that their initiative is compatible with the EU common approach.”

#android, #api, #apple, #apps, #articles, #contact-tracing, #digital-rights, #europe, #european-commission, #european-union, #food-safety, #google, #health, #human-rights, #identity-management, #law, #mobile-app, #oxford-university, #privacy, #singapore, #smart-phone, #terms-of-service, #thierry-breton

Apple and Google are launching a joint COVID-19 tracing tool for iOS and Android

Apple and Google’s engineering teams have banded together to create a decentralized contact tracing tool that will help individuals determine whether they have been exposed to someone with COVID-19.

Contact tracing is a useful tool that helps public health authorities track the spread of the disease and inform the potentially exposed so that they can get tested. It does this by identifying and “following up with” people who have come into contact with a COVID-19-affected person.

The first phase of the project is an API that public health agencies can integrate into their own apps. The next phase is a system-level contact tracing system that will work across iOS and Android devices on an opt-in basis.

The system uses on-board radios on your device to transmit an anonymous ID over short ranges — using Bluetooth beaconing. Servers relay your last 14 days of rotating IDs to other devices, which search for a match. A match is determined based on a threshold of time spent and distance maintained between two devices.
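The companies haven’t published implementation code in this announcement, so the following is only a rough Python sketch of the matching step described above: comparing locally observed identifiers against the published identifiers of diagnosed users, then applying time and distance thresholds. The identifier format, threshold values and function names are all assumptions for illustration, not Apple and Google’s actual exposure notification code:

```python
# Rough, illustrative sketch of on-device exposure matching; not Apple/Google code.
from dataclasses import dataclass

@dataclass
class Sighting:
    rolling_id: str        # anonymous identifier received over Bluetooth
    minutes_nearby: float
    est_distance_m: float  # estimated from signal strength

MIN_MINUTES = 10           # example thresholds; real values would be set by health authorities
MAX_DISTANCE_M = 2.0

def exposure_matches(local_sightings, positive_ids):
    """Return the local sightings that match a diagnosed user's published identifiers."""
    positive = set(positive_ids)
    return [
        s for s in local_sightings
        if s.rolling_id in positive
        and s.minutes_nearby >= MIN_MINUTES
        and s.est_distance_m <= MAX_DISTANCE_M
    ]

sightings = [Sighting("id-123", 12, 1.5), Sighting("id-456", 2, 0.5)]
print(exposure_matches(sightings, ["id-123", "id-789"]))  # only the first qualifies
```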

If a match is found with another user that has told the system that they have tested positive, you are notified and can take steps to be tested and to self-quarantine.

Contact tracing is a well-known and debated tool, but one that has been adopted by health authorities and universities that are working on multiple projects like this. One such example is MIT’s efforts to use Bluetooth to create a privacy-conscious contact tracing tool that was inspired by Apple’s Find My system. The companies say that those organizations identified technical hurdles that they were unable to overcome and asked for help.

Our own Jon Evans laid out the need for a broader tracing apparatus a week ago, along with the notion that you’d need buy-in from Apple and Google to make it happen.

The project was started two weeks ago by engineers from both companies. One of the reasons the companies got involved is that there is poor interoperability between systems on various manufacturers’ devices. With contact tracing, every time you fragment a system like this between multiple apps, you limit its effectiveness greatly. You need a massive amount of adoption in one system for contact tracing to work well.

At the same time, you run into technical problems like Bluetooth power suck, privacy concerns about centralized data collection and the sheer effort it takes to get enough people to install the apps to be effective.

Two-phase plan

To fix these issues, Google and Apple teamed up to create an interoperable API that should allow the largest number of users to adopt it, if they choose.

The first phase, a private proximity contact detection API, will be released in mid-May by both Apple and Google for use in apps on iOS and Android. In a briefing today, Apple and Google said that the API is a simple one and should be relatively easy for existing or planned apps to integrate. The API would allow apps to ask users to opt-in to contact tracing (the entire system is opt-in only), allowing their device to broadcast the anonymous, rotating identifier to devices that the person “meets.” This would allow tracing to be done to alert those who may come in contact with COVID-19 to take further steps.

The value of contact tracing should extend beyond the initial period of pandemic and into the time when self-isolation and quarantine restrictions are eased.

The second phase of the project is to bring even more efficiency and adoption to the tracing tool by bringing it to the operating system level. There would be no need to download an app, users would just opt-in to the tracing right on their device. The public health apps would continue to be supported, but this would address a much larger spread of users.

This phase, which is slated for the coming months, would give the contact tracing tool the ability to work at a deeper level, improving battery life, effectiveness and privacy. If it’s handled by the system, then every improvement in those areas — including cryptographic advances — would benefit the tool directly.

How it works

A quick example of how a system like this might work:

  1. Two people happen to be near each other for a period of time, let’s say 10 minutes. Their phones exchange the anonymous identifiers (which change every 15 minutes).
  2. Later on, one of those people is diagnosed with COVID-19 and enters it into the system via a Public Health Authority app that has integrated the API.
  3. With an additional consent, the diagnosed user allows his anonymous identifiers for the last 14 days to be transmitted to the system.
  4. The person they came into contact with has a Public Health app on their phone that downloads the broadcast keys of positive tests and alerts them to a match.
  5. The app gives them more information on how to proceed from there.

Privacy and transparency

Both Apple and Google say that privacy and transparency are paramount in a public health effort like this one and say they are committed to shipping a system that does not compromise personal privacy in any way. This is a factor that has been raised by the ACLU, which has cautioned that any use of cell phone tracking to track the spread of COVID-19 would need aggressive privacy controls.

There is zero use of location data, which includes users who report positive. This tool is not about where affected people are but instead whether they have been around other people.

The system works by assigning a random, rotating identifier to a person’s phone and transmitting it via Bluetooth to nearby devices. That identifier, which rotates every 15 minutes and contains no personally identifiable information, will pass through a simple relay server that can be run by health organizations worldwide.
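As a purely illustrative sketch of that rotation scheme (not the companies’ published key derivation), a device could derive each 15-minute identifier from a random local secret, so the broadcast value changes regularly while revealing nothing personal:

```python
# Illustrative sketch of deriving short-lived, non-identifying IDs from a random
# device key, matching the 15-minute rotation described above; this is NOT the
# published Apple/Google key schedule.
import hmac
import hashlib
import os
import time

device_key = os.urandom(32)   # random secret generated on the device, never PII

def rolling_identifier(key: bytes, now: float, period_s: int = 15 * 60) -> str:
    """Derive the identifier for the current 15-minute window."""
    window = int(now // period_s)
    return hmac.new(key, window.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:16]

# The broadcast value stays stable within a window but changes between windows.
print(rolling_identifier(device_key, time.time()))
print(rolling_identifier(device_key, time.time() + 15 * 60))  # new window, new ID
```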

Even then, the list of identifiers you’ve been in contact with doesn’t leave your phone unless you choose to share it. Users that test positive will not be identified to other users, Apple or Google. Google and Apple can disable the broadcast system entirely when it is no longer needed.

All identification of matches is done on your device, allowing you to see — within a 14-day window — whether your device has been near the device of a person who has self-identified as having tested positive for COVID-19.

The entire system is opt-in. Users will know upfront that they are participating, whether in app or at a system level. Public health authorities are involved in notifying users that they have been in contact with an affected person.

The American Civil Liberties Union appears to be cautiously optimistic.

“No contact tracing app can be fully effective until there is widespread, free, and quick testing and equitable access to healthcare. These systems also can’t be effective if people don’t trust them,” said ACLU’s surveillance and cybersecurity counsel Jennifer Granick. “To their credit, Apple and Google have announced an approach that appears to mitigate the worst privacy and centralization risks, but there is still room for improvement. We will remain vigilant moving forward to make sure any contact tracing app remains voluntary and decentralized, and used only for public health purposes and only for the duration of this pandemic.”

Apple and Google say that they will openly publish information about the work that they have done for others to analyze in order to bring the most transparency possible to the privacy and security aspects of the project.

“All of us at Apple and Google believe there has never been a more important moment to work together to solve one of the world’s most pressing problems,” the companies said in a statement. “Through close cooperation and collaboration with developers, governments and public health providers, we hope to harness the power of technology to help countries around the world slow the spread of COVID-19 and accelerate the return of everyday life.”

You can find more information about the contact tracing API on Google’s post here and on Apple’s page here including specifications.

Updated with comment from the ACLU.

#android, #api, #apple, #bluetooth, #computing, #contact-tracing, #coronavirus, #covid19, #google, #health, #indoor-positioning-system, #mit, #operating-system, #operating-systems, #smartphones, #software, #tc, #terms-of-service