Facebook Dating launches in Europe after 9-month+ delay over privacy concerns

Facebook’s dating bolt-on to its eponymous social networking service has finally launched in Europe, more than nine months after an earlier launch plan was derailed at the last minute over privacy concerns.

From today, European Facebook users in Austria, Belgium, Bulgaria, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Croatia, Hungary, Ireland, Italy, Lithuania, Luxembourg, Latvia, Malta, Netherlands, Poland, Portugal, Romania, Sweden, Slovenia, Slovakia, Iceland, Liechtenstein, Norway, Spain, Switzerland and the UK can opt into Facebook Dating by creating a profile at facebook.com/dating.

Among the dating product’s main features are the ability to share Stories on your profile; a Secret Crush feature that lets you select up to nine of your Facebook friends or Instagram followers who you’d like to date (without them knowing unless they also add you — triggering a match notification); the ability to see people with similar interests if you add your Facebook Events and Groups to your Dating profile; and a video chat feature called Virtual Dates.

Image credit: Facebook

Of course if you opt in to Facebook Dating you’re going to be plugging even more of your personal data into Facebook’s people profiling machine. And it was concerns about how the dating product would be processing European users’ information that led to a regulatory intervention by the company’s lead data regulator in the EU, the Irish Data Protection Commission (DPC).

Back in February, Facebook agreed to postpone the regional launch of Facebook Dating after the DPC’s agents paid a visit to its Dublin office — the regulator saying Facebook had not provided it with enough advance warning of the product launch, nor adequate documentation about how it would work.

More than nine months later the regulator seems satisfied it now understands how Facebook Dating is processing people’s personal data — although it also says it will be monitoring the EU launch.

Additionally, the DPC says Facebook has made some changes to the product in light of concerns it raised (full details below).

Deputy commissioner, Graham Doyle, told TechCrunch: “As you will recall, the DPC became aware of Facebook’s plans to launch Facebook Dating a number of days prior to its planned launch in February of this year. Further to the action taken by the DPC at the time (which included an on-site inspection and a number of queries and concerns being put to Facebook), Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature. Facebook has also provided details of changes that they have made to the product to take account of the issues raised by the DPC. We will continue to monitor the product as it launches across the EU this week.”

“Much earlier engagement on such projects is imperative going forward,” he added.

Since the launch of Facebook’s dating product in 20 countries around the world — including the US and a number of markets in Asia and LatAm — the company says more than 1.5 billion matches have been “created”.

In a press release about the European launch, Facebook writes that it has “built Dating with safety, security and privacy at the forefront”, adding: “We worked with experts in these areas to provide easy access to safety tips and build protections into Facebook Dating, including the ability to report and block anyone, as well as stopping people from sending photos, links, payments or videos in messages.”

It also links to an update about Facebook Dating’s privacy which emphasizes the product is an “opt-in experience”. This document includes a section explaining how use of the product impacts Facebook’s data collection and the ads users see across its suite of products.

“Facebook Dating may suggest matches for you based on your activities, preferences and information in Dating and other Facebook Products,” it writes. “We may also use your activity in Dating to personalize your experience, including ads you may see, across Facebook Products. The exception to this is your religious views and the gender(s) you are interested in dating, which will not be used to personalize your experience on other Facebook Products.”

One key privacy-related change flowing from the DPC intervention looks to be that Facebook has committed to excluding the use of Dating users’ religious and sexual orientation information for ad targeting purposes.

Under EU law this type of personal information is classed as ‘special category’ data — and processing it requires meeting a higher bar of explicit consent from the user. (And Facebook probably didn’t want to harsh Dating users’ vibe with pop-ups asking them to agree to ads targeting them for being gay or Christian, for example.)

Asked about the product changes, the DPC confirmed a number of changes related to special category data, along with some additional clarifications.

Here’s its full list of “changes and clarifications” obtained from Facebook:

  • Changes to the user interface around a user’s selection of religious belief. Under the original proposal, the “prefer not to say” option was buried in the choices;
  • Updated sign-up flow within the Dating feature to bring to the user’s attention that Dating is a Facebook product and that it is covered by FB’s terms of service and data policy, as particularised by the Supplemental Facebook Dating Terms;
  • Clarification on the uses of special category data (no advertising using special category data, and special category data collected in the Dating feature will not be used by the core FB service);
  • Clarification that all other information will be used by Facebook in the normal manner across the Facebook platform, in accordance with the FB terms of service;
  • Clarification on the processing of location data (location services have to be turned on during onboarding for safety and verification purposes but can then be turned off. Dating does not automatically update users’ Dating location in their Dating profile, even if the user chooses to have their location turned on for the wider Facebook service. Dating location does not use the user’s exact location, and is shown at a city level on the user’s Dating profile).

#data-protection, #dpc, #europe, #facebook-dating, #privacy, #social


Mine raises $9.5M to help people take control of their personal data

TechCrunch readers probably know that privacy regulations like Europe’s GDPR and California’s CCPA give them additional rights around their personal data — like the ability to request that companies delete that data. But how many of you have actually exercised that right?

An Israeli startup called Mine is working to make that process much simpler, and it announced this morning that it has raised $9.5 million in Series A funding.

The startup was founded by CEO Gal Ringel, CTO Gal Golan and CPO Kobi Nissan. Ringel and Golan are both veterans of Unit 8200, the cybersecurity unit of the Israeli Defense Forces.

Ringel explained that Mine scans users’ inboxes to help them understand who has access to their personal data.

“Every time that you do an online interaction, such as you sign up for a service or purchase a flight ticket, those companies, those services leave some clues or traces within your inbox,” he said.


Image Credits: Mine

Mine then cross-references that information with the data collection and privacy policies of the relevant companies, determining what data they’re likely to possess. It calculates a risk score for each company — and if the user decides they want a company to delete their data, Mine can send an automated email request from the user’s own account.
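To make that flow concrete, here is a minimal, hypothetical Python sketch of the scan-score-request loop it describes. The company names, category weights and email template are invented for illustration; this is not Mine’s actual code or API.

```python
# Illustrative sketch only: hypothetical names and weights, not Mine's implementation.
from dataclasses import dataclass

@dataclass
class CompanyTrace:
    name: str             # company inferred from clues in the user's inbox
    data_categories: set  # data the company likely holds, per its privacy policy

# Hypothetical weights for how sensitive each data category is.
CATEGORY_WEIGHTS = {"email": 1, "location": 3, "payment": 4, "id_document": 5}

def risk_score(trace: CompanyTrace) -> int:
    """Crude risk score: sum of weights for the categories the company likely holds."""
    return sum(CATEGORY_WEIGHTS.get(cat, 2) for cat in trace.data_categories)

def deletion_request(trace: CompanyTrace, user_email: str) -> str:
    """Draft a right-to-erasure email, to be sent from the user's own account."""
    return (
        f"To: privacy@{trace.name.lower()}.example\n"
        f"From: {user_email}\n"
        f"Subject: Request to delete my personal data\n\n"
        f"Please delete all personal data associated with {user_email}, "
        f"as provided for under GDPR Article 17 / the CCPA."
    )

trace = CompanyTrace("ExampleAir", {"email", "payment", "location"})
print(risk_score(trace))
print(deletion_request(trace, "user@example.com"))
```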

Ringel argued that this is a very different approach to data privacy and data ownership. Instead of building “fences” around your data, Mine makes you more comfortable sharing that data, knowing that you can take control when necessary.

“The product gives [consumers] the freedom to use the internet feeling more secure, because they know they can exercise their right to be forgotten,” he said.

Ringel noted that the average Mine user has a personal data footprint across 350 companies — and the number is more like 550 in the United States. I ran a Mine audit for myself and, within a few minutes, found that I’m pretty close to the U.S. average. (Ringel said the number doesn’t include email newsletters.)

Mine launched in Europe earlier this year and says it has already been used by more than 100,000 people to send 1.3 million data deletion requests.

The legal force behind those requests will differ depending on where you live and which company you are emailing, but Ringel said that most companies will comply even when they’re not legally required to do so, because it’s part of creating a better privacy experience that helps them “earn trust and credibility from consumers.” Plus, “Most of them understand that if you want to go, they’ve already lost you.”

The startup’s core service is available for free. Ringel said the company will make money with premium consumer offerings, like the ability to offload the entire conversation with a company when you want your data deleted. It will also work with businesses to create a standard interface around privacy and data deletion.

As for whether giving Mine access to your inbox creates new privacy risks, Ringel said that the startup collects the “bare minimum” of data — usually just your email address and your full name. Otherwise, it knows “the type of data, but not the actual data” that other companies have obtained.

“We would never share or sell your data,” he added.

The Series A was led by Google’s AI-focused venture fund Gradient Ventures, with participation from e.ventures, MassMutual Ventures, as well as existing investors Battery Ventures and Saban Ventures. Among other things, Ringel said the money will fund Mine’s launch in the United States.

#funding, #fundings-exits, #gradient-ventures, #mine, #privacy, #startups, #tc


EU parliament backs tighter rules on behavioural ads

The EU parliament has backed a call for tighter regulations on behavioral ads (aka microtargeting) in favor of less intrusive, contextual forms of advertising — urging Commission lawmakers to also assess further regulatory options, including looking at a phase-out leading to a full ban.

MEPs also want Internet users to be able to opt out of algorithmic content curation altogether.

The legislative initiative, introduced by the Legal Affairs committee, sets the parliament on a collision course with the business model of tech giants Facebook and Google.

Parliamentarians also backed a call for the Commission to look at options for setting up a European entity to monitor and impose fines to ensure compliance with rebooted digital rules — voicing support for a single, pan-EU Internet regulator to keep platforms in line.

The votes by the elected representatives of EU citizens are non-binding but send a clear signal to Commission lawmakers who are busy working on an update to existing ecommerce rules, via the forthcoming Digital Services Act (DSA) package — due to be introduced next month.

The DSA is intended to rework the regional rule book for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation. And while only the Commission can propose laws, the DSA will need to gain the backing of the EU parliament (and the Council) if it is to go the legislative distance so the executive needs to take note of MEPs’ views.

Battle over adtech

The mass surveillance of Internet users for ad targeting — a space that’s dominated by Google and Facebook — looks set to be a major battleground as Commission lawmakers draw up the DSA package.

Last month Facebook’s policy VP Nick Clegg, a former MEP himself, urged regional lawmakers to look favorably on a business model he couched as “personalized advertising” — arguing that behavioral ad targeting allows small businesses to level the playing field with better resourced rivals.

However the legality of the model remains under attack on multiple fronts in the EU.

Scores of complaints have been lodged with EU data protection agencies over the mass exploitation of Internet users’ data by the adtech industry since the General Data Protection Regulation (GDPR) began being applied — with complaints raising questions over the lawfulness of the processing and the standard of consent claimed.

Just last week, a preliminary report by Belgium’s data watchdog found that a flagship tool for gathering Internet users’ consent to ad tracking that’s operated by the IAB Europe fails to meet the required GDPR standard.

The use of Internet users’ personal data in the high velocity information exchange at the core of programmatic advertising’s real-time bidding (RTB) process is also being probed by Ireland’s DPC, following a series of complaints. The UK’s ICO has warned for well over a year of systemic problems with RTB too.

Meanwhile some of the oldest unresolved GDPR complaints pertain to so-called ‘forced consent’ by Facebook  — given GDPR’s requirement that for consent to be lawful it must be freely given. Yet Facebook does not offer any opt-out from behavioral targeting; the ‘choice’ it offers is to use its service or not use it.

Google has also faced complaints over this issue. And last year France’s CNIL fined it $57M for not providing sufficiently clear info to Android users over how it was processing their data. But the key question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC almost 2.5 years after the original GDPR complaint was filed — meaning the clock is ticking on a decision.

And still there’s more: Facebook’s processing of EU users’ personal data in the US also faces huge legal uncertainty because of the clash between fundamental EU privacy rights and US surveillance law.

A major ruling (aka Schrems II) by Europe’s top court this summer has made it clear EU data protection agencies have an obligation to step in and suspend transfers of personal data to third countries when there’s a risk the information is not adequately protected. This led to Ireland’s DPC sending Facebook a preliminary order to suspend EU data transfers.

Facebook has used the Irish courts to get a stay on that while it seeks a judicial review of the regulator’s process — but the overarching legal uncertainty remains. (Not least because the complainant, angry that data continues to flow, has also been granted a judicial review of the DPC’s handling of his original complaint.)

There has also been an uptick in EU class actions targeting privacy rights, as the GDPR provides a framework that litigation funders feel they can profit off of.

All this legal activity focused on EU citizens’ privacy and data rights puts pressure on Commission lawmakers not to be seen to row back standards as they shape the DSA package — with the parliament now firing its own warning shot calling for tighter restrictions on intrusive adtech.

It’s not the first such call from MEPs, either. This summer the parliament urged the Commission to “ban platforms from displaying micro-targeted advertisements and to increase transparency for users”. And while they’ve now stepped away from calling for an immediate outright ban, yesterday’s votes were preceded by more detailed discussion — as parliamentarians sought to debate in earnest with the aim of influencing what ends up in the DSA package.

Ahead of the committee votes, online ad standards body, the IAB Europe, also sought to exert influence — putting out a statement urging EU lawmakers not to increase the regulatory load on online content and services.

“A facile and indiscriminate condemnation of ‘tracking’ ignores the fact that local, generalist press whose investigative reporting holds power to account in a democratic society, cannot be funded with contextual ads alone, since these publishers do not have the resources to invest in lifestyle and other features that lend themselves to  contextual targeting,” it suggested.

“Instead of adding redundant or contradictory provisions to the current rules, IAB Europe urges EU policymakers and regulators to work with the industry and support existing legal compliance standards such as the IAB Europe Transparency & Consent Framework [TCF], that can even help regulators with enforcement. The DSA should rather tackle clear problems meriting attention in the online space,” it added in the statement last month.

However, as we reported last week, the IAB Europe’s TCF has been found not to comply with existing EU standards following an investigation by the Belgian DPA’s inspectorate service — suggesting the tool offers quite the opposite of ‘model’ GDPR compliance. (Although a final decision by the DPA is pending.)

The EU parliament’s Civil Liberties committee also put forward a non-legislative resolution yesterday, focused on fundamental rights — including support for privacy and data protection — that gained MEPs’ backing.

Its resolution asserted that microtargeting based on people’s vulnerabilities is problematic, as well as raising concerns over the tech’s role as a conduit in the spreading of hate speech and disinformation.

The committee got backing for a call for greater transparency on the monetisation policies of online platforms.

‘Know your business customer’

Other measures MEPs supported in the series of votes yesterday included a call to set up a binding ‘notice-and-action’ mechanism so Internet users can notify online intermediaries about potentially illegal online content or activities — with the possibility of redress via a national dispute settlement body.

MEPs rejected the use of upload filters or any form of ex-ante content control for harmful or illegal content, saying the final decision on whether content is legal or not should be taken by an independent judiciary, not by private undertakings.

They also backed dealing with harmful content, hate speech and disinformation via enhanced transparency obligations on platforms and by helping citizens acquire media and digital literacy so they’re better able to navigate such content.

A push by the parliament’s Internal Market Committee for a ‘Know Your Business Customer’ principle to be introduced — to combat the sale of illegal and unsafe products online — also gained MEPs’ backing, with parliamentarians supporting measures to make platforms and marketplaces do a better job of detecting and taking down false claims and tackling rogue traders.

Parliamentarians also supported the introduction of specific rules to prevent (not merely remedy) market failures caused by dominant platform players as a means of opening up markets to new entrants — signalling support for the Commission’s plan to introduce ex ante rules for ‘gatekeeper’ platforms.

Liability for ‘high risk’ AI

The parliament also backed a legislative initiative recommending rules for AI — urging Commission lawmakers to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU including for software, algorithms and data.

The Commission has made it clear it’s working on such a framework, setting out a white paper this year — with a full proposal expected in 2021.

MEPs backed a requirement that ‘high-risk’ AI technologies, such as those with self-learning capacities, be designed to allow for human oversight at any time — and called for a future-oriented civil liability framework that would make those operating such tech strictly liable for any resulting damage.

The parliament agreed such rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity, property, or causes significant immaterial harm if it results in “verifiable economic loss”.

#adtech, #advertising-tech, #ai, #artificial-intelligence, #behavioral-advertising, #digital-services-act, #eu-parliament, #europe, #facebook, #gdpr, #microtargeting, #online-regulation, #policy, #privacy, #tc


It’s Google’s World. We Just Live In It.

Googling something was all we once did with Google. Now we spend hours a day using its maps, videos, security cameras, email, smartphones and more.

#antitrust-laws-and-competition-issues, #computers-and-the-internet, #content-type-service, #google-inc, #home-automation-and-smart-homes, #maps, #mobile-applications, #online-advertising, #privacy, #search-engines, #video-recordings-downloads-and-streaming


I Used a Sperm Donor. Should I Introduce My Daughter to Her Half Siblings?

The magazine’s Ethicist columnist on whether to make your donor’s numerous offspring part of your child’s family — and more.

#artificial-insemination, #children-and-childhood, #privacy, #single-mothers


EU switches on cross-border interoperability for first batch of COVID-19 contacts tracing apps

The European Union has switched on cross-border interoperability for a first batch of COVID-19 contacts tracing apps that use Bluetooth proximity to calculate the exposure risk of smartphone users after a pilot of the system last month.

National apps whose backends are now linked through the gateway service are Germany’s Corona-Warn-App, the Republic of Ireland’s COVID Tracker and Italy’s Immuni app.

This means a user of one of those apps who travels to any of the other countries can expect their national app to send relevant exposure notifications just as it would if they had not travelled — without the need to download any additional software.

Collectively, the three national COVID-19 apps have been downloaded by around 30 million people, which the EU said corresponds to two-thirds of such downloads in the region.

Image credit: EU Publications Office

Other national apps are expected to gain interoperability as they are added to the service in the coming weeks — with at least 18 more compatible national apps identified at this stage.

A second batch of national apps is expected to be added next week after a period of testing — namely: Czechia’s eRouška, Denmark’s smitte stop, Latvia’s Apturi COVID and Spain’s Radar Covid (although the latter still doesn’t have full coverage in Spain with the Catalonia region yet to integrate it with its regional healthcare system). Further compatible apps are slated to be added in November.

The gateway has been designed to work, in the first instance, with official coronavirus apps that have a decentralized architecture — meaning any that use a centralized architecture, such as France’s StopCovid app, aren’t currently compatible.

The UK’s collection of apps, meanwhile — for England & Wales, Scotland and Northern Ireland — are unlikely to get plugged in, despite having a technically compatible app architecture, as the country is due to exit the trading bloc at the end of this year. (So interoperability would require a separate agreement between the UK and the EU.)

“About two third of EU Member States have developed compatible tracing and warning apps, and the gateway is open to all of them, once they are ready to connect. The connection will gradually take place during October and November, however apps can also connect at a later stage if national authorities wish so. An ‘onboarding protocol’ has been developed, setting out the necessary steps,” the Commission notes in a Q&A.

The cross-border system for the EU’s apps works via the use of a gateway server, developed and set up by T-Systems and SAP and operated from the Commission’s data centre in Luxembourg, which receives and passes on arbitrary identifiers between national apps.

“No other information than arbitrary keys, generated by the apps, will be handled by the gateway,” the EU notes in a press release. “The information is pseudonymised, encrypted, kept to the minimum, and only stored as long as necessary to trace back infections. It does not allow the identification of individual persons, nor to track location or movement of devices.”
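As an illustration of the relay pattern described above — a gateway that only handles opaque, app-generated keys and retains them only briefly — here is a minimal Python sketch. The class and field names are assumptions for illustration; this is not the actual T-Systems/SAP gateway implementation.

```python
# Minimal sketch of the key-relay idea, assuming hypothetical names; not the real gateway.
import time

class FederationGateway:
    """Receives batches of arbitrary diagnosis keys from one national backend
    and passes them on to the others, keeping no personal information."""

    def __init__(self, retention_seconds=14 * 24 * 3600):
        self.batches = []               # (origin_country, keys, received_at)
        self.retention = retention_seconds

    def upload(self, origin_country: str, keys: list) -> None:
        # Only opaque, app-generated identifiers are stored; nothing identifies a person.
        self.batches.append((origin_country, keys, time.time()))

    def download(self, requesting_country: str) -> list:
        # Each national backend fetches the keys uploaded by the other countries.
        self._expire_old()
        return [k for origin, keys, _ in self.batches
                if origin != requesting_country for k in keys]

    def _expire_old(self) -> None:
        # Keys are only kept as long as needed to trace back infections.
        cutoff = time.time() - self.retention
        self.batches = [b for b in self.batches if b[2] >= cutoff]

gateway = FederationGateway()
gateway.upload("DE", [b"key-1", b"key-2"])
print(gateway.download("IE"))  # Irish backend receives the German keys
```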

Getting a cross-border system up and running so swiftly across a patchwork of national COVID-19 apps is an achievement for the EU, even as there are ongoing questions about the utility of Bluetooth-based coronavirus exposure notifications in the fight against the spread of the novel coronavirus — with much of Europe now experiencing a second wave of the pandemic.

However EU commissioners suggested today that such apps can be a useful complement to other measures, such as manual contact tracing.

Commenting in a statement, Stella Kyriakides, EU commissioner for health and food safety, said: “Coronavirus tracing and warning apps can effectively complement other measures like increased testing and manual contact tracing. With cases on the rise again, they can play an important role to help us break the transmission chains. When working across borders these apps are even more powerful tools. Our gateway system going live today is an important step in our work, and I would call on citizens to make use of such apps, to help protecting each other.”

“Free movement is an integral part of the Single Market — the gateway is facilitating this while helping save lives,” added Thierry Breton, commissioner for the internal market.

#apps, #coronavirus-contacts-tracing, #covid-19, #eu, #europe, #exposure-notifications, #health, #interoperability, #privacy


Pimloc gets $1.8M for its AI-based visual search and redaction tool

UK-based Pimloc has closed a £1.4 million (~$1.8M) seed funding round led by Amadeus Capital Partners. Existing investor Speedinvest and other unnamed shareholders also participated in the round.

The 2016-founded computer vision startup launched an AI-powered photo classifier service called Pholio in 2017 — pitching the service as a way for smartphone users to reclaim agency over their digital memories without having to hand their data over to cloud giants like Google.

It has since pivoted to position Pholio as a “specialist search and discovery platform” for large image and video collections and live streams (such as those owned by art galleries or broadcasters) — and also launched a second tool powered by its deep learning platform. This product, Secure Redact, offers privacy-focused content moderation tools — enabling its users to find and redact personal data in visual content.

An example use-case it gives is for law enforcement to anonymize bodycam footage so it can be repurposed for training videos or prepared for submitting as evidence.

“Pimloc has been working with diverse image and video content for several years, supporting businesses with a host of classification, moderation and data protection challenges (image libraries, art galleries, broadcasters and CCTV providers),” CEO Simon Randall tells TechCrunch.

“Through our work on the visual privacy side we identified a critical gap in the market for services that allow businesses and governments to manage visual data protection at scale on security footage. Pimloc has worked in this area for a couple of years building capability and product, as a result Pimloc has now focussed the business solely around this mission.”

Secure Redact has two components: A first (automated) step that detects personal data (e.g. faces, heads, bodies) within video content. On top of that is what Randall calls a layer of “intelligent tools” — letting users quickly review and edit results.

“All detections and tracks are auditable and editable by users prior to accepting and redacting,” he explains, adding: “Personal data extends wider than just faces into other objects and scene content including ID cards, tattoos, phone screens (body worn cameras have a habit of picking up messages on the wearer’s phone screen as they are typing, or sensitive notes on their laptop or notebook).”
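To give a feel for the two-step detect-then-redact idea, here is a toy Python sketch using OpenCV’s stock face detector and a blur. The file names are hypothetical, and this is only an illustration of the pattern — not Pimloc’s Secure Redact code, which handles many more object types and runs on video at scale.

```python
# Toy detect-review-redact loop using OpenCV face detection; illustrative only.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Automated step: propose regions that look like personal data (here, faces)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [tuple(box) for box in cascade.detectMultiScale(gray, 1.1, 5)]

def redact(frame, boxes):
    """Apply the accepted detections by blurring each region."""
    for (x, y, w, h) in boxes:
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame

frame = cv2.imread("bodycam_frame.jpg")           # hypothetical input frame
detections = detect_faces(frame)                  # step 1: automated detection
reviewed = [b for b in detections if b[2] > 20]   # step 2: human review/edit stands in here
cv2.imwrite("redacted_frame.jpg", redact(frame, reviewed))
```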

One specific user of the redaction tool he mentions is the University of Bristol. There a research group, led by Dr Dima Damen, an associate professor in computer vision, is participating in an international consortium of 12 universities which is aiming to amass the largest dataset on egocentric vision — and needs to be able to anonymise the video data set before making it available for academic/open source use.

On the legal side, Randall says Pimloc offers a range of data processing models — thereby catering to differences in how/where data can be processed. “Some customers are happy for Pimloc to act as data processor and use the Secure Redact SaaS solution — they manage their account, they upload footage, and can review/edit/update detections prior to redaction and usage. Some customers run the Secure Redact system on their servers where they are both data controller and processor,” he notes.

“We have over 100 users signed up for the SaaS service covering mobility, entertainment, insurance, health and security. We are also in the process of setting up a host of on-premise implementations,” he adds.

Asked which sectors Pimloc sees driving the most growth for its platform in the coming years, he lists the following: smart cities/mobility platforms (with safety/analytics demand coming from the likes of councils, retailers, AVs); the insurance industry, which he notes is “capturing and using an increasing amount of visual data for claims and risk monitoring” and thus “looking at responsible systems for data management and processing”; video/telehealth, with traditional consultations moving into video and driving demand for visual diagnosis; and law enforcement, where security goals need to be supported by “visual privacy designed in by default” (at least where forces are subject to European data protection law).

On the competitive front, he notes that startups are increasingly focusing on specialist application areas for AI — arguing they have an opportunity to build compelling end-to-end propositions which are harder for larger tech companies to focus on.

For Pimloc specifically, he argues it has an edge in its particular security-focused niche — given “deep expertise” and specific domain experience.

“There are low barriers to entry to create a low-quality product but very high technical barriers to create a service that is good enough to use at scale with real ‘in the wild’ footage,” he argues, adding: “The generalist services of the larger tech players do not match up with the domain-specific provisions of Pimloc/Secure Redact. Video security footage is a difficult domain for AI; systems trained on lifestyle/celebrity or other general data sets perform poorly on real security footage.”

Commenting on the seed funding in a statement, Alex van Someren, MD of Amadeus Capital Partners, said: “There is a critical need for privacy by design and large-scale solutions, as video grows as a data source for mobility, insurance, commerce and smart cities, while our reliance on video for remote working increases. We are very excited about the potential of Pimloc’s products to meet this challenge.”

“Consumers around the world are rightfully concerned with how enterprises are handling the growing volume of visual data being captured 24/7. We believe Pimloc has developed an industry leading approach to visual security and privacy that will allow businesses and governments to manage the usage of visual data whilst protecting consumers. We are excited to support their vision as they expand into the wider Enterprise and SaaS markets,” added Rick Hao, principal at Speedinvest, in another supporting statement.

#ai, #amadeus-capital-partners, #artificial-intelligence, #computer-vision, #pimloc, #privacy, #recent-funding, #startups, #visual-search


Instagram’s handling of kids’ data is now being probed in the EU

Facebook’s lead data regulator in Europe has opened another two probes into its business empire — both focused on how the Instagram platform processes children’s information.

The action by Ireland’s Data Protection Commission (DPC), reported earlier by the Telegraph, comes more than a year after a US data scientist reported concerns to Instagram that its platform was leaking the contact information of minors. David Stier went on to publish details of his investigation last year — saying Instagram had failed to make changes to prevent minors’ data being accessible.

He found that children who changed their Instagram account settings to a business account had their contact info (such as an email address and phone number) displayed unmasked via the platform — arguing that “millions” of children had had their contact information exposed as a result of how Instagram functions.

Facebook disputes Stier’s characterization of the issue — saying it’s always made it clear that contact info is displayed if people choose to switch to a business account on Instagram.

It also does now let people opt out of having their contact info displayed if they switch to a business account.

Nonetheless, its lead EU regulator has now said it’s identified “potential concerns” relating to how Instagram processes children’s data.

Per the Telegraph’s report the regulator opened the dual inquiries late last month in response to claims the platform had put children at risk of grooming or hacking by revealing their contact details. 

The Irish DPC did not say that, but it did confirm two new statutory inquiries into Facebook’s processing of children’s data on the wholly owned Instagram platform in a statement emailed to TechCrunch, in which it notes the photo-sharing platform “is used widely by children in Ireland and across Europe”.

“The DPC has been actively monitoring complaints received from individuals in this area and has identified potential concerns in relation to the processing of children’s personal data on Instagram which require further examination,” it writes.

The regulator’s statement specifies that the first inquiry will examine the legal basis Facebook claims for processing children’s data on the Instagram platform, and also whether or not there are adequate safeguards in place.

Europe’s General Data Protection Regulation (GDPR) includes specific provisions related to the processing of children’s information — with a hard cap set at age 13 for kids to be able to consent to their data being processed. The regulation also creates an expectation of baked in safeguards for kids’ data.

“The DPC will set out to establish whether Facebook has a legal basis for the ongoing processing of children’s personal data and if it employs adequate protections and or restrictions on the Instagram platform for such children,” it says of the first inquiry, adding: “This Inquiry will also consider whether Facebook meets its obligations as a data controller with regard to transparency requirements in its provision of Instagram to children.”

The DPC says the second inquiry will focus on the Instagram profile and account settings — looking at “the appropriateness of these settings for children”.

“Amongst other matters, this Inquiry will explore Facebook’s adherence with the requirements in the GDPR in respect to Data Protection by Design and Default and specifically in relation to Facebook’s responsibility to protect the data protection rights of children as vulnerable persons,” it adds.

In a statement responding to the regulator’s action, a Facebook company spokesperson told us:

We’ve always been clear that when people choose to set up a business account on Instagram, the contact information they shared would be publicly displayed. That’s very different to exposing people’s information. We’ve also made several updates to business accounts since the time of Mr. Stier’s mischaracterisation in 2019, and people can now opt out of including their contact information entirely. We’re in close contact with the IDPC and we’re cooperating with their inquiries.

Breaches of the GDPR can attract sanctions of as much as 4% of the global annual turnover of a data controller — which, in the case of Facebook, means any future fine for violating the regulation could run to multi-billions of euros.

That said, Ireland’s regulator now has around 25 open investigations related to multinational tech companies (aka cross-border GDPR cases) — a backlog that continues to attract criticism over the plodding progress of decisions. Which means the Instagram inquiries are joining the back of a very long queue.

Earlier this summer the DPC submitted its first draft decision on a cross-border GDPR case — related to a 2018 Twitter breach — sending it on to the other EU DPAs for review.

That step has led to a further delay, as the other EU regulators did not unanimously back the DPC’s decision — triggering a dispute mechanism set out in the GDPR.

In separate news, an investigation of Instagram influencers by the UK’s Competition and Markets Authority found the platform is failing to protect consumers from being misled. The BBC reports that the platform will roll out new tools over the next year including a prompt for influencers to confirm whether they have received incentives to promote a product or service before they are able to publish a post, and new algorithms built to spot potential advertising content.

#childrens-data, #dpc, #facebook, #gdpr, #instagram, #privacy, #social


EU’s Google-Fitbit antitrust decision deadline pushed into 2021

The deadline for Europe to make a call on the Google-Fitbit merger has been pushed out again — with EU regulators now having until January 8, 2021, to take a decision.

The latest change to the provisional deadline, spotted earlier by Reuters, could be the result of one of the parties asking for more time.

Last month the deadline for a decision was extended until December 23 — potentially pushing the decision out beyond a year after Google announced its intention to buy Fitbit, back in November 2019. So if the tech giant was hoping for a simple and swift regulatory rubber stamp, its hopes have been diminishing since August, when the Commission announced it was going to dig into the detail. Once bitten and all that.

The proposed Fitbit acquisition also comes as Alphabet, Google’s parent, is under intense antitrust scrutiny on multiple fronts on home turf.

Google featured prominently in a report by the House Judiciary Committee on big tech antitrust concerns earlier this month, with US lawmakers recommending a range of remedies — including breaking up platform giants.

European lawmakers are also in the process of drawing up new rules to regulate so-called ‘gatekeeper’ platforms — which would almost certainly apply to Google. A legislative proposal on that is expected before the end of this year, which means it may appear before EU regulators have taken a decision on the Google-Fitbit deal. (And one imagines Google isn’t exactly stoked about that possibility.)

Both competition and privacy concerns have been raised against allowing Google to get its hands on Fitbit users’ data.

The tech giant has responded by offering a number of pledges to try to convince regulators — saying it would not use Fitbit health and wellness data for ads and offering to have data separation requirements monitored. It has also said it would commit to maintain third parties’/rivals’ access to its Android ecosystem and Fitbit’s APIs.

However rival wearable makers have continued to criticize the proposed merger. And, earlier this week, consumer protection and human rights groups issued a joint letter — urging regulators to only approve the takeover if “merger remedies can effectively prevent [competition and privacy] harms in the short and long term”.

One thing is clear: With antitrust concerns now writ large against ‘big tech’ the era of ‘friction-free’ acquisitions looks to be behind Google et al.

#antitrust, #competition, #data, #eu, #europe, #fitbit, #gadgets, #google, #privacy


IAB Europe’s ad tracking consent framework found to fail GDPR standard

A flagship framework for gathering Internet users’ consent for targeting with behavioral ads — which is designed by ad industry body the IAB Europe — fails to meet the required legal standards of data protection, according to findings by its EU data supervisor.

The Belgian DPA’s investigation follows complaints against the use of personal data in the real-time bidding (RTB) component of programmatic advertising which contend that a system of high velocity personal data trading is inherently incompatible with data security requirements baked into EU law.

The IAB Europe’s Transparency and Consent Framework (TCF) can be seen popping up all over the regional web, asking users to accept (or reject) ad trackers — with the stated aim of helping publishers comply with the EU’s data protection rules.

It was the ad industry standards body’s response to a major update to the bloc’s data protection rules, after the General Data Protection Regulation (GDPR) came into application in May 2018 — tightening standards around consent to process personal data and introducing supersized penalties for non-compliance — thereby cranking up the legal risk for the ad tracking industry.

The IAB Europe introduced the TCF in April 2018, saying at the time that it would “help the digital advertising ecosystem comply with obligations under the GDPR and ePrivacy Directive”.

The framework has been widely adopted, including by adtech giant, Google — which integrated it this August.

Beyond Europe, the IAB has also recently been pushing for a version of the same tool to be used for ‘compliance’ with California’s Consumer Privacy Act.

However the findings by the investigatory division of the Belgian data protection agency cast doubt on all that adoption — suggesting the framework is not fit for purpose.

The inspection service of the Belgian DPA makes a number of findings in a report reviewed by TechCrunch — including that the TCF fails to comply with GDPR principles of transparency, fairness and accountability, and also the lawfulness of processing.

It also finds that the TCF does not provide adequate rules for the processing of special category data (e.g. health information, political affiliation, sexual orientation etc) — yet does process that data.

There are further highly embarrassing findings for the IAB Europe, which the inspectorate found not to have appointed a Data Protection Officer, nor to have a register of its own internal data processing activities.

Its own privacy policy was also found wanting.

We’ve reached out to the IAB Europe for comment on the inspectorate’s findings.

A series of complaints against RTB have been filed across Europe over the past two years, starting in the UK and Ireland.

Dr Johnny Ryan, who filed the original RTB complaints — and is now a senior fellow at the Irish Council for Civil Liberties — told TechCrunch: “The TCF was an attempt by the tracking industry to put a veneer or quasi-legality over the massive data breach at the heart of the behavioral advertising and tracking industry and the Belgian DPA is now peeling that veneer off and exposing the illegality.”

Ryan has previously described the RTB issues as “the greatest data breach ever recorded”.

Last month he published another hair-raising dossier of evidence on how extensively and troublingly RTB leaks personal data — with findings including that a data broker used RTB to profile people with the aim of influencing the 2019 Polish Parliamentary Election by targeting LGBTQ+ people. Another data broker was found to be profiling and targeting Internet users in Ireland under categories including “Substance abuse”, “Diabetes,” “Chronic Pain” and “Sleep Disorders”.

In a statement, Ravi Naik, the solicitor who worked on the original RTB complaints, had this to say on the Belgian inspectorate’s findings: “These findings are damning and overdue. As the standard setters, the IAB is responsible for breaches of the GDPR. Their supervisory authority has rightly found that the IAB ‘neglects’ the risks to data subjects. The IAB’s responsibility now is to stop these breaches.”

Following the filing of RTB complaints, the UK’s data watchdog, the ICO, issued a warning about behavioural advertising in June 2019 — urging the industry to take note of the need to comply with data protection standards.

However the regulator has failed to follow up with any enforcement action — unless you count multiple mildly worded blog posts. Most recently it paused its (still ongoing) investigation into the issue because of the pandemic.

In another development last year, Ireland’s DPC opened an investigation into Google’s online Ad Exchange — looking into the lawful basis for its processing of personal data. But that investigation is one of scores that remain open on its desk. And the Irish regulator continues to face criticism over the length of time it’s taking to issue decisions on major cross-border GDPR cases pertaining to big tech.

Jef Ausloos, a postdoc researcher in data privacy at the University of Amsterdam — and one of the complainants in the Belgian case — told TechCrunch the move by the DPA puts pressure on other EU regulators to act, calling out what he described as “their complete, deer-in-the-headlights inaction”.

“I think we’ll see more of this in the coming months/year, i.e. other DPAs sick and tired, taking matters into their own hands — instead of waiting on the Irish,” he added.

“We are happy to finally see a data protection authority having the resolve to take on the online advertisement industry at its roots. This may be the first important step in taking down surveillance capitalism,” Ausloos also said in a statement.

There are still several steps to go before the Belgian DPA takes (any) action on the substance of its inspectorate’s report — with a number of steps outstanding in the regulatory process. We’ve reached out to the Belgian DPA for comment.

But, per the complainants, the inspectorate’s findings have been forwarded to the Litigation Chamber, and action is expected in early 2021. Which suggests privacy watchers in the EU might finally get to uphold their rights against the ad tracking industry/data industrial complex in the near future.

For publishers the message is a need to change how they monetize their content: Rights-respecting alternatives to creepy ads are possible (e.g. contextual ad targeting which does not use personal data).

Some publishers have already found the switch to contextual ads to be a good news story for their revenues. Subscription business models are also available (even if not all VCs are fans).

#advertising-tech, #behavioral-ads, #data-protection, #europe, #gdpr, #iab-europe, #privacy, #programmatic-advertising, #rtb


Zoom to start first phase of E2E encryption rollout next week

Zoom will begin rolling out end-to-end encryption to users of its videoconferencing platform from next week, it said today.

The platform, whose fortunes have been supercharged by the pandemic-driven boom in remote working and socializing this year, has been working on rebooting its battered reputation in the areas of security and privacy since April — after it was called out on misleading marketing claims of having E2E encryption (when it did not). E2E is now finally on its way though.

“We’re excited to announce that starting next week, Zoom’s end-to-end encryption (E2EE) offering will be available as a technical preview, which means we’re proactively soliciting feedback from users for the first 30 days,” it writes in a blog post. “Zoom users — free and paid — around the world can host up to 200 participants in an E2EE meeting on Zoom, providing increased privacy and security for your Zoom sessions.”

Zoom acquired Keybase in May, saying then that it was aiming to develop “the most broadly used enterprise end-to-end encryption offering”.

However, initially, CEO Eric Yuan said this level of encryption would be reserved for fee-paying users only. But after facing a storm of criticism the company enacted a swift U-turn — saying in June that all users would be provided with the highest level of security, regardless of whether they are paying to use its service or not.

Zoom confirmed today that Free/Basic users who want to get access to E2EE will need to participate in a one-time verification process — in which it will ask them to provide additional pieces of information, such as verifying a phone number via text message — saying it’s implementing this to try to reduce “mass creation of abusive accounts”.

“We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our work with human rights and children’s safety organizations and our users’ ability to lock down a meeting, report abuse, and a myriad of other features made available as part of our security icon — we can continue to enhance the safety of our users,” it writes.

Next week’s roll out of a technical preview is phase 1 of a four-stage process to bring E2E encryption to the platform.

This means there are some limitations — including on the features that are available in E2EE Zoom meetings (you won’t have access to join before host, cloud recording, streaming, live transcription, Breakout Rooms, polling, 1:1 private chat, and meeting reactions); and on the clients that can be used to join meetings (for phase 1 all E2EE meeting participants must join from the Zoom desktop client, mobile app, or Zoom Rooms). 

The next phase of the E2EE rollout — which will include “better identity management and E2EE SSO integration”, per Zoom’s blog — is “tentatively” slated for 2021.

From next week, customers wanting to check out the technical preview must enable E2EE meetings at the account level and opt-in to E2EE on a per-meeting basis.

All meeting participants must have the E2EE setting enabled in order to join an E2EE meeting. Hosts can enable the E2EE setting at the account, group and user level, and it can be locked at the account or group level, Zoom notes in an FAQ.

The AES 256-bit GCM encryption that’s being used is the same as Zoom currently uses but here combined with public key cryptography — which means the keys are generated locally, by the meeting host, before being distributed to participants, rather than Zoom’s cloud performing the key generating role.

“Zoom’s servers become oblivious relays and never see the encryption keys required to decrypt the meeting contents,” it explains of the E2EE implementation.
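The general pattern — a meeting key generated locally by the host and wrapped for each participant so the relaying server never sees it — can be sketched with the Python cryptography package as below. This is an illustrative sketch of that pattern only, not Zoom’s actual protocol or key schedule.

```python
# Sketch of local key generation + per-participant key wrapping; not Zoom's real design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# The host generates the meeting key locally; the server never sees it.
meeting_key = AESGCM.generate_key(bit_length=256)

# Each participant has a key pair; the host wraps the meeting key for them.
participant_priv = X25519PrivateKey.generate()
participant_pub = participant_priv.public_key()

host_eph = X25519PrivateKey.generate()
shared = host_eph.exchange(participant_pub)
wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"meeting-key-wrap").derive(shared)

nonce = os.urandom(12)
wrapped = AESGCM(wrap_key).encrypt(nonce, meeting_key, None)
# The server only relays (host_eph.public_key(), nonce, wrapped) and cannot read it.

# Media is then encrypted with AES-256-GCM under the locally generated meeting key.
media_nonce = os.urandom(12)
ciphertext = AESGCM(meeting_key).encrypt(media_nonce, b"audio/video frame bytes", None)
```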

If you’re wondering how you can be sure you’ve joined an E2EE Zoom meeting, a dark padlock will be displayed atop the green shield icon in the upper left corner of the meeting screen. (Zoom’s standard GCM encryption shows a checkmark here.)

Meeting participants will also see the meeting leader’s security code — which they can use to verify the connection is secure. “The host can read this code out loud, and all participants can check that their clients display the same code,” Zoom notes.

#e2e-encryption, #enterprise, #privacy, #security, #videoconferencing, #zoom


If the ad industry is serious about transparency, let’s open-source our SDKs

Year after year, a lack of transparency in how ad traffic is sourced, sold and measured is cited by advertisers as a source of frustration and a barrier to entry in working with various providers. But despite progress on the protection and privacy of data through laws like GDPR and COPPA, the overall picture regarding ad-marketing transparency has changed very little.

In part, this is due to the staggering complexity of how programmatic and other advertising technologies work. With automated processes managing billions of impressions every day, there is no universal solution to making things simpler and clearer. So the struggle for the industry is not necessarily a lack of intent around transparency, but rather how to deliver it.

Frustratingly, evidence shows that the way data is collected and used by some industry players has played a large part in reducing people’s trust in online advertising. This is not a problem that was created overnight. There is a long history and growing sense of consumer frustration with the way their data is being used, analyzed and monetized and a similar frustration by advertisers with the transparency and legitimacy of ad clicks for which they are asked to pay.

There are continuing efforts by organizations like the IAB and TAG to create policies for better transparency such as ads.txt. But without hard and fast laws, the responsibility lies with individual companies.

One relatively simple yet largely spurned practice that would engender transparency and trust for the benefit of all parties (brands, consumers and ad/marketing providers) would be for the industry to come together and have all parties open their SDKs.

Why open-sourcing benefits advertisers, publishers and the ad industry

Open-source software is code that anyone is free to use, analyze, alter and improve.

Auditing the code and adjusting an SDK’s functionality based on individual needs is a common practice — and so too are audits by security companies or interested parties who are rightly on the lookout for app fraud. By showing exactly how the code within the SDK has been written, it is the best way to reassure developers and partners that there are no hidden functions or unwanted features.

Everyone using an open-source SDK can learn exactly how it works, and because it is under an open-source license, anyone can suggest modifications and improvements to the code.

Open source brings some risks, but much bigger rewards

The main risk from opening up an SDK code is that third parties will look for ways to exploit it and insert their own malicious code, or else look at potential vulnerabilities to access back-end services and data. However, providers should be on the lookout and be able to fix the potential vulnerabilities as they arise.

As for the rewards, open-sourcing engenders trust and transparency, which should certainly translate into customer loyalty and consumer confidence. After all, we are all operating in a market where advertisers and developers can choose who they want to work with — and on what terms.

Selfishly but practically speaking, opening SDKs can also help companies in our industry protect themselves from others’ baseless claims that are simply intended to promote their products. With open standards, there are no unsubstantiated, false accusations intended for publicity. The proof is out there for everyone to see.

How ad tech is embracing open source

In the ad tech space, companies such as MoPub, Appodeal and AppsFlyer are just a few that have already made some or all of their SDKs available through an open-source license.

All of these companies have decided to use an open-source approach because they recognize the importance of transparency and trust, especially when you are placing the safety and reputation of your brand in the hands of an algorithm. However, the majority of SDKs remain closed.

Relying on forward-thinking companies to set their own transparency levels will only take our industry so far. It’s time for stronger action around trust and data transparency. In the same way that GDPR and COPPA have required companies to address privacy and, ultimately, to have forced a change that was needed, open-sourcing our SDKs will take the ad-marketing space to new heights and drive new levels of trust and deployment with our clients, competitors, legislators and consumers.

The industry-wide challenge of transparency won’t be solved any time soon, but the positive news is that there is movement in the right direction, with steps that some companies are already taking and others can easily take. By implementing measures to ensure brand-safe placements and help limit ad fraud; improving relationships between brands, agencies and programmatic partners; and bringing clarity to consumer data use, confidence in the advertising industry will improve and opportunities will subsequently grow.

That’s why we are calling on all ad/marketing companies to take this step forward with us — for the benefit of our consumers, brands, providers and industry at large — to embrace open-source SDKs as the way to engender trust, transparency and industry transformation. In doing so, we will all be rewarded with consumers who are more trusting of brands and brand advertising, and subsequently, brands who trust us and seek opportunities to implement more sophisticated solutions and grow their business.

#advertising-tech, #column, #digital-marketing, #general-data-protection-regulation, #marketing, #online-advertising, #open-source, #open-source-components, #open-source-startups, #opinion, #privacy


Family tracking app Life360 launches ‘Bubbles,’ a location-sharing feature inspired by teens on TikTok

Helicopter parenting turned into surveillance with the debut of family tracking apps like Life360. While the app can alleviate parental fears when setting younger kids loose in the neighborhood, Life360’s teenage users have hated the app’s location tracking features so much that avoiding and dissing the app quickly became a TikTok meme. Life360 could have ignored the criticism — after all, teens aren’t the app’s paying subscribers; it’s the parents. But Life360 CEO Chris Hulls took a different approach. He created a TikTok account and started a dialogue with the app’s younger users. As a result of these conversations, the company has now launched a new privacy-respecting feature, “Bubbles.”

Bubbles work by allowing any Life360 Circle member to share a circle representing their generalized location instead of their exact whereabouts. To set a bubble, the user adjusts its size on the map, anywhere from 1 to 25 miles in diameter, for a period of 1 to 6 hours. While the temporary bubble is in place, Life360's other safety and messaging features remain enabled, but parents can't see precisely where their teen is located, only that they are somewhere inside the bubble.

Image Credits: Life360

For example, a teen could tell their parents they were hanging out with friends in a given part of town after school, then set a bubble accordingly. Without popping that bubble, the parents wouldn't know whether their teenager was at a friend's house, out driving around, at a park, shopping, and so on. The expectation is that parents and teens communicate with one another rather than rely on cyberstalking, and that parents accept teens deserve more freedom to make their own choices, even if they will sometimes break the rules and have to face the consequences.
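
Life360 hasn't published how Bubbles is implemented, but conceptually the feature swaps an exact coordinate for a coarse, time-limited region. The sketch below is only an illustration of that idea; the names Bubble and make_bubble are hypothetical, not Life360's API.

```python
import math
import random
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Bubble:
    center_lat: float
    center_lon: float
    radius_miles: float
    expires_at: datetime

def make_bubble(exact_lat: float, exact_lon: float,
                diameter_miles: float, hours: int) -> Bubble:
    """Return a coarse, temporary 'bubble' instead of an exact location.

    The center is offset by a random distance no larger than the radius,
    so the true position stays inside the circle but cannot be read off
    its center.
    """
    radius = diameter_miles / 2.0
    r = radius * math.sqrt(random.random())        # uniform over the disk
    theta = random.uniform(0.0, 2.0 * math.pi)
    miles_per_deg_lat = 69.0                        # rough conversion
    miles_per_deg_lon = 69.0 * math.cos(math.radians(exact_lat))
    return Bubble(
        center_lat=exact_lat + (r * math.cos(theta)) / miles_per_deg_lat,
        center_lon=exact_lon + (r * math.sin(theta)) / miles_per_deg_lon,
        radius_miles=radius,
        expires_at=datetime.utcnow() + timedelta(hours=hours),
    )

# Example: share a 10-mile-wide bubble for 3 hours instead of a street address.
bubble = make_bubble(37.7749, -122.4194, diameter_miles=10, hours=3)
```

In a scheme like this, "popping" the bubble would simply mean discarding the coarse circle and resuming precise location sharing before expires_at is reached.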

A location bubble isn't un-poppable, however. The bubble will burst if a car crash or another emergency is detected, the company says. A parent can also choose to override the setting and pop the bubble for any reason, such as not hearing from the teen for a long period of time or suspecting the teen may be unsafe. This could encourage a teen to communicate more directly with a parent to reassure them they are safe, rather than risk the parent turning tracking back on.

But parents are actively discouraged from popping bubbles out of fear. Before the bubble is burst, the app asks the user if they're sure they want to do so, also reminding them that the member will be notified the bubble has been burst. This gives parents a moment to pause and reconsider whether it's really enough of an emergency to break their teen's trust and privacy.

Image Credits: Life360

The feature isn't necessarily going to solve the problem for teens who want to sneak out or simply go untracked entirely, which is where many of the complaints have stemmed from in recent years. Instead, it's meant to be a compromise in the battle between adults' surveillance of kids' every move and teenagers' need for more personal freedom.

Hulls says the idea for the new feature was inspired by conversations he had with teens on TikTok about Life360’s issues.

“Teens are a core part of the family unit – and our user base – and we value their input,” said Hulls. “After months of communicating with both parents and teens, I am proud to launch a feature that was designed with the whole family in mind, continuing our mission of redefining how safety is delivered to families,” he added.

Before Hulls joined TikTok, the Life360 mobile app had been subject to a downrating campaign in which teen users rated the app with just one star in hopes of getting it kicked off the App Store. (Apps are not automatically removed for low ratings, but that hasn't stopped teens from trying this tactic on anything they don't like, from Google Classroom's app to the Trump 2020 app, at times.)

In his TikTok debut, Hulls appeared as Darth Vader then took off the mask to reveal, in his own words, “just your standard, awkward tech CEO.” In the months since, his account has posted and reacted to Life360 memes, answered questions, asked for — and even paid for — helpful user feedback. One of the ideas resulting from the collaboration was “ghost mode,” which is now being referred to at launch as “Bubbles” — a name generated by a TikTok contest to brand the feature.

In addition to sourcing ideas on TikTok, Hulls used the platform to rehabilitate the Life360 brand among teens, explaining how he created the app after Hurricane Katrina to help families reconnect after big emergencies, for example. (True). His videos also suggested that he was now on teens’ side and that building “ghost mode” was going to piss off parents or even lose him his job. (Highly debatable.)

In a related effort, the company posted a YouTube parody video to explain the app’s benefits to parents and teens. The video, suggested to teen users through a notification, hit over a million views in 24 hours.

Many teens, ultimately, came around. “i’m crying he seems so nice,” said one commenter. “ngl it’s the parents not the app,” admitted another.

In other words, the strategy worked. Hulls’ “life360ceo” TikTok account has since gained over 231,000 followers and its videos have been “liked” 6.5 million times. Teens have also turned their righteous anger back to where it may actually belong — at their cyberstalking parents, not the tech enabling the location-tracking.

Bubbles is now part of the most recent version of the Life360 app, a free download on iOS and Android. The company offers an optional upgrade to premium plans for families in need of extra features, like location history, crash detection and roadside assistance, among other things.

Family trackers are a large and growing business. As of June 2020, Life360 had 25 million monthly active users located in more than 195 countries. The company’s annualized monthly revenue was forecasted at $77.9 million, a 26% increase year-over-year.

To celebrate the launch of Bubbles, this past Saturday, Life360 launched a branded Hashtag Challenge on TikTok, #ghostmode, for a $10,000 prize. As of today, the hashtag already has 1.4 billion views.

#apps, #life360, #mobile, #privacy, #surveillance, #tiktok

If data is labor, can collective bargaining limit big tech?

There are plenty of reasons to doubt that the House Judiciary Committee’s antitrust report will mark a turning point in the digital economy. In the end, it lacked true bipartisan support. Yet we can still marvel at the extent of left-right agreement over its central finding: The big tech companies wield troublingly great power over American society.

The bigger worry is whether the solutions on the table cut to the heart of the problem. One wonders whether empowered antitrust agencies can solve the problem before them — and whether they can keep the public behind them. For the proposition that many Facebooks would be better than one simply doesn’t resonate.

There are good reasons why not. Despite all their harms, we know that whatever benefits these platforms provide are largely a result of their titanic scale. We are as uneasy with the platforms' exercise of their vast power over suppliers and users as we are with their forbearance; yet it is precisely because of their enormous scale that we use their services. So if regulators broke up the networks, consumers would simply flock toward whatever platforms had the most scale, pushing the industry toward reconsolidation.

Does this mean that the platforms do not have too much power, that they are not harming society? No. It simply means they are infrastructure. In other words, we don't need these technology platforms to be more fragmented, we need them to belong to us. We need democratic, rather than strictly market, processes to determine how they wield their power.

When you notice that an institution is infrastructure, the usual reaction is to suggest nationalization or regulation. But today, we have good reasons to suspect our political system is not up to this task. Even if an ideal government could competently tackle a problem as complex as managing the 21st century’s digital infrastructure, ours probably cannot.

This appears to leave us in a lose-lose situation and explains the current mood of resignation. But there is another option that we seem to have forgotten about. Labor organization has long afforded control to a broad array of otherwise-powerless stakeholders over the operation of powerful business enterprises. Why is this not on the table?

A growing army of academics, technologists and commentators is warming to the proposition that "data is labor." In short, this is the idea that the vast data streams we all produce through our contact with the digital world are a legitimate sort of work-product, over which we ought to have much more meaningful rights than the laws now afford. Collective bargaining plays a central role in this picture, because the reason the markets are now failing (to the benefit of the Silicon Valley giants) is that we are each trying to negotiate only for ourselves, when in fact the very nature of data is that it always touches and implicates the interests of many people.

This may seem like a complicated or intractable problem, but leading thinkers are already working on legal and technical solutions.

So in some sense, the scale of the tech giants may indeed not be such a bad thing; the problem, instead, is the power that scale gives them. But what if Facebook had to do business with large coalitions representing ordinary people's data interests, presumably paying large sums or admitting those representatives into its governance, in order to get the right to exploit its users' data? That would put power back where it belongs, without undermining the inherent benefits of large platforms. It just might be a future we can believe in.

So what is the way forward? The answer is to enable collective bargaining through data unions. Data unions would become the necessary counterpart to big tech's information-acquiring transactions. Requiring the big tech companies to deal with data unions authorized to negotiate on behalf of their memberships solves both of the problems that have allowed these giant companies to amass the power to corrupt society: individuals bargaining alone, and data that inherently implicates many people at once.

Labor unions did not gain true traction until the passage of the National Labor Relations Act of 1935. Perhaps, rather than burning our political capital on breaking up the tech giants through a slow and potentially Sisyphean process, we should focus on creating a 21st century version of this groundbreaking legislation — legislation to protect the data rights of all citizens and provide a responsible legal framework for data unions to represent public interests from the bottom up.

#collective-bargaining, #column, #digital-economy, #economy, #facebook, #labor, #opinion, #policy, #privacy, #social, #tc

France’s Health Data Hub to move to European cloud infrastructure to avoid EU-US data transfers

France's data regulator, the CNIL, has issued recommendations for French services that handle health data, as Mediapart first reported. Those services should avoid using American cloud hosting companies such as Microsoft Azure, Amazon Web Services and Google Cloud altogether.

Those recommendations follow a landmark ruling by Europe's top court in July. The ruling, dubbed Schrems II, struck down the EU-US Privacy Shield. Under the Privacy Shield, companies could outsource data processing from the EU to the US in bulk. Due to concerns over US surveillance laws, that mechanism is no longer allowed.

The CNIL is going one step further by saying that services and companies that handle health data should also avoid doing business with American companies — it’s not just about processing European data in Europe. Once again, this is all about avoiding falling under U.S. regulation and rulings.

The regulator sent those recommendations to one of France’s top courts (Conseil d’État). SantéNathon, a group of organizations and unions, originally notified the CNIL over concerns about France’s Health Data Hub.

France is currently building a platform to store health data at the national level. The idea is to build a hub that makes it easier to study rare diseases and use artificial intelligence to improve diagnoses. It is supposed to aggregate data from different sources and make it possible to share some data with public and private institutions for those specific cases.

The technical choices have been controversial as the French government originally chose to partner with Microsoft and its cloud platform Microsoft Azure.

Microsoft, like many other companies, relies on Standard Contractual Clauses for EU-US data transfers. But the Court of Justice of the EU has made it clear that EU regulators have to intervene if data is being transferred to a country that doesn't offer adequate protection when it comes to privacy and surveillance.

The CNIL believes that even if an American company processes data in Europe, it still falls under FISA Section 702 and other US surveillance laws, meaning data could still end up in the hands of American authorities. In other words, the regulator is being extra careful with health data for now, while the fallout from Schrems II is still unfolding.

“We’re working with health minister Olivier Véran on transferring the Health Data Hub to French or European platforms following the Privacy Shield bombshell,” France’s digital minister Cédric O told Public Sénat.

The French government is now looking at other solutions for the Health Data Hub. In the near future, if France’s top court confirms the CNIL’s recommendations, it could also have some effects for French companies that handle health data, such as Doctolib and Alan.

#europe, #health-data-hub, #privacy, #privacy-shield, #schrems-ii

A prison video visitation service exposed private calls between inmates and their attorneys

Fearing the spread of coronavirus, jails and prisons remain on lockdown. Visitors are unable to see their loved ones serving time, forcing friends and families to use prohibitively expensive video visitation services that often don’t work.

But now the security and privacy of these systems are under scrutiny after a St. Louis-based prison video visitation provider suffered a security lapse that exposed thousands of phone calls between inmates and their families, as well as calls with their attorneys that were supposed to be protected by attorney-client privilege.

HomeWAV, which serves a dozen prisons across the U.S., left a dashboard for one of its databases exposed to the internet without a password, allowing anyone to read, browse and search the call logs and transcriptions of calls between inmates and their friends and family members. The transcriptions also showed the caller's phone number, which inmate they spoke with, and the duration of the call.

Security researcher Bob Diachenko found the dashboard, which had been public since at least April, he said. TechCrunch reported the issue to HomeWAV, which shut down the system hours later.

In an email, HomeWAV chief executive John Best confirmed the security lapse.

“One of our third-party vendors has confirmed that they accidentally took down the password, which allowed access to the server,” he told TechCrunch, without naming the third-party. Best said the company will inform inmates, families and attorneys of the incident.

Somil Trivedi, a senior staff attorney at the ACLU’s Criminal Law Reform Project, told TechCrunch: “What we see again and again is that the rights of incarcerated people are the first to be trampled when the system fails — as it always, invariably does.”

“Our justice system is only as good as the protections for the most vulnerable. As always, people of color, those who can’t afford lawyers, and those with disabilities will pay the highest price for this mistake. Technology cannot fix the fundamental failings of the criminal legal system — and it will exacerbate them if we’re not deliberate and cautious,” said Trivedi.

Inmates have almost no expectation of privacy, and nearly all prisons in the U.S. record the phone and video calls of their inmates, even if that isn't disclosed at the beginning of each call. Prosecutors and investigators are known to listen back to recordings in case an inmate incriminates themselves on a call.

HomeWAV, a prison video visitation tech company, exposed thousands of phone calls between inmates and their families, but also calls with their attorneys that were supposed to be protected by attorney-client privilege. (Image: HomeWAV/YouTube)

The calls between inmates and their attorneys, however, are not supposed to be monitored because of attorney-client privilege, a rule that protects the communications between an attorney and their client from being used in court.

Despite this, there are known cases of U.S. prosecutors using recorded calls between an attorney and their incarcerated clients. Last year, prosecutors in Louisville, Ky., allegedly listened to dozens of calls between a murder suspect and his attorneys. And earlier this year, defense attorneys in Maine said they were routinely recorded by several county jails, and their calls protected under attorney-client privilege were turned over to prosecutors in at least four cases.

HomeWAV’s website says: “Unless a visitor has been previously registered as a clergy member, or a legal representative with whom the inmate is entitled to privileged communication, the visitor is advised that visits may be recorded, and can be monitored.”

But when asked, HomeWAV’s Best would not say why the company had recorded and transcribed conversations protected by attorney-client privilege.

Several of the transcriptions reviewed by TechCrunch showed attorneys clearly declaring that their calls were covered under attorney-client privilege, effectively telling anyone listening in that the call was off-limits.

TechCrunch spoke to two attorneys whose communications with their clients in prison over the past six months were recorded and transcribed by HomeWAV, but they asked that we not name them or their clients, as doing so might harm their clients' legal defense. Both expressed alarm that their calls had been recorded. One said they had verbally asserted attorney-client privilege on the call; the other also believed their call was protected by attorney-client privilege but declined to comment further until they had spoken to their client.

Another defense attorney, Daniel Repka, confirmed to TechCrunch that one of his calls with a client in prison in September was recorded, transcribed and subsequently exposed, but said that the call was not sensitive.

“We did not relay any information that would be considered protected by attorney-client privilege,” said Repka. “Anytime I have a client who calls me from a jail, I’m very conscious and aware of the possibility not only of security breaches, but also the potential ability to access these phone calls by the county attorney’s office,” he said.

Repka described attorney-client privilege as “sacred” for attorneys and their clients. “It’s really the only way that we’re able to ensure that attorneys are able to represent their clients in the most effective and zealous way possible,” he said.

“The best practice for attorneys is always, always, always to go visit your client at the jail in person where you’re in a room, and you have far more privacy than over a telephone line that you know has been designated as a recording device,” he said.

But the challenges brought by the pandemic have made in-person visits difficult, or impossible in some states. The Marshall Project, a non-partisan organization focusing on criminal justice in the U.S., said several states have suspended in-person visitation because of the threat posed by coronavirus, including legal visits.

Even prior to the pandemic, some prisons ended in-person visitation in favor of video calls.

Video visitation technology is now a billion-dollar industry, with companies like Securus making millions each year by charging callers often exorbitant fees to call their incarcerated loved ones.

HomeWAV isn’t the only video visitation service to have faced security issues.

In 2015, an apparent breach at Securus resulted in the leak of some 70 million inmate phone call records, which an anonymous hacker shared with The Intercept. Many of the recordings in the cache also contained calls designated as protected by attorney-client privilege, the publication reported.

In August, Diachenko reported a similar security lapse at TelMate, another prison visitation provider, which saw millions of inmate messages exposed because of a passwordless database.


You can send tips securely over Signal and WhatsApp to +1 646-755-8849 or you can send an encrypted email to: zack.whittaker@protonmail.com

#articles, #government, #justice, #prisons, #privacy, #security, #technology, #united-states

Dr Lal PathLabs, one of India’s largest blood test labs, exposed patient data

Dr Lal PathLabs, one of the largest lab testing companies in India, left a huge cache of patient data on a public server for months, TechCrunch has learned.

The lab testing giant, headquartered in New Delhi, serves some 70,000 patients a day, and quickly became a major player in testing patients for COVID-19 after winning approval from the Indian government.

But the company was storing hundreds of large spreadsheets packed with sensitive patient data in a storage bucket, hosted on Amazon Web Services (AWS), without a password, allowing anyone to access the data inside.
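
Exposures like this are almost always a cloud configuration problem rather than a software flaw. As a rough, generic sketch, and not based on any detail of Dr Lal PathLabs' environment, an AWS account owner could flag a bucket whose policy or ACL is open to everyone with a boto3 check along these lines:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_looks_public(bucket_name: str) -> bool:
    """Return True if the bucket's policy or ACL grants access to everyone."""
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket_name)
        if status["PolicyStatus"]["IsPublic"]:
            return True
    except ClientError:
        pass  # no bucket policy attached; fall through to the ACL check
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        # The "AllUsers" group URI means anyone on the internet
        if grantee.get("URI", "").endswith("/global/AllUsers"):
            return True
    return False
```

Running a check like this as part of routine operations, or simply enabling S3 Block Public Access, is the kind of safeguard that can catch this sort of exposure early.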

Australia-based security expert Sami Toivonen found the exposed data and reported it to Dr Lal PathLabs in September. The company quickly shut down access to the bucket, but never replied to him, Toivonen told TechCrunch.

It’s not known how long the bucket was exposed.

Toivonen said the exposed data amounted to millions of individual patient bookings.

A redacted section of the spreadsheets containing patient data, including name, address, phone number, and gender, as well as the test the patient is requesting. (Screenshot: TechCrunch)

The spreadsheets appear to contain daily records of patient lab tests. Each spreadsheet contained a patient’s name, address, gender, date of birth, and cell number, as well as details of the test that the patient is taking, which could indicate or infer a medical diagnosis or a health condition.

Some booking records contained additional remarks about the patient, such as if they had tested positive for COVID-19.

Toivonen provided TechCrunch with a sample of the files from the exposed server for verification. We reached out to several patients to confirm their details found in the spreadsheet.

“Once I discovered this I was blown away that another publicly-listed organization had failed to secure their data, but I do believe that security is a team sport and everyone’s responsibility,” Toivonen told TechCrunch. “I’m glad that they secured it within a few hours after I contacted them because this kind of exposure with millions of patient records could be misused in so many ways by the malicious actors.”

“I was also a little surprised that they didn’t respond to my responsible disclosure,” he said.

A spokesperson for Dr Lal PathLabs said it was “investigating” the security lapse but did not answer our questions, including if the company plans to inform its patients of the exposure.

#covid-19, #health, #india, #new-delhi, #privacy, #security, #spokesperson, #spreadsheet, #web-services

Now you can enforce your privacy rights with a single browser tick

Image credit: Global Privacy Control

Anyone who remembers Do Not Track—the initiative that was supposed to allow browser users to reclaim their privacy on the Web—knows it was a failure. Not only did websites ignore it, but using it arguably made people less private, because it made them stick out. Now, privacy advocates are back with a new specification, and this time they've brought the lawyers.

Under the hood, the specification, known as Global Privacy Control, works pretty much the same way Do Not Track did. A small HTTP header informs sites that a visitor doesn't want their data sold. The big difference this time is the enactment of the California Consumer Privacy Act and, possibly, the General Data Protection Regulation in Europe, both of which give consumers broad rights over how their private information can be used.

At the moment, California residents who don't want websites to sell their data must register their choice with each site, often each time they visit it. That's annoying and time-consuming. But the California law specifically contemplates "user-enabled global privacy controls, such as a browser plug-in or privacy setting," that signal the choice. That's what the Global Privacy Control—or GPC—does.
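
The signal itself is tiny: a compliant browser or extension adds a Sec-GPC: 1 request header (and exposes navigator.globalPrivacyControl to page scripts). Below is a minimal, illustrative sketch of how a site could honor that header server-side; the Flask framework and the /ads/personalize route are assumptions made for the example, not part of the GPC specification.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ads/personalize")
def personalize():
    # GPC is sent by the browser as the request header "Sec-GPC: 1"
    if request.headers.get("Sec-GPC") == "1":
        # Treat the signal as an opt-out of the sale/sharing of personal data
        return jsonify({"personalized": False, "reason": "GPC opt-out honored"})
    # Otherwise fall back to whatever consent state the site already holds
    return jsonify({"personalized": True})
```

Whether a header like this carries legal force under the CCPA is exactly the question the specification's backers have brought the lawyers in to settle.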

#biz-it, #browsers, #do-not-track, #global-privacy-control, #policy, #privacy, #tech
