Google misled consumers over location data settings, Australia court finds

Google’s historical collection of location data has landed it in hot water in Australia, where a case brought by the country’s Competition and Consumer Commission (ACCC) has led to a federal court ruling that the tech giant misled consumers by operating a confusing dual layer of location settings, in what the regulator describes as a “world-first enforcement action”.

The case relates to personal location data collected by Google through Android mobile devices between January 2017 and December 2018.

Per the ACCC, the court ruled that “when consumers created a new Google Account during the initial set-up process of their Android device, Google misrepresented that the ‘Location History’ setting was the only Google Account setting that affected whether Google collected, kept or used personally identifiable data about their location”.

“In fact, another Google Account setting titled ‘Web & App Activity’ also enabled Google to collect, store and use personally identifiable location data when it was turned on, and that setting was turned on by default,” it wrote.

The Court also ruled that Google misled consumers when they later accessed the ‘Location History’ setting on their Android device during the same time period to turn that setting off because it did not inform them that by leaving the ‘Web & App Activity’ setting switched on, Google would continue to collect, store and use their personally identifiable location data.

“Similarly, between 9 March 2017 and 29 November 2018, when consumers later accessed the ‘Web & App Activity’ setting on their Android device, they were misled because Google did not inform them that the setting was relevant to the collection of personal location data,” the ACCC added.
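To make the dual-layer mechanics concrete: per the court’s findings, either setting on its own was enough for collection to continue. Here is a minimal sketch of that logic — our simplification for illustration, not Google’s actual code:

```python
# Illustrative simplification of the court's finding: two independent
# account settings each permitted location collection, so switching off
# 'Location History' alone did not stop it. This is NOT Google's code.
def google_collects_location(location_history_on: bool,
                             web_and_app_activity_on: bool) -> bool:
    return location_history_on or web_and_app_activity_on

# A consumer turns 'Location History' off but leaves the default-on
# 'Web & App Activity' setting untouched: collection continues.
print(google_collects_location(False, True))  # True
```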

Similar complaints about Google’s location data processing being deceptive — and allegations that it uses manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes — have been raised by consumer agencies in Europe for years. And in February 2020 the company’s lead data regulator in the region finally opened an investigation. However that probe remains ongoing.

The ACCC said today that it will be seeking “declarations, pecuniary penalties, publications orders, and compliance orders” following the federal court ruling, though it added that the specifics of its enforcement action will be determined “at a later date”. So it’s not clear exactly when Google will be hit with an order — nor how large a fine it might face.

The tech giant may also seek to appeal the court ruling.

Google said today it’s reviewing its legal options and considering a “possible appeal” — highlighting the fact the Court did not agree wholesale with the ACCC’s case because it dismissed some of the allegations (related to certain statements Google made about the methods by which consumers could prevent it from collecting and using their location data, and the purposes for which personal location data was being used by Google).

Here’s Google’s statement in full:

“The court rejected many of the ACCC’s broad claims. We disagree with the remaining findings and are currently reviewing our options, including a possible appeal. We provide robust controls for location data and are always looking to do more — for example we recently introduced auto delete options for Location History, making it even easier to control your data.”

Mountain View denies doing anything wrong in how it configures location settings — while simultaneously claiming it’s always looking to improve the controls it offers users — but Google’s settings and defaults have nonetheless got it into hot water with regulators before.

Back in 2019 France’s data watchdog, the CNIL, fined it $57M over a number of transparency and consent failures under the EU’s General Data Protection Regulation. That remains the largest GDPR penalty issued to a tech giant since the regulation came into force a little under three years ago — although France has more recently sanctioned Google $120M under different EU laws for dropping tracking cookies without consent.

Australia, meanwhile, has forged ahead with passing legislation this year that directly targets the market power of Google (and Facebook) — passing a mandatory news media bargaining code in February which aims to address the power imbalance between platform giants and publishers around the reuse of journalism content.

#accc, #android, #australia, #consumer-protection, #consumer-rights, #general-data-protection-regulation, #google, #lawsuit, #location-data, #privacy

Facebook faces ‘mass action’ lawsuit in Europe over 2019 breach

Facebook is to be sued in Europe over a major leak of user data that dates back to 2019 but only came to light recently, after information on 533M+ accounts was found posted for free download on a hacker forum.

Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).

Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.

The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and to sign up to join the case if so.
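For those who prefer to script the check, haveibeenpwned also exposes a paid v3 API. A minimal sketch, assuming you have obtained an API key (the key and email address below are placeholders, not real values):

```python
# Minimal sketch: query haveibeenpwned's v3 API for an account.
# Assumes a valid API key; the one below is a placeholder.
import requests
from requests.utils import quote

HIBP_API_KEY = "your-api-key-here"  # placeholder

def breaches_for(account: str) -> list:
    """Return known breaches for an email address (or, for the Facebook
    leak specifically, a phone number in international format)."""
    resp = requests.get(
        "https://haveibeenpwned.com/api/v3/breachedaccount/" + quote(account),
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "breach-check-demo"},
        timeout=10,
    )
    if resp.status_code == 404:  # 404 means: not found in any known breach
        return []
    resp.raise_for_status()
    return resp.json()  # e.g. [{"Name": "Facebook"}, ...]

if __name__ == "__main__":
    for breach in breaches_for("someone@example.com"):
        print(breach["Name"])
```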

Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.

Facebook has been contacted for comment on the litigation.

The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.

A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true for Facebook.

With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.

(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).

Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.

Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.

That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. So Facebook’s failure to disclose the vulnerability (one it discovered and claims to have fixed by September 2019), which has now led to the leak of 533M accounts, suggests it should face a higher sanction from the DPC than Twitter received.

However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.

Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.

“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.

It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.

It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders to step in in Europe and take a punt on suing for data-related compensation damages — with a number of other mass actions announced last year.

DRI’s focus is evidently on seeking to ensure that digital rights are upheld. But it told RTE it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.

Facebook, meanwhile, has sought to play down the breach it failed to disclose in 2019 — claiming it’s ‘old data’ — a deflection that ignores the fact that people’s dates of birth don’t change (nor do most people routinely change their mobile number or email address).

Plenty of the ‘old’ data exposed in this latest massive Facebook leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.

#data-protection, #data-protection-commission, #data-security, #digital-rights, #digital-rights-ireland, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #ireland, #lawsuit, #litigation, #personal-data, #privacy, #social, #social-media, #tc, #twitter

How startups can ensure CCPA and GDPR compliance in 2021

Data is the most valuable asset for any business in 2021. If your business is online and collects customer personal information, it is dealing in data — which means data privacy compliance regulations apply to everyone, no matter the company’s size.

Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.

Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.

For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of global annual revenue, whichever is greater. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed in a data breach.

If the data of 1,000 customers is compromised in a cybersecurity incident, that would add up to $7.5 million. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business costs.
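Worked through in code, the arithmetic behind those figures looks like this — a simplified sketch using the statutory maxima quoted above (real penalties depend on many factors regulators weigh case by case):

```python
# Back-of-the-envelope exposure calculator using the figures quoted above.
# Simplified: actual penalties depend on many case-by-case factors.

def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    # GDPR: up to EUR 20M or 4% of global annual revenue, whichever is greater.
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

def ccpa_breach_exposure(people_affected: int,
                         per_person_usd: float = 7_500) -> float:
    # CCPA: $2,500-$7,500 per person; worst case assumed here.
    return people_affected * per_person_usd

print(ccpa_breach_exposure(1_000))   # 7500000.0 -> the $7.5M cited above
print(gdpr_max_fine(400_000_000))    # 20000000 -> the EUR 20M floor applies
```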

It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.

Challenges of data compliance for startups

Getting data compliance wrong, or ignoring it altogether, can be costly for companies of all sizes. For example, Vodafone Spain was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show that schools, associations, municipalities, homeowners associations and more are also receiving fines.

GDPR regulators have issued $332.4 million in fines since the law came into application in May 2018 and are being more aggressive with enforcement. And while California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) has only recently created a state agency to more effectively enforce compliance for any company storing the information of residents of California, a major hub of U.S. startups.

That is why data privacy compliance is now key to a successful business. Unfortunately, many startups are at a disadvantage, for many reasons including:

Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means.”

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protection for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However the judgement, handed down on February 24, was issued by default — and Uber says it was not aware of the case until last week, claiming that is why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed.”

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE rejected the claim that correct procedure had not been followed, and welcomed the opportunity for Uber to respond in court to questions over how its driver ID systems operate, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human “anti-fraud” teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default judgement “robo-firing” challenge specifically targets Uber’s Hybrid Real-Time ID System — a system that incorporates facial recognition checks and which labor activists recently found to be misidentifying drivers in a number of instances.

It also pointed to a separate development this week in the U.K., where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of a driver that had been revoked after Uber routinely notified the regulator of a dismissal (also triggered by Uber’s real-time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the U.K. capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms.”

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent U.K. Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will mount the same defence it used against the earlier “robo-firing” charge — that its anti-fraud systems do not equate to automated decision-making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 

#app-drivers-couriers-union, #artificial-intelligence, #automated-decisions, #europe, #european-union, #facial-recognition, #gdpr, #general-data-protection-regulation, #gig-worker, #james-farrar, #labor, #lawsuit, #london, #netherlands, #transport-for-london, #uber, #united-kingdom

EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). It is not abundantly clear from this draft, though, exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating [in] the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
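Read together, the recitals sketch a two-step test: first a gate on whether the intended purpose may cause a listed harm, then a weighing of severity and probability. A purely illustrative encoding of that test follows; the numeric scales and thresholds are invented for the example, as the draft defines none:

```python
# Illustrative only: one possible encoding of the draft's two-step
# 'high risk' test. Scales and thresholds are invented; the draft
# itself sets no numeric values.
from dataclasses import dataclass

@dataclass
class IntendedPurpose:
    may_cause_listed_harm: bool  # step 1: e.g. injury, adverse impact on rights
    severity: int                # step 2a: invented 0-10 scale
    probability: float           # step 2b: invented 0.0-1.0 likelihood

def is_high_risk(p: IntendedPurpose,
                 severity_floor: int = 5,
                 probability_floor: float = 0.1) -> bool:
    if not p.may_cause_listed_harm:  # step 1 gate
        return False
    # step 2: weigh severity and probability of the possible harm
    return p.severity >= severity_floor and p.probability >= probability_floor

# e.g. a recruitment-screening system, one of the draft's named examples
print(is_high_risk(IntendedPurpose(True, severity=7, probability=0.3)))  # True
```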

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.

Other requirements fall in the areas of security and consistency of accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like a recipe for (yet) more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a draft that leaked early in 2020, before the White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessment is envisaged as an ongoing obligation for high risk AIs, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is getting to display a ‘CE’ mark, helping them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market, and to conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
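In practice the notification duty for bots could be as simple as prefixing output with a disclosure. A trivial sketch follows; the notice wording and the context flag are our own invention, not drawn from the draft:

```python
# Minimal sketch of the bot-disclosure obligation described above:
# prepend a notice unless the AI nature is already obvious in context.
# The wording and the 'obvious_from_context' flag are illustrative,
# not taken from the draft regulation.

AI_NOTICE = "[Automated system] You are interacting with an AI assistant."

def respond(user_message: str, generate_reply,
            obvious_from_context: bool = False) -> str:
    reply = generate_reply(user_message)
    if obvious_from_context:
        return reply
    return f"{AI_NOTICE}\n{reply}"

print(respond("What are your opening hours?", lambda m: "We open at 9am."))
```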

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen, and violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 

#ai, #artificial-intelligence, #behavioral-advertising, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #facial-recognition, #general-data-protection-regulation, #policy, #regulation, #tc

Facebook’s tardy disclosure of breach timing raises GDPR compliance questions

Whether Facebook will face any regulatory sanction over the latest massive historical platform privacy fail to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.

While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.

That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.

Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.

The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.

Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear. The regulator wrote that it “seems to comprise the original 2018 (pre-GDPR) dataset” — referring to an earlier breach incident Facebook disclosed in 2018, related to a vulnerability in its phone lookup functionality that it said occurred between June 2017 and April 2018 — but added that the newly published dataset also looked to have been “combined with additional records, which may be from a later period”.

Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.

Another new detail that emerged in Facebook’s blog post yesterday was the fact that users’ data was scraped not via the aforementioned phone lookup vulnerability but via another method altogether: a contact importer tool vulnerability.

This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.

In this way a spammer (for example) could upload a database of potential phone numbers and link them not only to names but to other data like birth date, email address and location — all the better to phish you with.
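Facebook hasn’t detailed its fix, but the generic defences against this kind of contact-matching enumeration are per-caller rate limiting and response minimisation. A toy sketch of those two measures (illustrative only; this is not Facebook’s actual implementation):

```python
# Toy sketch of the generic defence against contact-import enumeration:
# throttle bulk lookups per caller and return matches without any extra
# profile fields. Illustrative only -- not Facebook's actual fix.
import time
from collections import defaultdict

RATE_LIMIT = 100        # invented ceiling: lookups per caller per window
WINDOW_SECONDS = 3600

_lookups: dict[str, list[float]] = defaultdict(list)

def match_contacts(caller_id: str, phone_numbers: list[str],
                   registered: set[str]) -> list[str]:
    now = time.time()
    recent = [t for t in _lookups[caller_id] if now - t < WINDOW_SECONDS]
    if len(recent) + len(phone_numbers) > RATE_LIMIT:
        raise PermissionError("rate limit exceeded")  # block bulk scraping
    _lookups[caller_id] = recent + [now] * len(phone_numbers)
    # Return only the fact of a match -- no names, birth dates, locations.
    return [n for n in phone_numbers if n in registered]

print(match_contacts("app-123", ["+15551234567"],
                     registered={"+15551234567"}))  # ['+15551234567']
```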

In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely in the period of GDPR being active.

As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
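Concretely, the 72-hour clock works like this (a trivial illustration; the awareness date below is hypothetical):

```python
# The GDPR's 72-hour breach-notification window, computed from the moment
# a controller becomes aware of a breach. The date below is hypothetical.
from datetime import datetime, timedelta, timezone

def notification_deadline(became_aware: datetime) -> datetime:
    return became_aware + timedelta(hours=72)

aware = datetime(2019, 9, 1, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2019-09-04 09:00:00+00:00
```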

Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.

Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.

Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.

And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even though few people regularly change their mobile numbers, email addresses, full names, biographical information and so on, and no one (legally) gets a new birth date…)

Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.

The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.

This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject, or else be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators exist to defend and uphold.

Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and to apply privacy by design and default.

Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.

In short, it’s the Cambridge Analytica scandal all over again.

Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)

We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.

Then it told us it would not be commenting on its communications with regulators.

Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks of their data being breached, such as fraud and ID theft.

Yesterday Facebook also said it does not have plans to notify users either.

Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.

 

#data-controller, #data-protection, #dpc, #europe, #european-union, #facebook, #federal-trade-commission, #gdpr, #general-data-protection-regulation, #personal-data, #privacy, #security-breaches, #united-states

Answers being sought from Facebook over latest data breach

Facebook’s lead data protection regulator in the European Union is seeking answers from the tech giant over a major data breach reported on over the weekend.

The breach was reported by Business Insider on Saturday, which said personal data (including email addresses and mobile phone numbers) of more than 500M Facebook accounts had been posted to a low-level hacking forum — making the personal information of hundreds of millions of Facebook users freely available.

“The exposed data includes the personal information of over 533M Facebook users from 106 countries, including over 32M records on users in the US, 11M on users in the UK, and 6M on users in India,” Business Insider said, noting that the dump includes phone numbers, Facebook IDs, full names, locations, birthdates, bios, and some email addresses.

Facebook responded to the report of the data dump by saying it related to a vulnerability in its platform it had “found and fixed” in August 2019 — dubbing the info “old data” which it also claimed had been reported on in 2019. However, as security experts were quick to point out, most people don’t change their mobile phone number often — so Facebook’s knee-jerk reaction to downplay the breach looks like an ill-thought-through attempt to deflect blame.

It’s also not clear whether all the data is ‘old’, as Facebook’s initial response suggests.

There are plenty of reasons for Facebook to try to downplay yet another data scandal. Not least because, under European Union data protection rules, there are stiff penalties for companies that fail to promptly report significant breaches to relevant authorities. And indeed for breaches themselves — as the bloc’s General Data Protection Regulation (GDPR) bakes in an expectation of security by design and default.

By pushing the claim that the leaked data is “old” Facebook may be hoping to peddle the idea that it predates the GDPR coming into application (in May 2018).

However the Irish Data Protection Commission (DPC), Facebook’s lead data supervisor in the EU, told TechCrunch that it’s not abundantly clear whether that’s the case at this point.

“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset and combined with additional records, which may be from a later period,” the DPC’s deputy commissioner, Graham Doyle said in a statement.

“A significant number of the users are EU users. Much of the data appears to [have] been scraped some time ago from Facebook public profiles,” he also said.

“Previous datasets were published in 2019 and 2018 relating to a large-scale scraping of the Facebook website which at the time Facebook advised occurred between June 2017 and April 2018 when Facebook closed off a vulnerability in its phone lookup functionality. Because the scraping took place prior to GDPR, Facebook chose not to notify this as a personal data breach under GDPR.”

Doyle said the regulator sought to establish “the full facts” about the breach from Facebook over the weekend and is “continuing to do so” — making it clear that there’s an ongoing lack of clarity on the issue, despite the breach itself being claimed as “old” by Facebook.

The DPC also made it clear that it did not receive any proactive communication from Facebook on the issue — despite the GDPR putting the onus on companies to proactively inform regulators about significant data protection issues. Rather the regulator had to approach Facebook — using a number of channels to try to obtain answers from the tech giant.

Through this approach the DPC said it learnt Facebook believes the information was scraped prior to the changes it made to its platform in 2018 and 2019 in light of vulnerabilities identified in the wake of the Cambridge Analytica data misuse scandal.

A huge database of Facebook phone numbers was found unprotected online back in September 2019.

Facebook had also earlier admitted to a vulnerability with a search tool it offered — revealing in April 2018 that somewhere between 1BN and 2BN users had had their public Facebook information scraped via a feature which allowed people to look up users by inputting a phone number or email — which is one potential source for the cache of personal data.

Last year Facebook also filed a lawsuit against two companies it accused of engaging in an international data scraping operation.

But the fallout from its poor security design choices continues to dog Facebook years after its ‘fix’.

More importantly, the fallout from the massive personal data spill continues to affect Facebook users whose information is now being openly offered for download on the Internet — opening them up to the risk of spam and phishing attacks and other forms of social engineering (such as for attempted identity theft).

There are still more questions than answers about how this “old” cache of Facebook data came to be published online for free on a hacker forum.

The DPC said it was told by Facebook that “the data at issue appears to have been collated by third parties and potentially stems from multiple sources”.

The company also claimed the matter “requires extensive investigation to establish its provenance with a level of confidence sufficient to provide your Office and our users with additional information” — which is a long way of suggesting that Facebook has no idea either.

“Facebook assures the DPC it is giving highest priority to providing firm answers to the DPC,” Doyle also said. “A percentage of the records released on the hacker website contain phone numbers and email address of users.

“Risks arise for users who may be spammed for marketing purposes but equally users need to be vigilant in relation to any services they use that require authentication using a person’s phone number or email address in case third parties are attempting to gain access.”

“The DPC will communicate further facts as it receives information from Facebook,” he added.

At the time of writing Facebook had not responded to a request for comment about the breach.

Facebook users who are concerned whether their information is in the dump can run a search for their phone number or email address via the data breach advice site, haveibeenpwned.

According to haveibeenpwned’s Troy Hunt, this latest Facebook data dump contains far more mobile phone numbers than email addresses.

He writes that he was sent the data a few weeks ago — initially getting 370M records and later “the larger corpus which is now in very broad circulation”.

“A lot of it is the same, but a lot of it is also different,” Hunt also notes, adding: “There is not one clear source of this data.”

 

#computer-security, #data-breach, #data-security, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #social-media, #tc, #troy-hunt, #united-kingdom

Inadequate federal privacy regulations leave US startups lagging behind Europe

“A new law to follow” seems unlikely to have featured on many business wishlists this holiday season, particularly if that law concerned data privacy. Digital privacy management is an area that takes considerable resources to whip into shape, and most SMBs just aren’t equipped for it.

But for 2021, I believe startups in the United States should be demanding that legislators deliver a federal privacy law. Yes, they should demand to be regulated.

For every day that goes by without agreed-upon federal standards for data, these companies lose competitive edge to the rest of the world. Soon there may be no coming back.

Businesses should not view privacy and trust infrastructure requirements as burdensome. They should view them as keys that can unlock the full power of the data they possess. They should stop thinking about privacy as compliance and begin thinking of it as a harmonization of the customer relationship. The rewards flowing to each party from such harmonization are bountiful. The U.S. federal government is in a unique position to help realize those rewards.

To understand what I mean, cast your eyes to Europe, where it’s become clear that the GDPR was nowhere near the final destination of EU data policy. Indeed it was just the launchpad. Europe’s data regime can frustrate (endless cookie banners anyone?), but it has set an agreed-upon standard of protection for citizens and elevated their trust in internet infrastructure.

For example, a Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. With a baseline standard established — seatbelts in every car — Europe is now squarely focused on raising the speed limit.

EU lawmakers recently unveiled plans for “A Europe fit for the Digital Age.” In the words of Internal Market Commissioner Thierry Breton, it’s a plan to make Europe “the most data-empowered continent in the world.”

Here are some pillars of the plan. While reading, imagine that you are a U.S.-based health tech startup. Imagine the disadvantage you would face against a similar, European-based company, if these initiatives came to fruition:

  • A regulatory framework covering data governance, access and reuse between businesses, between businesses and government, and within administrations to create incentives for data sharing.
  • A push to make public-sector data more widely available by opening up “high-value datasets” to enable their reuse to foster innovation.
  • Support for cloud infrastructure, platforms and systems to support the data reuse goals, with investments in European high-impact projects on European data spaces and trustworthy, energy-efficient cloud infrastructures.
  • Sector-specific actions to build European data spaces that focus on specific areas such as industrial manufacturing, the European Green Deal, mobility or health.

There are so many ways governments can help businesses maximize their data leverage in ways that improve society. But the American public currently has no appetite for that. They don’t trust the internet.

They want to see Mark Zuckerberg and Jeff Bezos sweating it out under Senate Committee questioning. Until we trust our leaders to protect basic online rights, widespread data empowerment initiatives will not be politically viable.

In Europe, the equation is totally different. GDPR was the foundation of a European data strategy, not the capstone.

While the EU powers forward, America’s ability to enact federal privacy reform is stymied by two quintessentially American privacy sticking points:

  • Can I personally sue a business that violates my privacy rights?
  • Can individual states build additional privacy protections on top of a federal law, or will it act as a nationwide “ceiling”?

These are important questions that must be answered as a function of our country’s unique cultural and political history. But currently they’re the roadblocks that stall American industry while the EU, seatbelts secure, begins speeding down the data autobahn.

If you want a visceral example of how this gap is already impacting American businesses, look no further than the fallout of the ECJ’s Schrems II decision in the middle of last summer. Europe’s highest court invalidated a key agreement used to transfer EU data back to the U.S., essentially because there’s no federal law to ensure EU citizens’ data would be protected once it lands in America.

The legal wrangling continues, but the impact of this decision was so considerable that Facebook legitimately threatened to quit operating in Europe if the Schrems II ruling was enforced.

While the issues generated for smaller businesses don’t grab as many headlines, rest assured that on the front lines of this issue I’ve seen many SMBs’ data operations thrown into total chaos. In other words, the geopolitical battle for a data-driven business edge is already well underway. We are losing.

To sum it up, the United States increasingly finds itself in a position that’s unprecedented since the dawn of the internet era: laggard. American tech companies still innovate at a fantastic rate, but America’s inability to marshal private sector practices to reflect evolving public sentiment threatens to become a yoke around the economy’s neck.

The catastrophic response to the COVID-19 pandemic fell far short of other nations’ efforts. Our handling of data privacy protection costs far less in human terms, but it grows astronomically more expensive in dollar terms with every passing day.

The technology exists to treat users respectfully in a cost-effective manner. The public will is there. The business will is there. The legislative capability is there.

That’s why I believe America’s startup community should demand federal lawmakers follow the recent example of Europe, India, New Zealand, Brazil, South Africa and Canada. They need to introduce federally guaranteed modern data privacy protections as soon as possible.

#column, #data-security, #digital-rights, #europe, #general-data-protection-regulation, #opinion, #policy, #privacy, #startups

One CMO’s journey with risk management and compliance

Marketers don’t grow up daydreaming about risk management and compliance. Personally, I never gave governance, risk or compliance (GRC) a second thought outside of making sure my team completed required compliance or phishing training from time to time.

So, when I was tasked with leading the General Data Protection Regulation (GDPR) compliance initiative at a previous employer, I was far from my comfort zone.

What I thought would be a few small requirements regarding how and when we sent emails to contacts based in Europe quickly turned into a complete overhaul of how the organization collected, processed and protected personally identifiable information (PII).

As it turned out, I had completely underestimated the scope and importance of the project. My first mistake? Assuming compliance was “someone else’s issue.”

Risk management is a team sport

No single risk leader can assess, manage and resolve an organization’s risks alone. Without active involvement from business unit leaders across the company in marketing, human resources, sales and more, a company can never have a healthy risk-aware culture.

Leaders successful at developing that culture instill a company-wide team mentality with well-defined objectives, a clear scope and an agreed-upon allocation of responsibility. Ultimately, you need buy-in similar to the way a football coach needs players to buy into the team’s culture and plays for peak performance. While the company’s risk managers may be the quarterbacks when it comes to GRC, the team won’t win without key plays by linemen (sales), running backs (marketing) and receivers (procurement).

It is a risk leader’s job to facilitate conversations around risk and help guide business unit leaders to finding their own risk appetites. It’s not their job to define acceptable levels of risk for us, which is why CMOs, HR and sales leaders have no choice but to take an active role in defining risk for their departments.

Shifting my view on risk management

If I am being honest, I used to think about risk management only in terms of asset protection and cost reduction. My crash course in risk responsibility opened my eyes to the many ways GRC can actually speed deals and, moreover, drive revenue.

Microsoft launches Azure Purview, its new data governance service

As businesses gather, store and analyze an ever-increasing amount of data, tools for helping them discover, catalog, track and manage how that data is shared are also becoming increasingly important. With Azure Purview, Microsoft is today launching a data governance service into public preview that brings together all of these capabilities in a single data catalog with discovery and data governance features.

As Rohan Kumar, Microsoft’s corporate VP for Azure Data, told me, this has become a major pain point for enterprises. While they may be very excited about getting started with data-heavy technologies like predictive analytics, those companies’ data- and privacy-focused executives need to be sure that the way the data is used is compliant and that the company has obtained the right permissions to use its customers’ data, for example.

In addition, companies also want to make sure that they can trust their data and know who has access to it and who made changes to it.

“[Purview] is a unified data governance platform which automates the discovery of data, cataloging of data, mapping of data, lineage tracking — with the intention of giving our customers a very good understanding of the breadth of the data estate that exists to begin with, and also to ensure that all these regulations that are there for compliance, like GDPR, CCPA, etc, are managed across an entire data estate in ways which enable you to make sure that they don’t violate any regulation,” Kumar explained.

At the core of Purview is its catalog, which can pull in data not only from the usual suspects, like Azure’s various data and storage services, but also from third-party data stores, including Amazon’s S3 storage service and on-premises SQL Server. Over time, the company will add support for more data sources.

Kumar described this process as a ‘multi-semester investment,’ so the capabilities the company is rolling out today are only a small part of what’s on the overall roadmap already. With this first release today, the focus is on mapping a company’s data estate.

“Next [on the roadmap] is more of the governance policies,” Kumar said. “Imagine if you want to set things like ‘if there’s any PII data across any of my data stores, only this group of users has access to it.’ Today, setting up something like that is extremely complex and most likely you’ll get it wrong. That’ll be as simple as setting a policy inside of Purview.”
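
Purview’s policy layer isn’t public yet, so here is a minimal Python sketch of the kind of rule Kumar describes. Everything in it (the class names, the classification labels, the group name) is an illustrative assumption, not Purview’s actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names are illustrative assumptions, not
# Purview's real API. It models the rule Kumar describes: "if there's any
# PII data across any of my data stores, only this group of users has
# access to it."

@dataclass
class Column:
    name: str
    classification: str  # e.g. "PII.Email", "Public"

@dataclass
class PiiAccessPolicy:
    allowed_groups: set = field(default_factory=lambda: {"privacy-officers"})

    def can_read(self, user_groups: set, column: Column) -> bool:
        if column.classification.startswith("PII"):
            # PII columns are readable only by explicitly allowed groups.
            return bool(user_groups & self.allowed_groups)
        return True  # non-PII columns stay broadly readable

policy = PiiAccessPolicy()
print(policy.can_read({"marketing"}, Column("email", "PII.Email")))          # False
print(policy.can_read({"privacy-officers"}, Column("email", "PII.Email")))  # True
```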

In addition to launching Purview, the Azure team also today launched Azure Synapse, Microsoft’s next-generation data warehousing and analytics service, into general availability. The idea behind Synapse is to give enterprises — and their engineers and data scientists — a single platform that brings together data integration, warehousing and big data analytics.

“With Synapse, we have this one product that gives a completely no-code experience for data engineers, as an example, to build out these [data] pipelines and collaborate very seamlessly with the data scientists who are building out machine learning models, or the business analysts who build out reports for things like Power BI,” Kumar said.

Among Microsoft’s marquee customers for the service, which Kumar described as one of the fastest-growing Azure services right now, are FedEx, Walgreens, Myntra and P&G.

“The insights we gain from continuous analysis help us optimize our network,” said Sriram Krishnasamy, senior vice president, strategic programs at FedEx Services. “So as FedEx moves critical high value shipments across the globe, we can often predict whether that delivery will be disrupted by weather or traffic and remediate that disruption by routing the delivery from another location.”

‘Resident Evil’ game maker Capcom confirms data breach after ransomware attack

Capcom, the Japanese game maker behind the Resident Evil and Street Fighter franchises, has confirmed that hackers stole customer data and files from its internal network following a ransomware attack earlier in the month.

That’s an about-turn from the days immediately following the cyberattack, when Capcom said it had no evidence that customer data had been accessed.

In a statement, the company said data on as many as 350,000 customers may have been stolen, including names, addresses, phone numbers, and in some cases dates of birth. Capcom said the hackers also stole its own internal financial data and human resources files on current and former employees, which included names, addresses, dates of birth, and photos. The attackers also took “confidential corporate information,” the company said, including documents on business partners, sales, and development.

Capcom said that no credit card information was taken, as payments are handled by a third-party company.

But the company warned that the overall amount of data stolen “cannot specifically be ascertained” due to losing its own internal logs in the cyberattack.

Capcom apologized for the breach. “Capcom offers its sincerest apologies for any complications and concerns that this may bring to its potentially impacted customers as well as to its many stakeholders,” the statement read.

The video games maker was hit by the Ragnar Locker ransomware on November 2, prompting the company to shut down its network. Ragnar Locker is a data-stealing ransomware, which exfiltrates data from a victim before encrypting its network, and then threatens to publish the stolen files unless a ransom is paid. In doing so, ransomware groups can still demand a company pays the ransom even if the victim restores their files and systems from backups.

Ragnar Locker’s website now lists data allegedly stolen from Capcom, with a message implying that the company did not pay the ransom.

Capcom said it had informed data protection regulators in Japan and the United Kingdom, the latter as required under Europe’s GDPR data breach notification rules. Companies can be fined up to 4% of their annual revenue for falling foul of GDPR.

If the ad industry is serious about transparency, let’s open-source our SDKs

Year after year, a lack of transparency in how ad traffic is sourced, sold and measured is cited by advertisers as a source of frustration and a barrier to entry in working with various providers. But despite progress on the protection and privacy of data through laws like GDPR and COPPA, the overall picture regarding ad-marketing transparency has changed very little.

In part, this is due to the staggering complexity of how programmatic and other advertising technologies work. With automated processes managing billions of impressions every day, there is no universal solution for making things simpler and clearer. So the struggle for the industry is not necessarily a lack of intent around transparency, but rather how to deliver it.

Frustratingly, evidence shows that the way data is collected and used by some industry players has played a large part in reducing people’s trust in online advertising. This is not a problem that was created overnight. There is a long history and growing sense of consumer frustration with the way their data is being used, analyzed and monetized, and a similar frustration among advertisers with the transparency and legitimacy of the ad clicks they are asked to pay for.

There are continuing efforts by organizations like the IAB and TAG to create policies for better transparency such as ads.txt. But without hard and fast laws, the responsibility lies with individual companies.

One relatively simple yet largely spurned practice that would engender transparency and trust for the benefit of all parties (brands, consumers and ad/marketing providers) would be for the industry to come together and have all parties open their SDKs.

Why open-sourcing benefits advertisers, publishers and the ad industry

Open-source software is code that anyone is free to use, analyze, alter and improve.

Auditing the code and adjusting an SDK’s functionality based on individual needs is common practice — and so too are audits by security companies or interested parties who are rightly on the lookout for app fraud. Showing exactly how the code within the SDK has been written is the best way to reassure developers and partners that there are no hidden functions or unwanted features.

Everyone using open-source SDKs can learn exactly how they work, and because the code is under an open-source license, anyone can suggest modifications and improvements.

Open source brings some risks, but much bigger rewards

The main risk of opening up an SDK’s code is that third parties will look for ways to exploit it and insert their own malicious code, or probe for vulnerabilities that could expose back-end services and data. However, providers should be on the lookout for such issues and able to fix potential vulnerabilities as they arise.

As for the rewards, open-sourcing engenders trust and transparency, which should certainly translate into customer loyalty and consumer confidence. After all, we are all operating in a market where advertisers and developers can choose who they want to work with — and on what terms.

Selfishly but practically speaking, opening SDKs can also help companies in our industry protect themselves from others’ baseless claims that are simply intended to promote their products. With open standards, there are no unsubstantiated, false accusations intended for publicity. The proof is out there for everyone to see.

How ad tech is embracing open source

In the ad tech space, companies such as MoPub, Appodeal and AppsFlyer are just a few that have already made some or all of their SDKs available through an open-source license.

All of these companies have decided to use an open-source approach because they recognize the importance of transparency and trust, especially when you are placing the safety and reputation of your brand in the hands of an algorithm. However, the majority of SDKs remain closed.

Relying on forward-thinking companies to set their own transparency levels will only take our industry so far. It’s time for stronger action around trust and data transparency. In the same way that GDPR and COPPA have required companies to address privacy and ultimately forced a change that was needed, open-sourcing our SDKs will take the ad-marketing space to new heights and drive new levels of trust and deployment with our clients, competitors, legislators and consumers.

The industry-wide challenge of transparency won’t be solved any time soon, but the positive news is that there is movement in the right direction, with steps that some companies are already taking and others can easily take. By implementing measures to ensure brand-safe placements and limit ad fraud, improving relationships between brands, agencies and programmatic partners, and bringing clarity to consumer data use, we will improve confidence in the advertising industry, and opportunities will subsequently grow.

That’s why we are calling on all ad/marketing companies to take this step forward with us — for the benefit of our consumers, brands, providers and industry at large — to embrace open-source SDKs as the way to engender trust, transparency and industry transformation. In doing so, we will all be rewarded with consumers who are more trusting of brands and brand advertising, and subsequently, brands who trust us and seek opportunities to implement more sophisticated solutions and grow their business.

Privacy data management innovations reduce risk, create new revenue channels

Privacy data mismanagement is a lurking liability within every commercial enterprise. The very definition of privacy data is evolving over time and has been broadened to include information concerning an individual’s health, wealth, college grades, geolocation and web surfing behaviors. Regulations are proliferating at state, national and international levels that seek to define privacy data and establish controls governing its maintenance and use.

Existing regulations are relatively new and are being translated into operational business practices through a series of judicial challenges that are currently in progress, adding to the confusion regarding proper data handling procedures. In this confusing and sometimes chaotic environment, the privacy risks faced by almost every corporation are frequently ambiguous, constantly changing and continually expanding.

Conventional information security (infosec) tools are designed to prevent the inadvertent loss or intentional theft of sensitive information. They are not sufficient to prevent the mismanagement of privacy data. Privacy safeguards not only need to prevent loss or theft but they must also prevent the inappropriate exposure or unauthorized usage of such data, even when no loss or breach has occurred. A new generation of infosec tools is needed to address the unique risks associated with the management of privacy data.

The first wave of innovation

A variety of privacy-focused security tools emerged over the past few years, triggered in part by the introduction of GDPR (General Data Protection Regulation) within the European Union in 2018. New capabilities introduced by this first wave of innovation were focused in the following three areas:

Data discovery, classification and cataloging. Modern enterprises collect a wide variety of personal information from customers, business partners and employees at different times for different purposes with different IT systems. This data is frequently disseminated throughout a company’s application portfolio via APIs, collaboration tools, automation bots and wholesale replication. Maintaining an accurate catalog of the location of such data is a major challenge and a perpetual activity. BigID, DataGuise and Integris Software have gained prominence as popular solutions for data discovery. Collibra and Alation are leaders in providing complementary capabilities for data cataloging.
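
None of these vendors publish their detection engines, but the core discovery-and-classification step is easy to illustrate. Here is a minimal Python sketch, assuming deliberately naive regex patterns; real scanners go much further:

```python
import re

# Simplified illustration of PII discovery/classification: scan records
# for common PII patterns and tag what's found. The patterns here are
# naive assumptions; production tools use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify(record: str) -> set:
    """Return the set of PII categories detected in a record."""
    return {label for label, pattern in PII_PATTERNS.items()
            if pattern.search(record)}

print(classify("Contact Jane at jane@example.com or 555-867-5309"))
# -> {'email', 'phone'} (set order may vary)
```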

Consent management. Individuals are commonly presented with privacy statements describing the intended use and safeguards that will be employed in handling the personal data they supply to corporations. They consent to these statements — either explicitly or implicitly — at the time such data is initially collected. Osano, Transcend.io and DataGrail.io specialize in the management of consent agreements and the enforcement of their terms. These tools enable individuals to exercise their consensual data rights, such as the right to view, edit or delete personal information they’ve provided in the past.
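
As a hedged sketch rather than any vendor’s actual design, a consent ledger supporting the view and delete rights described above might look something like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative assumption of how a consent ledger could be structured;
# not Osano's, Transcend's or DataGrail's actual data model.

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str        # e.g. "marketing-email"
    granted: bool
    recorded_at: datetime

LEDGER = {}  # subject_id -> list of ConsentRecord

def record_consent(subject_id, purpose, granted):
    LEDGER.setdefault(subject_id, []).append(
        ConsentRecord(subject_id, purpose, granted,
                      datetime.now(timezone.utc)))

def handle_rights_request(subject_id, action):
    """'view' returns the subject's history; 'delete' erases it."""
    if action == "view":
        return list(LEDGER.get(subject_id, []))
    if action == "delete":
        return LEDGER.pop(subject_id, [])

record_consent("user-42", "marketing-email", True)
print(handle_rights_request("user-42", "view"))
```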

Facebook trails expanding portability tools ahead of FTC hearing

Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.

In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.

It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.

Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.

It suggests expanding portability in such ways could help content creators build their brands on other platforms, or help event organizers track Facebook events using calendar-based tools.

However there are no firm commitments from Facebook to any specific portability product launches or expansions of what it offers currently.

For now the tech giant only lets Facebook users directly send copies of their photos to Google’s eponymous photo storage service — a transfer tool it switched on for all users this June.

“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.

On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”

Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.

But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.

The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.

The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.

In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.

Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — the Access Act, sponsored by Democratic Senators Richard Blumenthal and Mark Warner and Republican Senator Josh Hawley — which would require large tech platforms to let their users easily move their data to other services.

Madhani dubbed it a good first step, though, adding that the company will continue to engage with the lawmakers on shaping its contents.

“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.

Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.

In recent months Mark Zuckerberg has also put in facetime to lobby EU lawmakers on the topic, as they work on updating regulations around digital services.

The Facebook founder pushed the European Commission to narrow the types of data that should fall under portability rules. In the public discussion with commissioner Thierry Breton, in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.

Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.

Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.

“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.

Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.

On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.

“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.

Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”

In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.

Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.

“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.

However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.

Further delay to GDPR enforcement of 2018 Twitter breach

Twitter users have to wait longer to find out what penalties, if any, the platform faces under the European Union’s General Data Protection Regulation (GDPR) for a data breach that dates back around two years.

In the meantime the platform has continued to suffer security failures — including, just last month, when hackers gained control of scores of verified accounts and tweeted out a crypto scam.

The tech firm’s lead regulator in the region, Ireland’s Data Protection Commission (DPC), began investigating an earlier Twitter breach in November 2018 — completing the probe earlier this year and submitting a draft decision to other EU DPAs for review in May, just ahead of the second anniversary of the GDPR’s application.

In a statement on the development, Graham Doyle, the DPC’s deputy commissioner, told TechCrunch: “The Irish Data Protection Commission (DPC) issued a draft decision to other Concerned Supervisory Authorities (CSAs) on 22 May 2020, in relation to this inquiry into Twitter. A number of objections were raised by CSAs and the DPC engaged in a consultation process with them. However, following consultation a number of objections were maintained and the DPC has now referred the matter to the European Data Protection Board (EDPB) under Article 65 of the GDPR.”

Under the regulation’s one-stop-shop mechanism, cross-border cases are handled by a lead regulator — typically in the country where the business has established its regional base. For many tech companies that means Ireland, which gives the DPC an outsized role in regulating Silicon Valley’s handling of people’s data.

This means it now has a huge backlog of highly anticipated complaints relating to tech giants including Apple, Facebook, Google, LinkedIn and indeed Twitter. The regulator also continues to face criticism for not yet ‘getting it over the line’ in any of these complaints and investigations pertaining to big tech. So the Twitter breach case is being especially closely watched as it looks set to be the Irish DPC’s first enforcement decision in a cross-border GDPR case.

Last year commissioner Helen Dixon said the first of these decisions would be coming “early” in 2020. In the event, we’re past the halfway mark of the year with still no enforcement to show for it. Though the DPC emphasizes the need to follow due process to ensure final decisions stand up to any challenge.

The latest delay in the Twitter case is a consequence of disagreements between the DPC and other regional watchdogs which, under the rules of GDPR, have a right to raise objections on a draft decision where users in their countries are also affected.

It’s not clear what specific objections have been raised to the DPC’s draft Twitter decision, or indeed what Ireland’s regulator has decided in what should be a relatively straightforward case, given it’s a breach — not a complaint about a core element of a data-mining business model.

Far more complex complaints are still sitting on the DPC’s desk. Doyle confirmed that a complaint pertaining to WhatsApp’s legal basis for sharing user data with Facebook remains the next most progressed in the stack, for example.

So, given the DPC’s Twitter breach draft decision hasn’t been universally accepted by Europe’s data watchdogs, it’s all but inevitable the Facebook-WhatsApp complaint will go through the same objections process. Ergo, expect more delays.

Article 65 of the GDPR sets out a process for handling objections on draft decisions. It allows for one month for DPAs to reach a two-thirds majority, with the possibility for a further extension of another month — which would push a decision on the Twitter case into late October.

If there are still not enough votes in favor at that point, a further two weeks are allowed for EDPB members to reach a simple majority. If DPAs are still split, the Board chair, currently Andrea Jelinek, has the deciding vote. So the body looks set to play a key role in major decisions over big tech.
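
To make that timetable concrete, here is a quick Python sketch of the Article 65 clock. The referral date is an assumption for illustration, since no exact date is given here, and the windows use 30-day months for simplicity:

```python
from datetime import date, timedelta

referral = date(2020, 8, 20)  # assumed referral date, for illustration only

two_thirds_deadline = referral + timedelta(days=30)                # month one
extended_deadline = two_thirds_deadline + timedelta(days=30)       # optional extension
simple_majority_deadline = extended_deadline + timedelta(days=14)  # final two weeks

print(two_thirds_deadline)       # 2020-09-19
print(extended_deadline)         # 2020-10-19 -> "late October"
print(simple_majority_deadline)  # 2020-11-02 -> a decision slipping into November
```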

We’ve reached out to the EDPB with questions related to the Twitter objections and will update this report with any response.

The Article 65 process exists to try to find consensus across a patchwork of national and regional data supervisors. But it won’t silence critics who argue the GDPR is not able to be applied fast enough to uphold EU citizens’ rights in the face of fast-iterating data-mining giants.

To wit: Given the latest developments, a final decision on the Twitter breach could be delayed until November — a full two years after the investigation began.

A two-year review of GDPR by the European Commission earlier this summer, meanwhile, highlighted a lack of uniformly vigorous enforcement. Commissioners signalled a willingness to wait and see how the one-stop-shop mechanism runs its course on cross-border cases, while admitting there’s a need to reinforce cooperation and coordination on cross-border issues.

“We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” justice commissioner, Didier Reynders, said in June.

“The best answer will be a decision from the Irish data protection authority about important cases,” he added then.

UK class action style claim filed over Marriott data breach

A class action style suit has been filed in the UK against hotel group Marriott International over a massive data breach that exposed the information of some 500 million guests around the world, including around 30 million residents of the European Union, between July 2014 and September 2018.

The representative legal action against Marriott has been filed by UK resident, Martin Bryant, on behalf of millions of hotel guests domiciled in England & Wales who made reservations at hotel brands globally within the Starwood Hotels group, which is now part of Marriott International.

Hackers gained access to the systems of the Starwood Hotels group, starting in 2014, where they were able to help themselves to information such as guests’ names; email and postal addresses; telephone numbers; gender and credit card data. Marriott International acquired the Starwood Hotels group in 2016 — but the breach went undiscovered until 2018.

Bryant is being represented by international law firm, Hausfeld, which specialises in group actions.

Commenting in a statement, Hausfeld partner, Michael Bywell, said: “Over a period of several years, Marriott International failed to take adequate technical or organisational measures to protect millions of their guests’ personal data which was entrusted to them. Marriott International acted in clear breach of data protection laws specifically put in place to protect data subjects.”

“Personal data is increasingly critical as we live more of our lives online, but as consumers we don’t always realise the risks we are exposed to when our data is compromised through no fault of our own. I hope this case will raise awareness of the value of our personal data, result in fair compensation for those of us who have fallen foul of Marriott’s vast and long-lasting data breach, and also serve notice to other data owners that they must hold our data responsibly,” added Bryant in another supporting statement.

We’ve reached out to Marriott International for comment on the legal action.

A claim website for the action invites other eligible UK individuals to register their interest — and “hold Marriott to account for not securing your personal data”, as it puts it.

Here are the details of who is eligible to register their interest:

The ‘class’ of claimants on whose behalf the claim is brought includes all individuals who at any date prior to 10 September 2018 made a reservation online at a hotel operating under any of the following brands: W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotel & Resorts, Four Points by Sheraton, Design Hotels. In addition, any other brand owned and/or operated by Marriott International Inc or Starwood Hotels and Resorts Worldwide LLC. The individuals must have been resident in England and Wales at some point during the relevant period prior to 10 September 2018 and are resident in England and Wales at the date the claim was issued. They must also have been at least 18 years old at the date the claim was issued.
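
Stripped of the legalese, the class definition reduces to four conditions that must all hold. As a purely illustrative restatement in Python (the parameter names are mine, not the claim form’s):

```python
# Illustrative restatement of the eligibility criteria above; the
# parameter names are my own shorthand, not the claim site's wording.
def is_eligible(booked_listed_brand_online_before_10_sep_2018: bool,
                resident_in_england_or_wales_during_period: bool,
                resident_in_england_or_wales_when_claim_issued: bool,
                aged_18_or_over_when_claim_issued: bool) -> bool:
    return all((
        booked_listed_brand_online_before_10_sep_2018,
        resident_in_england_or_wales_during_period,
        resident_in_england_or_wales_when_claim_issued,
        aged_18_or_over_when_claim_issued,
    ))

print(is_eligible(True, True, True, True))   # eligible
print(is_eligible(True, True, False, True))  # not eligible
```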

The claim is being brought as a representative action under Rule 19.6 of the Civil Procedure Rules, per a press release, which also notes that everyone with the same interest as Bryant is included in the claimant class unless they opt out.

Those eligible to participate face no fees or costs, nor do affected guests face any financial risk from the litigation — which is being fully funded by Harbour Litigation Funding, a global litigation funder.

The suit is the latest sign that litigation funders are willing to take a punt on representative actions in the UK as a route to obtaining substantial damages for data issues. Another class action style suit was announced last week — targeting tracking cookies operated by data broker giants, Oracle and Salesforce.

Both lawsuits follow a landmark decision by a UK appeals court last year which allowed a class action-style suit against Google’s use between 2011 and 2012 of tracking cookies to override iPhone users’ privacy settings in Apple’s Safari browser to proceed, overturning an earlier court decision to toss the case.

The other unifying factor is the existence of Europe’s General Data Protection Regulation (GDPR) framework which has opened the door to major fines for data protection violations. So even if EU regulators continue to lack uniform vigour in enforcing data protection law, there’s a chance the region’s courts will do the job for them if more litigation funders see value in bringing representative cases to pursue damages for privacy violations.

The dates of the Marriott data breach mean it falls under GDPR — which came into application in May 2018.

The UK’s data watchdog, the ICO, proposed a $123M fine for the security failing in July last year — saying then that the hotel operator had “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems”.

However it has yet to hand down a final decision. Asked when the Marriott decision will be finalized, an ICO spokeswoman told us the “regulatory process” has been extended until September 30. No additional detail was offered to explain the delay.

Here’s the regulator’s statement in full:

Under Schedule 16 of the Data Protection Act 2018, Marriott has agreed to an extension of the regulatory process until 30 September. We will not be commenting until the regulatory process has concluded.

EU websites’ use of Google Analytics and Facebook Connect targeted by post-Schrems II privacy complaints

A month after Europe’s top court struck down a flagship data transfer arrangement between the EU and the US as unsafe, European privacy campaign group noyb has filed complaints with regional regulators against 101 websites whose operators it has identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations.

Among the entities listed in its complaint are ecommerce companies, publishers & broadcasters, telcos & ISPs, banks and universities — including Airbnb Ireland, Allied Irish Banks, Danske Bank, Fastweb, MTV Internet, Sky Deutschland, Takeaway.com and Tele2, to name a few.

“A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) — despite both companies clearly falling under US surveillance laws, such as FISA 702,” the campaign group writes on its website.

“Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the ‘Privacy Shield’ a month after it was invalidated, while Facebook continues to use the ‘SCCs’ [Standard Contractual Clauses], despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.”

We’ve reached out to Facebook and Google with questions about their legal bases for such transfers — and will update this report with any response.

Privacy watchers will know that noyb’s founder, Max Schrems, was responsible for the original legal challenge that took down an earlier EU-US data arrangement, Safe Harbor, all the way back in 2015. His updated complaint ended up taking down the EU-US Privacy Shield last month — although he’d actually targeted Facebook’s use of a separate data transfer mechanism (SCCs), urging its data supervisor, Ireland’s DPC, to step in and suspend the company’s use of that tool.

The regulator chose to go to court instead, raising wider concerns about the legality of EU-US data transfer arrangements — which resulted in the CJEU concluding that the Commission should not have granted the US a so-called ‘adequacy agreement’, thus pulling the rug out from under Privacy Shield.

The decision means the US is now what’s considered a ‘third country’ in data protection terms, with no special arrangement to enable it to process EU users’ information.

More than that, the court’s ruling also made it clear EU data watchdogs have a responsibility to intervene where they suspect there are risks to EU people’s data if it’s being transferred to a third country via SCCs.

European data watchdogs swiftly warned there would be no grace period for entities still illegally relying on Privacy Shield — so anyone listed in the above complaint that’s still referencing the defunct mechanism in their privacy policy won’t even have a proverbial figleaf to hide their legal blushes.

noyb’s contention with this latest clutch of complaints is that none of the aforementioned 101 websites has a valid legal basis to keep transferring visitor data to the US via the embedded Google Analytics and/or Facebook Connect integrations.

“We have done a quick search on major websites in each EU member state for code from Facebook and Google. These code snippets forward data on each visitor to Google or Facebook. Both companies admit that they transfer data of Europeans to the US for processing, where these companies are under a legal obligation to make such data available to US agencies like the NSA. Neither Google Analytics nor Facebook Connect are essential to run these webpages and are services that could have been replaced or at least deactivated by now,” said Schrems, honorary chair of noyb.eu, in a statement.
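
noyb has not published its scanning tooling, but the static check Schrems describes is simple to approximate. Here is a minimal Python sketch; the hostnames are assumptions based on how these snippets are commonly embedded, and a real audit would also inspect scripts loaded at runtime:

```python
import urllib.request

# Look for telltale tracker hostnames in a page's HTML source. This only
# catches snippets visible in the static markup; dynamically injected
# trackers would need a headless browser to detect.
TRACKER_HOSTS = {
    "Google Analytics": ("google-analytics.com", "googletagmanager.com"),
    "Facebook Connect": ("connect.facebook.net",),
}

def detect_trackers(url: str) -> list:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return [name for name, hosts in TRACKER_HOSTS.items()
            if any(host in html for host in hosts)]

print(detect_trackers("https://example.com"))  # [] for a tracker-free page
```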

Since the CJEU’s Schrems II ruling, and indeed since the Safe Harbor strike down, the US Department of Commerce and European Commission have stuck their heads in the sand — signalling they intend to try cobbling together another data pact to replace the defunct Privacy Shield (which replaced the blasted-to-smithereens (un)Safe Harbor. So, er… ).

Yet without root-and-branch reform of US surveillance law, any third pop by respective lawmakers at papering over the legal schism of US national security priorities vs EU privacy rights is just as surely doomed to fail.

The more cynical among you might say the high level administrative manoeuvers around this topic are, in fact, simply intended to buy more time — for the data to keep flowing and ‘business as usual’ to continue.

But there is now substantial legal risk attached to a strategy of trying to pretend US surveillance law doesn’t exist.

Here’s Schrems again, on last month’s CJEU ruling, suggesting that Facebook and Google could be in the frame for legal liability if they don’t proactively warn EU customers of their data responsibilities: “The Court was explicit that you cannot use the SCCs when the recipient in the US falls under these mass surveillance laws. It seems US companies are still trying to convince their EU customers of the opposite. This is more than shady. Under the SCCs the US data importer would instead have to inform the EU data sender of these laws and warn them. If this is not done, then these US companies are actually liable for any financial damage caused.”

And as noyb’s press release notes, GDPR’s penalties regime can scale as high as 4% of the worldwide turnover of the EU sender and the US recipient of personal data. So, again, hi Facebook, hi Google…

The crowdfunded campaign group has pledged to continue dialling up the pressure on EU regulators to act and on EU data processors to review any US data transfer arrangements — and “adapt to the clear ruling by the EU’s supreme court”, as it puts it.

Other types of legal action are also starting to draw on Europe’s General Data Protection Regulation (GDPR) framework — and, importantly, attract funding — such as two class action style suits filed against Oracle and Salesforce’s use of tracking cookies earlier this month. (As we said when GDPR came into force back in 2018, the lawsuits are coming.)

Now, with two clear strikes from the CJEU on the issue of US surveillance law vs EU data protection, it looks like it’ll be diminishing returns for US tech giants hoping to pretend everything’s okay on the data processing front.

noyb is also putting its money where its mouth is — offering free guidelines and model requests for EU entities to use to help them get their data affairs in prompt legal order. 

“While we understand that some things may need some time to rearrange, it is unacceptable that some players seem to simply ignore Europe’s top court,” Schrems added, in further comments on the latest flotilla of complaints. “This is also unfair towards competitors that comply with these rules. We will gradually take steps against controllers and processors that violate the GDPR and against authorities that do not enforce the Court’s ruling, like the Irish DPC that stays dormant.”

We’ve reached out to Ireland’s Data Protection Commission to ask what steps it will be taking in light of the latest noyb complaints, a number of which target websites that appear to be operated by an Ireland-based legal entity.

Schrems’ original 2013 complaint against Facebook’s use of SCCs also ended up in Ireland, where the tech giant — and many others — locates its EU HQ. Schrems’ request that the DPC order Facebook to suspend its use of SCCs still hasn’t been fulfilled, some seven years and five complaints later. And the regulator continues to face accusations of inaction, given the growing backlog of cross-border GDPR complaints against tech giants like Facebook and Google.

Ireland’s DPC has yet to issue a single final decision on any of these major GDPR complaints. But the legal pressure for it and all EU regulators to get a move on and enforce the bloc’s law will only increase, even as class action style lawsuits are filed to try to do what regulators have failed to do.

Earlier this summer the Commission acknowledged a lack of uniformly “vigorous” enforcement of GDPR in a review of the mechanism’s first two years of operation.

“The European Data Protection Board [EDPB] and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement,” said Věra Jourová, Commission VP for values and transparency then, giving the Commission’s first public assessment of whether GDPR is working.

We’ve also reached out to France’s CNIL to ask what action it will be taking in light of the noyb complaints.

Following the judgement in July the French regulator said it was “conducting a precise analysis”, along with the EDPB, with a view to “drawing conclusions as soon as possible on the consequences of the ruling for data transfers from the European Union to the United States”.

Since then the EDPB guidance has come out — inking the obvious: That transfers on the basis of Privacy Shield “are illegal”. And while the CJEU ruling did not invalidate the use of SCCs it gave only a very qualified green light to continued use.

As we reported last month, the ability to use SCCs to transfer data to the U.S. hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.

“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place,” the EDPB added.

Oracle and Salesforce hit with GDPR class action lawsuits over cookie tracking consent

The use of third party cookies for ad tracking and targeting by data broker giants Oracle and Salesforce is the focus of class action style litigation announced today in the UK and the Netherlands.

The suits will argue that mass surveillance of Internet users to carry out real-time bidding ad auctions cannot possibly be compatible with strict EU laws around consent to process personal data.

The litigants believe the collective claims could exceed €10BN, should they eventually prevail in their arguments — though such legal actions can take several years to work their way through the courts.

In the UK, the case may also face some legal hurdles given the lack of an established model for pursuing collective damages in cases relating to data rights. Though there are signs that’s changing.

Non-profit foundation, The Privacy Collective, has filed one case today with the District Court of Amsterdam, accusing the two data broker giants of breaching the EU’s General Data Protection Regulation (GDPR) in their processing and sharing of people’s information via third party tracking cookies and other adtech methods.

The Dutch case, which is being led by law firm bureau Brandeis, is the biggest-ever class action in the Netherlands related to violation of the GDPR — with the claimant foundation representing the interests of all Dutch citizens whose personal data has been used without their consent and knowledge by Oracle and Salesforce.

A similar case is due to be filed later this month at the High Court in London, England, which will make reference to the GDPR and the UK’s PECR (Privacy and Electronic Communications Regulations) — the latter governing the use of personal data for marketing communications. The case there is being led by law firm Cadwalader.

Under GDPR, consent for processing EU citizens’ personal data must be informed, specific and freely given. The regulation also confers rights on individuals around their data — such as the ability to receive a copy of their personal information.

It’s those requirements the litigation is focused on, with the cases set to argue that the tech giants’ third party tracking cookies, BlueKai and Krux — trackers that are hosted on scores of popular websites, such as Amazon, Booking.com, Dropbox, Reddit and Spotify to name a few — along with a number of other tracking techniques are being used to misuse Europeans’ data on a massive scale.

Per Oracle marketing materials, its Data Cloud and BlueKai Marketplace provide partners with access to some 2BN global consumer profiles. (Meanwhile, as we reported in June, BlueKai suffered a data breach that exposed billions of those records to the open web.)

While Salesforce claims its marketing cloud ‘interacts’ with more than 3BN browsers and devices monthly.

Both companies have grown their tracking and targeting capabilities via acquisition for years: Oracle bagged BlueKai in 2014, and Salesforce snaffled Krux in 2016.

Discussing the lawsuit in a telephone call with TechCrunch, Dr Rebecca Rumbul, class representative and claimant in England & Wales, said: “There is, I think, no way that any normal person can really give informed consent to the way in which their data is going to be processed by the cookies that have been placed by Oracle and Salesforce.

“When you start digging into it there are numerous, fairly pernicious ways in which these cookies can and probably do operate — such as cookie syncing, and the aggregation of personal data — so there’s really, really serious privacy concerns there.”

The real-time bidding (RTB) process fed by the pair’s tracking cookies and techniques, which enables the background, high-velocity trading of individual web users’ profiles as they browse in order to run dynamic ad auctions and serve behavioral ads targeting their interests, has in recent years been the subject of a number of GDPR complaints, including in the UK.

These complaints argue that RTB’s handling of people’s information is a breach of the regulation because it’s inherently insecure to broadcast data to so many other entities — while, conversely, GDPR bakes in a requirement for privacy by design and default.
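
To see why complainants describe RTB as a broadcast, it helps to look at what a bid request carries. The sketch below is loosely modeled on the OpenRTB format but heavily trimmed, with invented values; it is an illustration, not a real exchange payload:

```python
# Loosely modeled on OpenRTB and heavily abridged; all values invented.
bid_request = {
    "id": "auction-8f3a",
    "site": {"page": "https://news.example.com/article"},
    "device": {
        "ua": "Mozilla/5.0 (Windows NT 10.0; ...)",  # fingerprintable user agent
        "ip": "203.0.113.7",                          # geolocatable, even truncated
        "geo": {"lat": 51.51, "lon": -0.13},
    },
    "user": {
        "id": "synced-cookie-123",  # the synced third-party cookie ID at issue
        "data": [{"segment": [{"id": "interest-finance"}]}],
    },
}

# Every bidder receiving this payload sees the profile whether or not it
# wins the auction; that breadth is what the GDPR complaints object to.
print(len(bid_request["user"]["data"][0]["segment"]), "segment(s) broadcast")
```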

The UK Information Commissioner’s Office has, meanwhile, accepted for well over a year that adtech has a lawfulness problem. But the regulator has so far sat on its hands, instead of enforcing the law — leaving the complainants dangling. (Last year, Ireland’s DPC opened a formal investigation of Google’s adtech, following a similar complaint, but has yet to issue a single GDPR decision in a cross-border complaint — leading to concerns of an enforcement bottleneck.)

The two lawsuits targeting RTB aren’t focused on the security allegation, per Rumbul, but are mostly concerned with consent and data access rights.

She confirms they opted to litigate rather than try a regulatory complaint route as a way of exercising their rights, given the “David vs Goliath” nature of bringing claims against the tech giants in question.

“If I was just one tiny person trying to complain to Oracle and trying to use the UK Information Commissioner to achieve that… they simply do not have the resources to direct at one complaint from one person against a company like Oracle — in terms of this kind of scale,” Rumbul told TechCrunch.

“In terms of being able to demonstrate harm, that’s quite a lot of work and what you get back in recompense would probably be quite small. It certainly wouldn’t compensate me for the time I would spend on it… Whereas doing it as a representative class action I can represent everyone in the UK that has been affected by this.

“The sums of money then work — in terms of the depths of Oracle’s pockets, the costs of litigation, which are enormous, and the fact that, hopefully, doing it this way, in a very large-scale, very public forum it’s not just about getting money back at the end of it; it’s about trying to achieve more standardized change in the industry.”

“If Salesforce and Oracle are not successful in fighting this then hopefully that sends out ripples across the adtech industry as a whole — encouraging those that are using these quite pernicious cookies to change their behaviours,” she added.

The litigation is being funded by Innsworth, a litigation funder which is also funding Walter Merricks’ class action for 46 million consumers against Mastercard in London courts. And the GDPR appears to be helping to change the class action landscape in the UK — as it allows individuals to take private legal action. The framework can also support third parties to bring claims for redress on behalf of individuals. While changes to domestic consumer rights law also appear to be driving class actions.

Commenting in a statement, Ian Garrard, managing director of Innsworth Advisors, said: “The development of class action regimes in the UK and the availability of collective redress in the EU/EEA mean Innsworth can put money to work enabling access to justice for millions of individuals whose personal data has been misused.”

A separate and still ongoing lawsuit in the UK, which is seeking damages from Google on behalf of Safari users whose privacy settings it historically ignored, also looks to have bolstered the prospects of class action style legal actions related to data issues.

While the courts initially tossed the suit last year, the appeals court overturned that ruling — rejecting Google’s argument that UK and EU law requires “proof of causation and consequential damage” in order to bring a claim related to loss of control of data.

The judge said the claimant did not need to prove “pecuniary loss or distress” to recover damages, and also allowed the class to proceed without all the members having the same interest.

Discussing that case, Rumbul suggests a pending final judgement there (likely next year) may have a bearing on whether the lawsuit she’s involved with can be taken forward in the UK.

“I’m very much hoping that the UK judiciary are open to seeing these kind of cases come forward because without these kinds of things as very large class actions it’s almost like closing the door on this whole sphere of litigation. If there’s a legal ruling that says that case can’t go forward and therefore this case can’t go forward I’d be fascinated to understand how the judiciary think we’d have any recourse to these private companies for these kind of actions,” she said.

Asked why the litigation has focused on Oracle and Salesforce, given there are so many firms involved in the adtech pipeline, she said: “I am not saying that they are necessarily the worst or the only companies that are doing this. They are however huge, huge international multimillion-billion dollar companies. And they specifically went out and purchased different bits of adtech software, like BlueKai, in order to bolster their presence in this area — to bolster their own profits.

“This was a strategic business decision that they made to move into this space and become massive players. So in terms of the adtech marketplace they are very, very big players. If they are able to be held to account for this then it will hopefully change the industry as a whole. It will hopefully reduce the places to hide for the other more pernicious cookie manufacturers out there. And obviously they have huge, huge revenues so in terms of targeting people who are doing a lot of harm and that can afford to compensate people these are the right companies to be targeting.”

Rumbul also told us The Privacy Collective is looking to collect stories from web users who feel they have experienced harm related to online tracking.

“There’s plenty of evidence out there to show that how these cookies work means you can have very, very egregious outcomes for people at an individual level,” she added. “Whether that can be related to personal finance, to manipulation of addictive behaviors, whatever, these are all very, very possible — and they cover every aspect of our lives.”

Consumers in England and Wales and the Netherlands are being encouraged to register their support of the actions via The Privacy Collective’s website.

In a statement, Christiaan Alberdingk Thijm, lead lawyer at Brandeis, said: “Your data is being sold off in real-time to the highest bidder, in a flagrant violation of EU data protection regulations. This ad-targeting technology is insidious in that most people are unaware of its impact or the violations of privacy and data rights it entails. Within this adtech environment, Oracle and Salesforce perform activities which violate European privacy rules on a daily basis, but this is the first time they are being held to account. These cases will draw attention to astronomical profits being made from people’s personal information, and the risks to individuals and society of this lack of accountability.”

“Thousands of organisations are processing billions of bid requests each week with at best inconsistent application of adequate technical and organisational measures to secure the data, and with little or no consideration as to the requirements of data protection law about international transfers of personal data. The GDPR gives us the tool to assert individuals’ rights. The class action means we can aggregate the harm done,” added partner Melis Acuner from Cadwalader in another supporting statement.

We reached out to Oracle and Salesforce for comment on the litigation.

Oracle EVP and general counsel, Dorian Daley, said:

The Privacy Collective knowingly filed a meritless action based on deliberate misrepresentations of the facts. As Oracle previously informed the Privacy Collective, Oracle has no direct role in the real-time bidding process (RTB), has a minimal data footprint in the EU, and has a comprehensive GDPR compliance program. Despite Oracle’s fulsome explanation, the Privacy Collective has decided to pursue its shake-down through litigation filed in bad faith. Oracle will vigorously defend against these baseless claims.

A spokeswoman for Salesforce sent us this statement:

At Salesforce, Trust is our #1 value and nothing is more important to us than the privacy and security of our corporate customers’ data. We design and build our services with privacy at the forefront, providing our corporate customers with tools to help them comply with their own obligations under applicable privacy laws — including the EU GDPR — to preserve the privacy rights of their own customers.

Salesforce and another Data Management Platform provider have received a privacy-related complaint from a Dutch group called The Privacy Collective. The claim applies to the Salesforce Audience Studio service and does not relate to any other Salesforce service.

Salesforce disagrees with the allegations and intends to demonstrate they are without merit.

Our comprehensive privacy program provides tools to help our customers preserve the privacy rights of their own customers. To read more about the tools we provide our corporate customers and our commitment to privacy, visit salesforce.com/privacy/products/

#advertising-tech, #amazon, #bluekai, #booking-com, #consumer-rights-law, #data-protection, #data-protection-law, #data-security, #digital-rights, #dropbox, #europe, #european-union, #gdpr, #general-data-protection-regulation, #google, #information-commissioners-office, #ireland, #krux, #law, #lawsuit, #london, #marketing, #mastercard, #netherlands, #online-tracking, #oracle, #privacy, #salesforce, #spotify, #tc, #united-kingdom


TikTok found to have tracked Android users’ MAC addresses until late last year

Until late last year, social video app TikTok was using an extra layer of encryption to conceal a tactic for tracking Android users via their device’s MAC address, The Wall Street Journal reports. The practice skirted Google’s policies, gave users no way to opt out and, per the report, was never disclosed to them.

Its analysis found that this concealed tracking ended in November as US scrutiny of the company dialled up, after at least 15 months during which TikTok had been gathering the fixed identifier without users’ knowledge.

A MAC address is a unique, fixed identifier assigned to an Internet-connected device, which means it can be repurposed to track the individual user for profiling and ad-targeting purposes. Because it never changes, it can be used to re-link a user who has cleared their advertising ID back to the same device, and therefore to all the prior profiling they wanted to jettison.
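To make that re-linking concrete, here’s a minimal, hypothetical sketch (the names and data model are illustrative, not anything from TikTok’s actual systems) of why a profile store keyed on a fixed identifier survives an advertising-ID reset:

```kotlin
// Hypothetical illustration: a tracker's profile store keyed on the device
// MAC address. Resetting the advertising ID merely adds a new ID to the same
// profile; the accumulated interest data persists because the key never changes.
data class Profile(
    val adIds: MutableSet<String> = mutableSetOf(),
    val interests: MutableSet<String> = mutableSetOf()
)

val profilesByMac = mutableMapOf<String, Profile>()

fun recordEvent(mac: String, adId: String, interest: String) {
    val profile = profilesByMac.getOrPut(mac) { Profile() }
    profile.adIds += adId          // a freshly reset ad ID lands here...
    profile.interests += interest  // ...still attached to the old profiling
}

fun main() {
    recordEvent("02:11:22:33:44:55", "ad-id-1", "fitness")
    recordEvent("02:11:22:33:44:55", "ad-id-2", "travel") // user reset their ad ID
    println(profilesByMac["02:11:22:33:44:55"])
    // Profile(adIds=[ad-id-1, ad-id-2], interests=[fitness, travel])
}
```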

Per the WSJ, TikTok appears to have exploited a known Android loophole, which Google has still failed to plug, in order to gather users’ MAC addresses.
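The WSJ doesn’t detail the exact workaround, but one well-documented gap of this kind is the java.net.NetworkInterface route: Android 6.0 made WifiInfo.getMacAddress() return a constant dummy value, yet apps could still read the real hardware address by enumerating network interfaces (Android 11 later restricted this by having getHardwareAddress() return null for apps targeting API level 30). A sketch of that technique, for illustration only:

```kotlin
import java.net.NetworkInterface
import java.util.Collections

// Illustration of the well-known NetworkInterface gap; not TikTok's code.
// WifiInfo.getMacAddress() has returned the constant 02:00:00:00:00:00 since
// Android 6.0, but enumerating interfaces still exposed the real address on
// Android versions of this era.
fun readWifiMac(): String? =
    Collections.list(NetworkInterface.getNetworkInterfaces())
        .firstOrNull { it.name.equals("wlan0", ignoreCase = true) }
        ?.hardwareAddress  // null on Android 11+ for apps targeting API 30
        ?.joinToString(":") { b -> "%02x".format(b) }
```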

A spokeswoman for TikTok did not deny the substance of its report, nor engage with the specific questions we sent, including on the purpose of this opt-out-less tracking. Instead she sent the below statement, attributed to a spokesperson, in which the company reiterates its now go-to claim that it has never given US user data to the Chinese government:

Under the leadership of our Chief Information Security Officer (CISO) Roland Cloutier, who has decades of experience in law enforcement and the financial services industry, we are committed to protecting the privacy and safety of the TikTok community. We constantly update our app to keep up with evolving security challenges, and the current version of TikTok does not collect MAC addresses. We have never given any US user data to the Chinese government nor would we do so if asked.

“We always encourage our users to download the most current version of TikTok,” the statement added.

With all eyes on TikTok, as the latest target of the Trump administration’s war on Chinese tech firms, scrutiny of the social video app’s handling of user data has inevitably dialled up.

And while no popular social app platform has its hands clean when it comes to user tracking and profiling for ad targeting, TikTok being owned by China’s ByteDance means its flavor of surveillance capitalism has earned it unwelcome attention from the US president — who has threatened to ban the app unless it sells its US business to a US company within a matter of weeks.

Trump’s fixation on China tech, generally, is centered on the claim that the tech firms pose threats to national security in the West via access to Western networks and/or user data.

The US government is able to point to China’s Internet security law, which requires firms to provide the Chinese Communist Party with access to user data, hence TikTok’s emphatic denial that it has passed any. But the existence of that law makes such denials difficult to stick.

TikTok’s problems with user data don’t stop there, either. Yesterday it emerged that France’s data protection watchdog has been investigating TikTok since May, following a user complaint.

The CNIL’s concerns about how the app handled a user request to delete a video have since broadened to encompass issues related to how transparently it communicates with users, as well as to transfers of user data outside the EU — which, in recent weeks, have become even more legally complex in the region.

Compliance with EU rules on data access rights for users and the processing of minors’ information are other areas of stated concern for the regulator.

Under EU law any fixed identifier (e.g. a MAC address) is treated as personal data — meaning it falls under the bloc’s GDPR data protection framework, which places strict conditions on how such data can be processed, including requiring companies to have a legal basis to collect it in the first place.

If TikTok was concealing its tracking of MAC addresses from users it’s difficult to imagine what legal basis it could claim — consent would certainly not be possible. The penalties for violating GDPR can be substantial (France’s CNIL slapped Google with a $57M fine last year under the same framework, for example).

The WSJ’s report notes that the FTC has said MAC addresses are considered personally identifiable information under the Children’s Online Privacy Protection Act — implying the app could also face a regulatory probe on that front, to add to its pile of US problems.

Presented with the WSJ’s findings, Senator Josh Hawley (R., Mo.) told the newspaper that Google should remove TikTok’s app from its store. “If Google is telling users they won’t be tracked without their consent and knowingly allows apps like TikTok to break its rules by collecting persistent identifiers, potentially in violation of our children’s privacy laws, they’ve got some explaining to do,” he said.

We’ve reached out to Google for comment.

#android, #apps, #bytedance, #china, #chinese-communist-party, #encryption, #european-union, #federal-trade-commission, #france, #general-data-protection-regulation, #google, #josh-hawley, #mac-address, #privacy, #security, #social, #targeted-advertising, #tc, #tiktok, #trump-administration, #united-states, #us-government


TikTok is being investigated by France’s data watchdog

More worries for TikTok: A European data watchdog that’s landed the biggest blow on a tech giant to date — slapping Google with a $57M fine last year (upheld in June) — now has an open investigation into the social video app du jour, TechCrunch has confirmed.

A spokeswoman for France’s CNIL told us it opened an investigation into how the app handles user data in May 2020, following a complaint related to a request to delete a video. Its probe of the video sharing platform was reported earlier by Politico.

Under the European Union’s data protection framework, citizens who have given consent for their data to be processed continue to hold a range of rights attached to their personal data, including the ability to request a copy or deletion of the information, or ask for their data in a portable form.

Additional requirements under the EU’s GDPR (General Data Protection Regulation) include transparency obligations to ensure accountability under the framework, which means data controllers must provide data subjects with clear information on the purposes of processing, including in order to obtain legally valid consent to process the data.

The CNIL’s spokeswoman told us its complaint-triggered investigation into TikTok has since widened to include issues related to transparency requirements about how it processes user data; users’ data access rights; transfers of user data outside the EU; and steps the platform takes to ensure the data of minors is adequately protected — a key issue, given the app’s popularity with teens.

French data protection law lets children consent to the processing of their data by information society services such as TikTok from age 15 (younger children need parental consent).

As regards the original complaint, the CNIL’s spokeswoman said the person in question has since been “invited to exercise his rights with TikTok under the GDPR, which he had not taken beforehand” (via Google Translate).

We’ve reached out to TikTok for comment on the CNIL investigation.

One question mark is whether the French watchdog will be able to see its investigation of TikTok through to a full conclusion.

In further emailed remarks its spokeswoman noted the company is seeking to designate Ireland’s Data Protection Commission (DPC) as its lead authority in Europe, and is setting up an establishment in Ireland for that purpose. (Related: Last week TikTok announced a plan to open its first European data center, also in Ireland, which will eventually hold all EU users’ data.)

If TikTok is able to satisfy the legal conditions it may be able to move any GDPR investigation to the DPC — which has gained a reputation for being painstakingly slow to enforce complex cross-border GDPR cases. Though in late May it finally submitted a first draft decision (on a Twitter case) to the other EU data watchdogs for review. A final decision in that case is still pending.

“The [TikTok] investigations could therefore ultimately be the sole responsibility of the Irish protection authority, which will have to deal with the case in cooperation with the other European data protection authorities,” the CNIL’s spokeswoman noted, before emphasizing there is a standard of proof it will have to meet.

“To come under the sole jurisdiction of the Irish authority and not of each of the authorities, Tiktok will nevertheless have to prove that its establishment in Ireland fulfils the conditions of a ‘principal establishment’ within the meaning of the GDPR.”

Under Europe’s GDPR framework, national data watchdogs have powers to issue penalties of up to 4% of a company’s global annual turnover and can also order infringing data processing to cease. But to do any of that they have to first investigate and go through the process of issuing a decision. In cross-border cases where multiple watchdogs have an interest, there’s also a requirement to liaise with other regulators to ensure there’s broad consensus on any decision.
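As a back-of-the-envelope illustration of what that ceiling means (the turnover figure below is hypothetical, and note that GDPR Article 83(5) formally caps fines at €20 million or 4% of global annual turnover, whichever is higher):

```kotlin
import kotlin.math.max

// Toy calculation of the GDPR Article 83(5) ceiling: EUR 20M or 4% of global
// annual turnover, whichever is higher. The turnover figure is hypothetical.
fun gdprFineCeilingEur(globalAnnualTurnoverEur: Double): Double =
    max(20_000_000.0, 0.04 * globalAnnualTurnoverEur)

fun main() {
    // A company with EUR 10B in global turnover faces a ceiling of EUR 400M.
    println(gdprFineCeilingEur(10_000_000_000.0)) // 4.0E8
}
```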

#apps, #cnil, #data-portability, #data-protection, #data-protection-law, #europe, #european-union, #france, #gdpr, #general-data-protection-regulation, #google, #ireland, #privacy, #social, #tc, #tiktok


US tech needs a pivot to survive

Last month, American tech companies were dealt two of the most consequential legal decisions they have ever faced. Both of these decisions came from thousands of miles away, in Europe. While companies are spending time and money scrambling to understand how to comply with a single decision, they shouldn’t miss the broader ramification: Europe has different operating principles from the U.S., and is no longer passively accepting American rules of engagement on tech.

In the first decision, Apple objected to and was spared a $15 billion tax bill the EU said was due to Ireland, while the European Commission’s most vocal anti-tech crusader Margrethe Vestager was dealt a stinging defeat. In the second, and much more far-reaching decision, Europe’s courts struck a blow at a central tenet of American tech’s business model: data storage and flows.

American companies have spent decades bundling stores of user data and convincing investors of their worth as an asset. In Schrems, Europe’s highest court ruled that masses of free-flowing user data are, instead, an enormous liability, a decision that sows doubt about the future of the main method companies use to transfer data across the Atlantic.

On the surface, this decision appears to be about data protection. But there is a choppier undertow of sentiment swirling in legislative and regulatory circles across Europe. Namely that American companies have amassed significant fortunes from Europeans and their data, and governments want their share of the revenue.

What’s more, the fact that European courts handed victory to an individual citizen while also handing defeat to one of the commission’s senior leaders shows European institutions are even more interested in protecting individual rights than they are in propping up commission positions. This particular dynamic bodes poorly for the lobbying and influence strategies that many American companies have pursued in their European expansion.

After the Schrems ruling, companies will scramble to build legal teams and data centers that can comply with the court’s decision. They will spend large sums of money on pre-built solutions or cloud providers that can deliver a quick and seamless transition to the new legal reality. What companies should be doing, however, is building a comprehensive understanding of the political, judicial and social realities of the European countries where they do business — because this is just the tip of the iceberg.

American companies need to show Europeans — regularly and seriously — that they do not take their business for granted.

Europe is an afterthought no more

For many years, American tech companies have treated Europe as a market that required minimal, if any, meaningful adaptation for success. If an early-stage company wanted to gain market share in Germany, it would translate its website, add a notice about cookies and find a convenient way to transact in euros. Larger companies wouldn’t add many more layers of complexity to this strategy: perhaps they would establish a local sales office staffed with a European from HQ, hire a German with experience in U.S. companies or sign a local partnership that could help them distribute or deliver their product. Europe, for many small and medium-sized tech firms, was little more than a bigger Canada in a tougher time zone.

Only the largest companies would go to the effort of setting up public policy offices in Brussels, or meaningfully try to understand the noncommercial issues that could affect their license to operate in Europe. The Schrems ruling shows how this strategy isn’t feasible anymore.

American tech companies must invest in understanding European political realities the same way they do in emerging markets like India, Russia or China, where they already go to great lengths to adapt products to local laws, or pull out where they cannot comply. Europe is not just the European Commission, but rather 27 different countries that vote and act on different interests at home and in Brussels.

Governments in Beijing or Moscow refused to accept a reality of U.S. companies setting conditions for them from the outset. After underestimating Europe for years, American companies now need to dedicate headspace to considering how business is materially affected by Europe’s different views on data protection, commerce, taxation and other issues.

This is not to say that American and European values on the internet differ as dramatically as they do with China’s values, for instance. But Europe, from national governments to the EU and to courts, is making it clear that it will not accept a reality where U.S. companies assume that they have license to operate the same way they do at home. Where U.S. companies expect light taxation, European governments expect revenue for economic activity. Where U.S. companies expect a clear line between state and federal legislation, Europe offers a messy patchwork of national and international regulation. Where U.S. companies expect that their popularity alone is proof that consumers consent to looser privacy or data protection, Europe reminds them that (across the pond) the state has the last word on the matter.

Many American tech companies understand their commercial risks inside and out but are not prepared for managing the risks that are out of their control. From reputation risk to regulatory risk, they can no longer treat Europe as a like-for-like market with the U.S., and the winners will be those companies that can navigate the legal and political changes afoot. Having a Brussels strategy isn’t enough. Instead American companies will need to build deeper influence in the member states where they operate. Specifically, they will need to communicate their side of the argument early and often to a wider range of potential allies, from local and national governments in markets where they operate, to civil society activists like Max Schrems.

The world’s offline differences are obvious, and the time when we could pretend that the internet erased them rather than magnified them is quickly ending.

#canada, #column, #data-protection, #data-security, #europe, #european-commission, #european-union, #general-data-protection-regulation, #government, #ireland, #max-schrems, #opinion, #policy, #privacy, #startups, #tc, #united-states, #venture-capital
