Perspectives on tackling Big Tech’s market power

The impetus for a pair of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR) was the need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power. The sessions were livestreamed yesterday but are available to view on-demand here.

The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when a new chair has just been installed at the FTC, so regulatory change is very much in the air (at least around tech antitrust).

CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.

That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out promising new competition (which can’t get the same access to people’s data, so operates at a baked-in disadvantage).

A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.

The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).

The UK’s CMA now sits outside the EU, of course — giving the national authority a higher profile role in global mergers & acquisitions decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).

The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.

Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.

Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions against large digital platforms of major competitive significance — with open procedures now against Amazon, Facebook and Google.

The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long-running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single high-dimensional per-user profile — is now headed to Europe’s top court (so likely has more years to run).

But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)

“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case, and in a certain sense this entwinement of competition law and data protection law is written into a theory of harm in the German competition law.

“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”

“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.

“So Facebook has always been a big case — it might get even bigger in a certain sense.”

France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.

That has included joint working over a competition complaint against Apple’s pro-user privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there, too, of respective oversight bodies seeking to bridge legal silos in order to crack the code of how to effectively regulate tech giants. Panellists agreed that this market power is predicated on earlier failures of competition law enforcement, which allowed tech platforms to buy up rivals and sew up access to user data — entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.

The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).

Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.

The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest back control from tech giants to ensure that consumers and competitors alike aren’t left trampled into the dust by data-mining giants.

Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.

“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.

“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”

During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.

“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”

“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.

“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”

Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”

Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine it hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, chief deputy to the New York Attorney General, whose office is leading a major multi-state antitrust case against Facebook.

Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.

She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition, where companies have historically been given, at best, what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure. And she urged agencies to be prepared to litigate more.

As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.

“That is what is most galling to me that even where we take action, in our best faith good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking aren’t working. We need different action to keep us from having the same conversation again and again.”

Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.

“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.

“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.

“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement, I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”

As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.

So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling round the edges of digital market failure, and it’s break-ups of US tech giants that are really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)

The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.

The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to waive through another Google digital merger (its acquisition of fitness wearable Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.

Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).

“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.

“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.

“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”

The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenge of policing “very complicated” situations involving conflicts of interest is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.

(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had Brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)

Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”

“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.

“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.

Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.

It’s true that the Commission is not sitting on its hands in the face of tech giant market power.

At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)

Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.

“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”

“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”

One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.


Pornhub sued for allegedly serving “under-age, non-consensual” videos

(Image: A Pornhub logo at the company’s booth during the 2018 AVN Adult Expo on January 25, 2018, in Las Vegas, Nevada. Credit: Getty Images | Gabe Ginsberg)

Pornhub was sued yesterday by 34 women alleging that the site hosted videos without their consent and profited from other nonconsensual content involving rape, child sexual abuse, and human trafficking.

Of the victims involved in the lawsuit, 14 said they were victims of people charged with or convicted of sex crimes, and 14 said they were underage in the videos served on Pornhub.

“It is time for the companies and individuals who have profited off of nonconsensual and illegal content to be held liable for their crime,” one of the plaintiffs said in a conference call reported by CNN. “I joined the lawsuit because I seek justice for myself and the countless victims who don’t come forward.”


Apple and Google’s AI wizardry promises privacy—at a cost


Since the dawn of the iPhone, many of the smarts in smartphones have come from elsewhere: the corporate computers known as the cloud. Mobile apps sent user data cloudward for useful tasks like transcribing speech or suggesting message replies. Now Apple and Google say smartphones are smart enough to do some crucial and sensitive machine learning tasks like those on their own.

At Apple’s WWDC event this month, the company said its virtual assistant Siri will transcribe speech without tapping the cloud in some languages on recent and future iPhones and iPads. During its own I/O developer event last month, Google said the latest version of its Android operating system has a feature dedicated to secure, on-device processing of sensitive data, called the Private Compute Core. Its initial uses include powering the version of the company’s Smart Reply feature built into its mobile keyboard that can suggest responses to incoming messages.

Apple and Google both say on-device machine learning offers more privacy and snappier apps. Not transmitting personal data cuts the risk of exposure and saves time spent waiting for data to traverse the internet. At the same time, keeping data on devices aligns with the tech giants’ long-term interest in keeping consumers bound into their ecosystems. People who hear that their data can be processed more privately might become more willing to agree to share more data.


UK’s ICO warns over ‘big data’ surveillance threat of live facial recognition in public

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance technology in public — setting out what are dubbed the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations her office has already undertaken into planned applications of the tech have found problems in all cases.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping into its tech — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide-ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion. That includes the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an opinion on the use of LFR by police forces — which Denham said also sets “a high threshold for its use”. (A few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn landed some in legal hot water over issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK having a data adequacy agreement from the EU is dependent on the UK having essentially equivalent protections for people’s data. Without this coveted data adequacy status UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.


#artificial-intelligence, #biometrics, #clearview-ai, #data-protection, #data-protection-law, #elizabeth-denham, #europe, #european-commission, #european-union, #facial-recognition, #general-data-protection-regulation, #information-commissioners-office, #law-enforcement, #privacy, #privacy-international, #safe-harbor, #surveillance, #tc, #uk-government, #united-kingdom


Apple AirTags UX teardown: The trade-off between privacy and user experience

Apple’s location devices — called AirTags — have been out for more than a month now. The initial impressions were good, but as we concluded back in April: “It will be interesting to see these play out once AirTags are out getting lost in the wild.”

That’s exactly what our resident UX analyst, Peter Ramsey, has been doing for the last month — intentionally losing AirTags to test their user experience at the limits.

This Extra Crunch exclusive is a simplified conversation around this Built for Mars article, which helps bridge the gap between Apple’s mistakes and how you can make meaningful changes to your product’s UX.

For an industry often dogged by privacy concerns, Apple has an unusually strong stance on keeping your data private.

AirTag not reachable

There are two primary purposes of an error message:

  1. To notify the user what has gone wrong (and how it affects them).
  2. To help the user resolve the issue.

Most businesses do a decent job at the first one, but it’s rare that a product will proactively obsess over the second.

Apple is typically one of the few that does; it’s indisputably one of the leaders in intuitive design. Which is why I was surprised to see Apple’s error message when an AirTag is not reachable:

Image Credits: Built for Mars screenshot

There’s a huge amount of ambiguity in the statement “move around to connect,” and it fails to mention that this error could be because the AirTag’s batteries have been removed.

Instead, Apple should make this message clickable, which opens a modal to learn more about this issue.

#airtag, #apple, #apps, #column, #ec-column, #ec-consumer-applications, #iphone, #privacy, #tc, #user-experience


He Warned Apple About the Risks in China. Then They Became Reality.

Doug Guthrie, once one of America’s leading China bulls, rang the alarm on doing business there. He spoke about his time at Apple.

#apple-inc, #censorship, #china, #communist-party-of-china, #computer-security, #computers-and-the-internet, #cook-timothy-d, #factories-and-manufacturing, #foxconn-technology, #guthrie-doug-1969, #international-trade-and-world-market, #iphone, #ipod, #privacy, #xi-jinping


Internxt gets $1M to be ‘the Coinbase of decentralized storage’

Valencia-based startup Internxt has been quietly working on an ambitious plan to make decentralized cloud storage massively accessible to anyone with an Internet connection.

It’s just bagged $1M in seed funding led by Angels Capital, a European VC fund owned by Juan Roig (aka Spain’s richest grocer and second wealthiest billionaire), and Miami-based The Venture City. It had previously raised around half a million dollars via a token sale to help fund early development.

The seed funds will be put towards its next phase of growth — its month-to-month growth rate is 30% and it tells us it’s confident it can at least sustain that — including planning a big boost to headcount so it can accelerate product development.

The Spanish startup has spent most of its short life to date developing a decentralized infrastructure that it argues is both inherently more secure and more private than mainstream cloud-based apps (such as those offered by tech giants like Google).

This is because files are not only encrypted in a way that means Internxt itself cannot access your data; the information is also stored in a highly decentralized way: split into tiny shards which are then distributed across multiple storage locations, with users of the network contributing storage space (and being recompensed for providing that capacity with — you guessed it — crypto).

“It’s a distributed architecture, we’ve got servers all over the world,” explains founder and CEO Fran Villalba Segarra. “We leverage and use the space provided by professionals and individuals. So they connect to our infrastructure and start hosting data shards and we pay them for the data they host — which is also more affordable because we are not going through the traditional route of just renting out a data center and paying them for a fixed amount of space.

“It’s like the Airbnb model or Uber model. We’ve kind of democratized storage.”

Internxt clocked up three years of R&D, beginning in 2017, before launching its first cloud-based apps: Drive (file storage), a year ago — and now Photos (a Google Photos rival).

So far it’s attracting around a million active users without paying any attention to marketing, per Villalba Segarra.

Internxt Mail is the next product in its pipeline — to compete with Gmail and also ProtonMail, a pro-privacy alternative to Google’s freemium webmail client (and for more on why it believes it can offer an edge there read on).

Internxt Send (file transfer) is another product billed as coming soon.

“We’re working on a G-Suite alternative to make sure we’re at the level of Google when it comes to competing with them,” he adds.

The issue Internxt’s architecture is designed to solve is that files which are stored in just one place are vulnerable to being accessed by others. Whether that’s the storage provider itself (who may, like Google, have a privacy-hostile business model based on mining users’ data); or hackers/third parties who manage to break the provider’s security — and can thus grab and/or otherwise interfere with your files.

Security risks when networks are compromised can include ransomware attacks — which have been on an uptick in recent years — whereby attackers who have penetrated a network and gained access to stored files hold the information to ransom by walling off the rightful owner’s access (typically by applying their own layer of encryption and demanding payment to unlock the data).

The core conviction driving Internxt’s decentralization push is that files sitting whole on a server or hard drive are sitting ducks.

Its answer to that problem is an alternative file storage infrastructure that combines zero access encryption and decentralization — meaning files are sharded, distributed and mirrored across multiple storage locations, making them highly resilient against storage failures or indeed hack attacks and snooping.

The approach ameliorates cloud service provider-based privacy concerns because Internxt itself cannot access user data.
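Internxt hasn’t published its exact pipeline, but the encrypt-then-shard flow described above can be illustrated with a minimal Python sketch. Everything here is illustrative: a toy SHA-256 keystream stands in for the audited AES-256 client-side encryption, and an in-memory dict stands in for the distributed node network.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream derived from SHA-256. Illustrative only:
    # the real client would use audited AES-256.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key restores the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def shard(ciphertext: bytes, size: int = 16) -> list:
    # Split the ciphertext into fixed-size fragments for distribution.
    return [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]

# Client-side flow: only the user ever holds the key.
key = secrets.token_bytes(32)
plaintext = b"quarterly financials.xlsx contents"
ciphertext = encrypt(key, plaintext)
fragments = shard(ciphertext)

# Each node stores a single encrypted fragment, useless in isolation.
nodes = {f"node-{i}": frag for i, frag in enumerate(fragments)}

# Retrieval: fetch fragments, reassemble, decrypt locally.
reassembled = b"".join(nodes[f"node-{i}"] for i in range(len(fragments)))
restored = encrypt(key, reassembled)
assert restored == plaintext
```

The point the sketch makes is the one Villalba Segarra emphasizes: any single node holds only an encrypted fragment, which is useless without both the other fragments and the user-held key.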

To make money its business model is simple, tiered subscriptions: With (currently) one plan covering all its existing and planned services — based on how much data you need. (It is also freemium, with the first 10GB being free.)

Internxt is by no means the first to see key user value in rethinking core Internet architecture.

Scotland’s MaidSafe has been trying to build an alternative decentralized Internet for well over a decade at this point — only starting alpha tests of its alt network (aka the Safe Network) back in 2016, after a decade of development. Its long-term mission to reinvent the Internet continues.

Another (slightly less veteran) competitor in the decentralized cloud storage space is Storj, which is targeting enterprise users. There’s also Filecoin and Sia — both also part of the newer wave of blockchain startups that sprung up after Bitcoin sparked entrepreneurial interest in cryptocurrencies and blockchain/decentralization.

How, then, is what Internxt’s doing different to these rival decentralized storage plays — all of which have been at this complex coal face for longer?

“We’re the only European based startup that’s doing this [except for MaidSafe, although it’s UK not EU based],” says Villalba Segarra, arguing that the European Union’s legal regime around data protection and privacy lends it an advantage vs U.S. competitors. “All the others, Storj, plus Sia, Filecoin… they’re all US-based companies as far as I’m aware.”

The other major differentiating factor he highlights is usability — arguing that the aforementioned competitors have been “built by developers for developers”. Internxt’s goal, he says, is to be the equivalent of a ‘Coinbase for decentralized storage’; that is, it wants to make a very complex technology highly accessible to non-technical Internet users.

“It’s a huge technology but in the blockchain space we see this all the time — where there’s huge potential but it’s very hard to use,” he tells TechCrunch. “That’s essentially what Coinbase is also trying to do — bringing blockchain to users, making it easier to use, easier to invest in cryptocurrency etc. So that’s what we’re trying to do at Internxt as well, bringing blockchain for cloud storage to the people. Making it easy to use with a very easy to use interface and so forth.

“It’s the only service in the distributed cloud space that’s actually usable — that’s kind of our main differentiating factor from Storj and all these other companies.”

“In terms of infrastructure it’s actually pretty similar to that of Sia or Storj,” he goes on — further likening Internxt’s ‘zero access’ encryption to Proton Drive’s architecture (aka, the file storage product from the makers of end-to-end encrypted email service ProtonMail) — which also relies on client side encryption to give users a robust technical guarantee that the service provider can’t snoop on your stuff. (So you don’t have to just trust the company not to violate your privacy.)

But while it’s also touting zero access encryption (it appears to be off-the-shelf AES-256; the company says it uses “military grade”, client-side, open source encryption that’s been audited by Spain’s S2 Grupo, a major local cybersecurity firm), Internxt takes the further step of decentralizing the encrypted bits of data too. And that means it can tout added security benefits, per Villalba Segarra.

“On top of that what we do is we fragment data and then distribute it around the world. So essentially what servers host are encrypted data shards — which is much more secure because if a hacker was ever to access one of these servers what they would find is encrypted data shards which are essentially useless. Not even we can access that data.

“So that adds a huge layer of security against hackers or third party [access] in terms of data. And then on top of that we build very nice interfaces with which the user is very used to using — pretty much similar to those of Google… and that also makes us very different from Storj and Sia.”

Storage space for Internxt users’ files is provided by users who are incentivized to offer up their unused capacity to host data shards, receiving micropayments of crypto for doing so. This means capacity could be coming from an individual user connecting to Internxt with just their laptop — or a datacenter company with large amounts of unused storage capacity. (Villalba Segarra notes that a number of data center companies, such as OVH, are connected to its network.)

“We don’t have any direct contracts [for storage provision]… Anyone can connect to our network — so datacenters with available storage space, if they want to make some money on that they can connect to our network. We don’t pay them as much as we would pay them if we went to them through the traditional route,” he says, likening this portion of the approach to how Airbnb has both hosts and guests (or Uber needs drivers and riders).

“We are the platform that connects both parties but we don’t host any data ourselves.”

Internxt uses a reputation system to manage storage providers — to ensure network uptime and quality of service — and also applies blockchain ‘proof of work’ challenges to node operators to make sure they’re actually storing the data they claim.

“Because of the decentralized nature of our architecture we really need to make sure that it hits a certain level of reliability,” he says. “So for that we use blockchain technology… When you’re storing data in your own data center it’s easier in terms of making sure it’s reliable but when you’re storing it in a decentralized architecture it brings a lot of benefits — such as more privacy or it’s also more affordable — but the downside is you need to make sure that for example they’re actually storing data.”
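Internxt hasn’t detailed its challenge protocol, but a common pattern for this kind of storage audit is to ask a node to hash a fresh random nonce together with the shard it claims to hold. The Python sketch below is a simplification: real systems use precomputed challenge/response pairs or Merkle proofs rather than keeping a full verifier-side copy of the shard.

```python
import hashlib
import secrets

class StorageNode:
    """A node that claims to be storing one encrypted data shard."""

    def __init__(self, shard: bytes):
        self._shard = shard

    def respond(self, nonce: bytes) -> bytes:
        # Only a node that actually holds the shard bytes can compute this.
        return hashlib.sha256(nonce + self._shard).digest()

def audit(node: StorageNode, shard_copy: bytes) -> bool:
    # A fresh random nonce per challenge means responses can't be
    # precomputed or replayed. (Keeping a full shard copy on the verifier
    # side is a simplification for this sketch.)
    nonce = secrets.token_bytes(16)
    expected = hashlib.sha256(nonce + shard_copy).digest()
    return node.respond(nonce) == expected

shard = secrets.token_bytes(64)
honest = StorageNode(shard)
cheater = StorageNode(secrets.token_bytes(64))  # claims storage it doesn't have

assert audit(honest, shard)
assert not audit(cheater, shard)
```

A node that silently discards its shard fails the next challenge, which is how the network can pay only for capacity that is verifiably being provided.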

Payments to storage capacity providers are also made via blockchain tech — which Villalba Segarra says is the only way to scale and automate so many micropayments to ~10,000 node operators all over the world.

Discussing the issue of energy costs — given that ‘proof of work’ blockchain-based technologies are facing increased scrutiny over the energy consumption involved in carrying out the calculations — he suggests that Internxt’s decentralized architecture can be more energy efficient than traditional data centers because data shards are more likely to be located nearer to the requesting user — shrinking the energy required to retrieve packets vs always having to do so from a few centralized global locations.

“What we’ve seen in terms of energy consumption is that we’re actually much more energy efficient than a traditional cloud storage service. Why? Think about it, we mirror files and we store them all over the world… It’s actually impossible to access a file from Dropbox that is sent out from [a specific location]. Essentially when you access Dropbox or Google Drive and you download a file they’re going to be sending it out from their data center in Texas or wherever. So there’s a huge data transfer energy consumption there — and people don’t think about it,” he argues.

“Data center energy consumption is already 2%* of the whole world’s energy consumption if I’m not mistaken. So being able to use latency and being able to send your files from [somewhere near the user] — which is also going to be faster, which is all factored into our reputation system — so our algorithms are going to be sending you the files that are closer to you so that we save a lot of energy from that. So if you multiply that by millions of users and millions of terabytes that actually saves a lot of energy consumption and also costs for us.”
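The company hasn’t published its selection algorithm, but the general idea of filtering a shard’s mirrors by reputation and then fetching from the lowest-latency copy can be sketched as follows (node names and scores are hypothetical):

```python
# Hypothetical mirror table: each shard is replicated across several nodes,
# each with a measured round-trip latency and a reputation score.
mirrors = {
    "shard-0": [
        {"node": "fra-1", "latency_ms": 18, "reputation": 0.97},
        {"node": "us-tx-4", "latency_ms": 142, "reputation": 0.99},
        {"node": "sgp-2", "latency_ms": 210, "reputation": 0.95},
    ],
}

def pick_mirror(shard_id: str, min_reputation: float = 0.9) -> str:
    # Drop unreliable nodes first, then prefer the lowest-latency copy:
    # a shorter network path means a faster download and less transfer energy.
    candidates = [m for m in mirrors[shard_id] if m["reputation"] >= min_reputation]
    best = min(candidates, key=lambda m: m["latency_ms"])
    return best["node"]

# A user near Frankfurt gets served from the nearby node.
assert pick_mirror("shard-0") == "fra-1"
```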

What about latency from the user’s point of view? Is there a noticeable lag when they try to upload or retrieve and access files stored on Internxt vs — for example — Google Drive?

Villalba Segarra says being able to store file fragments closer to the user also helps compensate for any lag. But he also confirms there is a bit of a speed difference vs mainstream cloud storage services.

“In terms of upload and download speed we’re pretty close to Google Drive and Dropbox,” he suggests. “Again these companies have been around for over ten years and their services are very well optimized and they’ve got a traditional cloud architecture which is also relatively simpler, easier to build and they’ve got thousands of [employees] so their services are obviously much better than our service in terms of speed and all that. But we’re getting really close to them and we’re working really fast towards bringing our speed [to that level] and also as many features as possible to our architecture and to our services.”

“Essentially how we see it is we’re at the level of Proton Drive or Tresorit in terms of usability,” he adds on the latency point. “And we’re getting really close to Google Drive. But an average user shouldn’t really see much of a difference and, as I said, we’re literally working as hard as possible to make our services as useable as those of Google. But we’re ages ahead of Storj, Sia, MaidSafe and so forth — that’s for sure.”

Internxt is doing all this complex networking with a team of just 20 people currently. But with the new seed funding tucked in its back pocket the plan now is to ramp up hiring over the next few months — so that it can accelerate product development, sustain its growth and keep pushing its competitive edge.

“By the time we do a Series A we should be around 100 people at Internxt,” says Villalba Segarra. “We are already preparing our Series A. We just closed our seed round but because of how fast we’re growing we are already being reached out to by a few other lead VC funds from the US and London.

“It will be a pretty big Series A. Potentially the biggest in Spain… We plan on growing until the Series A at at least a 30% month-to-month rate which is what we’ve been growing up until now.”

He also tells TechCrunch that the intention for the Series A is to do the funding at a $50M valuation.

“We were planning on doing it a year from now because we literally just closed our [seed] round but because of how many VCs are reaching out to us we may actually do it by the end of this year,” he says, adding: “But timeframe isn’t an issue for us. What matters most is being able to reach that minimum valuation.”

*Per the IEA, data centres and data transmission networks each accounted for around 1% of global electricity use in 2019

#angels-capital, #blockchain, #cloud-computing, #cloud-storage, #coinbase, #cryptocurrencies, #decentralization, #dropbox, #encryption, #energy-consumption, #europe, #european-union, #fundings-exits, #gmail, #internxt, #privacy, #recent-funding, #spain, #startups, #storage, #tc, #the-venture-city, #valencia


Apple‘s Tim Cook: Sideloading is “not in the best interests of the user”

Apple CEO Tim Cook being interviewed remotely by Brut.


Apple has been under a mountain of scrutiny lately from legislators, developers, judges, and users. Amidst all that, CEO Tim Cook sat with publication Brut. to discuss Apple’s strategy and policies. The short but wide-ranging interview offered some insight into where Apple plans to go in the future.

As is so common when Tim Cook speaks publicly, privacy was a major focus. His response to a question about its importance was the same one we’ve heard from him many times: “we see it as a basic human right, a fundamental human right.” He noted that Apple has been focused on privacy for a long time.


#app-store, #apple, #apple-app-store, #apple-watch, #apps, #ar, #augmented-reality, #ios, #iphone, #privacy, #sideloading, #tech, #tim-cook


On a growth tear, DuckDuckGo reveals it picked up $100M in secondary investment last year

Privacy tech continues cooking on gas. To wit: Non-tracking search engine DuckDuckGo has just revealed that it beefed up its balance sheet at the back end of last year with $100 million+ in “mainly secondary investment” — from a mix of existing and new investors.

Its blog post name-checks Omers Ventures, Thrive, GP Bullhound, Impact America Fund, and also WhatsApp founder Brian Acton; inventor of the world wide web Tim Berners-Lee; VC and diversity activist Freada Kapor Klein; and entrepreneur Mitch Kapor as being among the participating investors. So quite the line up.

DuckDuckGo said the secondary investment allowed some of its early employees and investors to cash out a chunk of their equity while bolstering its financial position.

It also says its business — which has been profitable since 2014 — is “thriving”, with revenues now running at $100M+ a year. Hence it doesn’t need to keep dipping into an external investor pot.

Its last VC raise was in 2018 when it took in $10M after being actively pursued by Omers Ventures — who convinced it to take the money to help support growth objectives (especially internationally).

DDG has a few other metrics to throw around now: Over the last 12 months it said its apps were downloaded over 50M times — more than in all prior years combined.

It’s also revealed that its monthly search traffic increased 55% and says marketshare trackers indicate it grabbed the #2 spot for mobile search engines in a number of countries, including the U.S., Canada, Australia and the Netherlands (per StatCounter and Wikipedia).

“We don’t track our users so we can’t say for sure how many we have, but based on market share estimates, download numbers, and national surveys, we believe there are between 70-100 million DuckDuckGo users,” it added.

A looming shift to Google’s Android choice screen in Europe, where regulators have forced the company to present users of mobile devices that run its OS with rival options when they’re setting a default search engine, looks likely to further boost DuckDuckGo’s regional fortunes.

Google will be ditching the current paid auction model — so rivals which have a valuable alternative proposition for users (like privacy) combined with strong brand awareness (and, well, everyone likes ducks… ) have the best chance yet to take slices out of Google’s marketshare.

DuckDuckGo’s blog post confirms it’ll be dialling up its marketing in Europe and other regions.

“Our thriving business also gives us the resources to tell more people there is a simple solution for online privacy they can use right now. Over the last month, we’ve rolled out billboard, radio, and TV ads in 175 metro areas across the U.S., with additional efforts planned for Europe and other countries around the world,” it notes.

So it looks like a good chunk of DDG’s secondary funding will be spent on growth marketing — as it seeks to capitalize on rising public attention to online privacy, tracking and creepy ads, itself fuelled by years of data scandals.

Awareness is also now being actively driven by Apple’s recent switch to inform iOS users of third party app tracking and give people a simple way to say no — which includes slick, Cupertino-funded ad campaigns (such as the one below) which are clearly intended to turn and engage mainstream heads…

It’s fair to say it’s probably never been easier to craft a simple and compelling marketing message around privacy — and that’s also a testament to how far privacy tech has come in terms of usability and accessibility.

So, yes, DuckDuckGo’s business sure looks like it’s sitting pretty at this juncture of the web’s evolution. And its blog post talks about “becoming a household name for simple privacy protection”. So the scale of its ambition is clear.

“Privacy skeptics have dominated the discussion about online privacy for too long. “Sure people care about privacy, but they’ll never do anything about it.” It’s time to lay this bad take to rest,” it adds.

More products are also on the slate from the 13-year veteran privacy player.

It already bolted on tracker-blocking back in 2018 but is looking to go further — saying that it will be rolling out additional privacy features to what it bills as its “all-in-one privacy bundle”, including an email protection tool that will be launched in beta “in a few weeks” and which it says will “give users more privacy without having to get a new inbox”.

“Later this summer, app tracker blocking will be available in beta for Android devices, allowing users to block app trackers and providing more transparency on what’s happening behind the scenes on their device. And before the end of the year, we also plan to release a brand-new desktop version of our existing mobile app which people can use as a primary browser,” it goes on, adding: “By continuing to expand our simple and seamless privacy bundle, we continue to make our product vision, ‘Privacy, simplified.’ a reality.”

That’s another trend we’re seeing in privacy tech: Innovators who have carefully and credibly built up a solid reputation around one type of tech tool (such as search or email) find themselves — as usage grows — perfectly positioned to branch out into offering a whole bundle or suite of apps they can wrap in the same protective promise.

Another player, ProtonMail, for example, has morphed into Proton, a privacy-centric company which offers freemium tools for not just end-to-end encrypted email but also cloud storage, calendar and a VPN — all neatly positioned under its pro-privacy umbrella.

Expect more development momentum as privacy tech continues to accelerate into the mainstream.


#android, #brian-acton, #duckduckgo, #europe, #freada-kapor-klein, #fundings-exits, #google, #gp-bullhound, #impact-america-fund, #mitch-kapor, #omers-ventures, #online-privacy, #privacy, #search-engine, #tim-berners-lee, #whatsapp


Adtech ‘data breach’ GDPR complaint is headed to court in EU

New York-based IAB Tech Lab, a standards body for the digital advertising industry, is being taken to court in Germany by the Irish Council for Civil Liberties (ICCL) in a piece of privacy litigation that’s targeted at the high speed online ad auction process known as real-time bidding (RTB).

While that may sound pretty obscure, the case essentially loops in the entire ‘data industrial complex’ of adtech players, large and small, which make money by profiling Internet users and selling access to their attention — from giants like Google and Facebook to other household names (the ICCL’s PR also name-checks Amazon, AT&T, Twitter and Verizon, the latter being the parent company of TechCrunch — presumably because all participate in online ad auctions that can use RTB); as well as the smaller (typically non-household-name) adtech entities and data brokers that are also involved in handling people’s data to run the high velocity background auctions that target behavioral ads at web users.

The driving force behind the lawsuit is Dr Johnny Ryan, a former adtech insider turned whistleblower who’s now a senior fellow at the ICCL — and who has dubbed RTB the biggest data breach of all time.

He points to the IAB Tech Lab’s audience taxonomy documents which provide codes for what can be extremely sensitive information that’s being gathered about Internet users, based on their browsing activity, such as political affiliation, medical conditions, household income, or even whether they may be a parent to a special needs child.

The lawsuit contends that other industry documents vis-a-vis the ad auction system confirm there are no technical measures to limit what companies can do with people’s data, nor who they might pass it on to.

The lack of security inherent to the RTB process also means other entities not directly involved in the adtech bidding chain could potentially intercept people’s information — when it should, on the contrary, be protected from unauthorized access, per EU law…

Ryan and others have been filing formal complaints about the RTB security issue for years, arguing the system breaches a core principle of Europe’s General Data Protection Regulation (GDPR): the requirement that personal data be “processed in a manner that ensures appropriate security… including protection against unauthorised or unlawful processing and against accidental loss”. That standard, they contend, simply cannot be met given how RTB functions.

The problem is that Europe’s data protection agencies have failed to act. Which is why Ryan, via the ICCL, has decided to take the more direct route of filing a lawsuit.

“There aren’t many DPAs around the union that haven’t received evidence of what I think is the biggest data breach of all time but it started with the UK and Ireland — neither of which took, I think it’s fair to say, any action. They both said they were doing things but nothing has changed,” he tells TechCrunch, explaining why he’s decided to take the step of litigating.

“I want to take the most efficient route to protecting people’s rights around data,” he adds.

Per Ryan, the Irish Data Protection Commission (DPC) has still not sent a statement of issues relating to the RTB complaint he lodged with it back in 2018 — years ago now. In May 2019 the DPC did announce it was opening a formal investigation into Google’s adtech, following the RTB complaints, but the case remains open and unresolved. (We’ve contacted the DPC with questions about its progress on the investigation and will update with any response.)

Since the GDPR came into application in Europe in May 2018 there has been growth in privacy lawsuits — including class action style suits — so litigation funders may be spying an opportunity to cash in on the growing enforcement gap left by resource-strapped and, well, risk-averse data protection regulators.

A similar complaint about RTB lodged with the UK’s Information Commissioner’s Office (ICO) also led to a lawsuit being filed last year — albeit in that case it was against the watchdog itself for failing to take any action. (The ICO’s last missive to the adtech industry told it to — uhhhh — expect audits.)

“The GDPR was supposed to create a situation where the average person does not need to wear a tin-foil hat, they do not need to be paranoid or take action to become well informed. Instead, supervisory authorities protect them. And these supervisory authorities — paid for by the tax payer — have very strong powers. They can gain admission to any documents and any premises. It’s not about fines I don’t think, just. They can tell the biggest most powerful companies in the world to stop doing what they’re doing with our data. That’s the ultimate power,” says Ryan. “So GDPR sets up these guardians — these potentially very empowered guardians — but they’ve not used those powers… That’s why we’re acting.”

“I do wish that I’d litigated years ago,” he adds. “There’s lots of reasons why I didn’t do that — I do wish, though, that this litigation was unnecessary because supervisory authorities protected me and you. But they didn’t. So now, as Irish politics like to say in the middle of a crisis, we are where we are. But this is — hopefully — several nails in the coffin [of RTB’s use of personal data].”

The lawsuit has been filed in Germany as Ryan says they’ve been able to establish that IAB Tech Labs — which is NY-based and has no official establishment in Europe — has representation (a consultancy it hired) that’s based in the country. Hence they believe there is a clear route to litigate the case at the Landgericht Hamburg.

While Ryan has been indefatigably sounding the alarm about RTB for years he’s prepared to clock up more mileage going direct through the courts to see the matter through.

And to keep hammering home his message to the adtech industry that it must clean up its act and that recent attempts to maintain the privacy-hostile status quo — by trying to rebrand and repackage the same old data shuffle under shiny new claims of ‘privacy’ and ‘responsibility’ — simply won’t wash. So the message is really: Reform or die.

“This may very well end up at the ECJ [European Court of Justice]. And that would take a few years but long before this ends up at the ECJ I think it’ll be clear to the industry now that it’s time to reform,” he adds.

IAB Tech Labs has been contacted for comment on the ICCL’s lawsuit.

Ryan is by no means the only person sounding the alarm over adtech. Last year the European Parliament called for tighter controls on behavioral ads to be baked into reforms of the region’s digital rules — calling for regulation to favor less intrusive, contextual forms of advertising which do not rely on mass surveillance of Internet users.

While even Google has said it wants to deprecate support for tracking cookies in favor of a new stack of technology proposals that it dubs ‘Privacy Sandbox’ (although its proposed alternative — targeting groups of Internet users based on interests derived from tracking their browsing habits — has been criticized as potentially amplifying problems of predatory and exploitative ad targeting, so may not represent a truly clean break with the rights-hostile adtech status quo).

The IAB is also facing another major privacy law challenge in Europe — where complaints against a widely used framework it designed for websites to obtain Internet users’ consent to being tracked for ads online led to scrutiny by Belgium’s data protection agency.

Last year its investigatory division found that the IAB Europe’s Transparency and Consent Framework (TCF) fails to meet the required standards of data protection under the GDPR.

The case went in front of the litigation chamber last week. A verdict — and any enforcement action by the Belgian DPA over the IAB Europe’s TCF — remains pending.

#adtech, #advertising-tech, #amazon, #articles, #att, #computing, #data-protection, #europe, #european-court-of-justice, #european-union, #facebook, #general-data-protection-regulation, #germany, #hamburg, #information-commissioners-office, #ireland, #johnny-ryan, #new-york, #online-advertising, #privacy, #real-time-bidding, #techcrunch, #terms-of-service, #twitter, #united-kingdom, #verizon, #world-wide-web


CJEU ruling could open big tech to more privacy litigation in Europe

A long running privacy fight between Belgium’s data protection authority and Facebook — over the latter’s use of online trackers like pixels and social plug-ins to snoop on web users — has culminated in a ruling by Europe’s top court today that could have wider significance on how cross-border cases against tech giants are enforced in the region.

The Court of Justice of the European Union has affirmed that, in certain circumstances, national DPAs can pursue action even when they are not the lead data supervisor under the General Data Protection Regulation (GDPR)’s one-stop-shop mechanism (OSS) — opening up the possibility of litigation by watchdogs in Member States which aren’t the lead regulator for a particular company but where the local agency believes there is an urgent need to act.

The OSS was included in the GDPR with the idea of simplifying enforcement for businesses operating in more than one EU market — which would only need to deal directly with one ‘lead’ data protection authority. However the mechanism has been criticized for contributing to a bottleneck effect whereby multiple GDPR complaints are stacking up on the desks of a couple of DPAs (most notably Ireland and Luxembourg) — EU Member States which attract large numbers of multinationals (typically for tax reasons, such as Ireland’s 12.5% corporate tax rate).

Enforcement of the EU’s flagship data protection regime against tech giants has thus been hampered by a perception of ‘forum shopping’ — whereby a handful of EU DPAs have a disproportionately large number of major, cross-border cases to deal with vs the (inevitably limited) resources provided for them by their national governments. The resulting bottleneck looks convenient for those companies that face delayed GDPR enforcement.

Some EU DPAs are also considered more active in enforcement of the bloc’s privacy rules than others — and it’s fair to say that Ireland is not among them. (Albeit, it defends the pace of its investigations and enforcement record by saying that it must do due diligence to ensure decisions stand up to any legal challenges.)

Indeed, Ireland has been criticized for (among other things) the length of time it’s taken to investigate GDPR complaints; for procedural issues (how it’s gone about investigating or indeed not investigating complaints); and for its enforcement record against tech giants — which to date is limited to just one $550k penalty against Twitter, issued at the end of last year.

The Irish Data Protection Commission (DPC) had originally wanted to give Twitter an even lower fine but other EU DPAs disputed its draft decision — forcing it to increase the penalty slightly.

As it stands, scores of cases remain open on the DPC’s desk, including major complaints against Facebook and Google — which are now over three years old.

This has led to calls for the Commission to step in and take action over Ireland’s perceived inaction. Although, for now, the EU’s executive has limited its intervention to a few words urging Ireland to, essentially, hurry up and get on with the job.

Today’s CJEU ruling may alleviate a little of the blockage around GDPR enforcement — in some narrow situations — by enabling national DPAs to take up the baton to litigate over users’ rights when a lead agency isn’t acting on complaints.

However the ruling does not look set to completely unblock the OSS mechanism, per Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo who has been following the case closely — and whose work was cited by the CJEU’s advocate general in an earlier opinion on the case.

“The Court has essentially confirmed the views that the Advocate General had expressed in his opinion: Under the GDPR’s one-stop-shop system, those data protection authorities that are not the ‘lead authority’ may start enforcement actions against big tech companies only in very limited circumstances, including in case of urgency,” he told TechCrunch.

“However, unfortunately, the Court’s ruling does not elaborate on the criteria to be followed to assess the urgency of an enforcement action. In particular, the Court has not expressly seconded the advocate general’s view that a failure to act promptly from the part of the lead authority may justify the adoption of interim urgent measures by other data protection authorities. Thus, this important point remains partially unclear, and further litigation might be necessary to clarify this issue.

“Therefore, today’s ruling is unlikely to completely settle the ‘Irish issue’.”

Article 56 of the GDPR allows for non-lead DPAs to pursue action at a national level in the case of complaints that relate to an issue that substantially affects only users under their jurisdiction, and where they believe there is a need to act urgently (as a lead authority has not). So it does seem fairly narrow.

One recent example of a non-lead DPA intervention is the Italian DPA’s emergency action against TikTok — related to child safety on the platform after the death of a local girl who had been reported to have participated in a challenge on the platform.

“An authority’s wish to adopt a ‘go-it-alone’ approach… with regard to the (judicial) enforcement of the GDPR, without cooperating with the other authorities, cannot be reconciled with either the letter or the spirit of that regulation,” runs one paragraph of today’s judgement, underlining the court’s view that the GDPR requires careful and balanced joint-working between DPAs.

The ruling does go into some detailed discussion of the “dangers” of under-enforcement of the GDPR — as the concern was raised with the CJEU — but the court takes the view that it’s too soon to say whether such a concern affects the regulation or not.

“If, however, [under-enforcement were to] be evidenced by facts and robust arguments – then I do not believe that the Court would turn a blind eye to any gap which might thereby emerge in the protection of fundamental rights guaranteed by the Charter and their effective enforcement by the competent regulators,” the CJEU goes on. “Whether that would then still be an issue for a Charter-conform interpretation of provisions of secondary law, or an issue of validity of the relevant provisions, or even sections of a secondary law instrument, is a question for another case.”

The ruling, while narrow, may at least unblock the Belgian DPA’s long-running litigation against Facebook’s tracking of non-users via cookies and social plug-ins which was the route for the referral of questions over the scope of the OSS to the CJEU.

Although the court also notes that it will be for a Belgian court to determine whether the DPA’s intervention meets the GDPR’s bar for starting such proceedings or not.

Contacted for comment on the CJEU judgement, Facebook welcomed the ruling.

“We are pleased that the CJEU has upheld the value and principles of the one-stop-shop mechanism, and highlighted its importance in ensuring the efficient and consistent application of GDPR across the EU,” said Jack Gilbert, associate general counsel at Facebook, in a statement.

#belgium, #cjeu, #data-protection, #europe, #european-union, #facebook, #general-data-protection-regulation, #ireland, #luxembourg, #online-trackers, #policy, #privacy


Google’s Privacy Backpedal Shows Why It’s So Hard Not to Be Evil

Why Google thought twice about restoring your privacy.

#amazon-com-inc, #antitrust-laws-and-competition-issues, #computers-and-the-internet, #consumer-protection, #data-mining-and-database-marketing, #google-inc, #privacy


Supreme Court revives LinkedIn case to protect user data from web scrapers

The Supreme Court has given LinkedIn another chance to stop a rival company from scraping personal information from users’ public profiles, a practice LinkedIn says should be illegal but one that could have broad ramifications for internet researchers and archivists.

The Microsoft-owned social network argued that the mass scraping of its users’ profiles was in violation of the Computer Fraud and Abuse Act, or CFAA, which prohibits accessing a computer without authorization.

LinkedIn lost its case against Hiq Labs in 2019 after the U.S. Ninth Circuit Court of Appeals ruled that the CFAA does not prohibit a company from scraping data that is publicly accessible on the internet.

Hiq Labs, which uses public data to analyze employee attrition, argued at the time that a ruling in LinkedIn’s favor “could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.” (Hiq Labs has also been sued by Facebook, which claims it scraped public data across Facebook and Instagram, as well as Amazon, Twitter, and YouTube.)

The Supreme Court said it would not take on the case, but instead ordered the appeals court to hear the case again in light of its recent ruling, which found that a person cannot violate the CFAA if they improperly access data on a computer they have permission to use.

The CFAA was once dubbed the “worst law” in the technology law books by critics who have long argued that its outdated and vague language failed to keep up with the pace of the modern internet.

Journalists and archivists have long scraped public data as a way to save and archive copies of old or defunct websites before they shut down. But other cases of web scraping have sparked anger and concerns over privacy and civil liberties. In 2019, a security researcher scraped millions of Venmo transactions, which the company does not make private by default. Clearview AI, a controversial facial recognition startup, claimed it scraped over 3 billion profile photos from social networks without their permission.

 

#amazon, #clearview-ai, #computer-fraud-and-abuse-act, #congress, #facebook, #facial-recognition, #hacking, #linkedin, #microsoft, #privacy, #security, #social-network, #social-networks, #supreme-court, #twitter, #venmo, #web-scraping


The demise of browser cookies could create a Golden Age of digital marketing

Depending on whom you ask, the digital advertising industry is either counting down the minutes to doomsday or entering an exciting new era for engaging with consumers. Apple’s iOS 14.5 update — which effectively ends automatic opt-ins to online tracking and data collection — is finally at hand, and Google aims to phase out third-party cookies next year.

The future could see a wave of innovations that help consumers opt out of data collection. So it’s up to the advertising industry to find ways to get these educated, empowered consumers to opt back in.

Whether these changes set digital advertisers back 15 years or pave the way to more fruitful interactions with customers remains to be seen. But one thing is clear: This is big. Allowing users to decide what browsing data can be collected, by whom and under what circumstances is a move that will change the direction of the advertising industry.

But the new direction does not have to lead digital marketers to oblivion, failure or poverty. In fact, it’s quite the opposite.

With a few changes to short-term strategy — and a longer-term plan that takes into account the fact that people are awakening to the value of their online data — advertisers can form a new type of relationship with consumers. It can be built upon trust and open exchange of value.

It’s up to advertisers to grasp, accept and reap the benefits of the upcoming changes. Because with iOS 14.5, cookie deprecation, and regulations like GDPR and CCPA, one era is ending and a new one is beginning. There’s a new seat at the table in the great bargaining session between advertisers and technology giants. It’s occupied — for the first time — by the user.

The short-term strategy

Advertisers can weather big changes in the short term by implementing several steps.

For starters, developers should update their application SDKs to support Apple’s new SKAdNetwork solution and then verify attribution across each channel. For example, after SDK updates, verify that the number of installs reported from your Facebook Ads matches up to the number of installs you’re seeing reported in the App Store developer console or your preferred analytics provider.



This can become more complicated the more channels you’re on, but it is important to verify all of your advertising channels’ reporting. Also important is setting your conversion value, because this is the key to getting granular information on your ad campaigns and ensuring the right entity controls the flow of information.
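As an illustration, the cross-channel reconciliation step described above can be sketched as a simple comparison of each ad network’s reported installs against your analytics provider’s figures. The channel names and the 5% tolerance below are hypothetical assumptions for the example, not anything Apple or any ad network prescribes:

```python
# Illustrative sketch (not real SKAdNetwork/ads API code): flag channels
# whose reported install counts diverge from your analytics numbers.

def reconcile_installs(channel_reports, analytics_installs, tolerance=0.05):
    """Return {channel: (reported, expected)} for channels whose reported
    installs differ from the analytics figure by more than `tolerance`."""
    discrepancies = {}
    for channel, reported in channel_reports.items():
        expected = analytics_installs.get(channel, 0)
        if expected == 0:
            if reported > 0:  # channel claims installs analytics never saw
                discrepancies[channel] = (reported, expected)
            continue
        if abs(reported - expected) / expected > tolerance:
            discrepancies[channel] = (reported, expected)
    return discrepancies

# Hypothetical numbers: one channel over-reports beyond the tolerance.
channels = {"facebook_ads": 1100, "search_ads": 298}
analytics = {"facebook_ads": 1000, "search_ads": 300}
print(reconcile_installs(channels, analytics))
# {'facebook_ads': (1100, 1000)}
```

Run against each channel after the SDK updates land; a flagged channel means its attribution needs a closer look before you trust its campaign reporting.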

#advertising-tech, #column, #digital-marketing, #ec-column, #ec-marketing-tech, #facebook, #google, #marketing, #online-advertising, #online-tracking, #privacy, #social-media, #targeted-advertising, #tc


Europe needs to back browser-level controls to fix cookie consent nightmares, says privacy group

European privacy group noyb, which recently kicked off a major campaign targeting rampant abuse of the region’s cookie consent rules, has followed up by publishing a technical proposal for an automated browser-level signal it believes could go even further to tackle the friction generated by endless ‘your data choices’ pop-ups.

Its proposal is for an automated signal layer that would enable users to configure advanced consent choices — such as only being asked to allow cookies if they frequently visit a website; or being able to whitelist lists of sites for consent (if, for example, they want to support quality journalism by allowing their data to be used for ads in those specific cases).

The approach would offer a route to circumvent the user experience nightmare flowing from all the dark pattern design that’s made cookie consent collection so cynical, confusing and tedious — by simply automating the yeses and noes, thereby keeping interruptions to a user-defined minimum.
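To make that concrete, here is a minimal sketch of the kind of user-side decision logic being described — prompting only on frequently visited sites, auto-allowing whitelisted ones, and defaulting to ‘no’ everywhere else. The threshold and rule names are illustrative assumptions, not part of noyb’s proposal text:

```python
# Hypothetical sketch of browser-side automated consent decisions.
# Thresholds and the whitelist are user-configured assumptions.

def consent_decision(site, visit_counts, whitelist, prompt_threshold=5):
    """Return 'allow', 'deny', or 'prompt' for a site's consent request."""
    if site in whitelist:
        return "allow"   # user chose to support this site with ad consent
    if visit_counts.get(site, 0) >= prompt_threshold:
        return "prompt"  # frequent visits -> worth asking the user once
    return "deny"        # default: no interruption, no consent given

visits = {"blog.example": 7, "shop.example": 1}
wl = {"quality-news.example"}
print(consent_decision("quality-news.example", visits, wl))  # allow
print(consent_decision("blog.example", visits, wl))          # prompt
print(consent_decision("shop.example", visits, wl))          # deny
```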

In the European Union cookie consent banners mushroomed in the wake of a 2018 update to the bloc’s privacy rules (GDPR) — especially on websites that rely on targeted advertising to generate revenue. And in recent years it has not been unusual to find cookie pop-ups that contain a labyrinthine hell of opacity — culminating (if you don’t just click ‘agree’) in vast menus of ‘trusted partners’ all after your data, some of which are pre-set to share information and require the user to individually toggle each and every one off.

Such stuff makes a mockery of compliance, rather than offering the truly simple choice envisaged by the law. So noyb’s earlier campaign focuses on filing scores of complaints against sites it believes aren’t complying with requirements to provide users with a clear and free choice to say no to their data being used for ads (and it’s applying a little automation tech there too, to help scale up the number of complaints it can file).

Its follow-up here — showing how an advanced control layer that signals user choices in the background could work — shares the same basic approach as the ‘Do Not Track’ signal originally proposed for baking into web browsers all the way back in 2009, but which failed to get industry buy-in. There has also been a more recent US-based push to revive the idea of browser-level privacy control — buoyed by the California Consumer Privacy Act (CCPA), which took effect at the start of last year and includes a requirement that businesses respect user opt-out preferences signalled via their browser.

However noyb’s version of browser-level privacy control seeks to go further by enabling more granular controls — which it says is necessary to better mesh with the EU’s nuanced legal framework around data protection.

It points out that Article 21(5) of the GDPR already allows for automatic signals from the browser to inform websites in the background whether a user is consenting to data processing or not.

The ePrivacy Regulation proposal — a much delayed reform of the bloc’s rules around electronic privacy — also includes such a provision.

However noyb says development to establish such a signal hasn’t happened yet — suggesting that cynically manipulative consent management platforms may well have been hampering privacy-focused innovation.

But it also sees a chance for the necessary momentum to build behind the idea.

For example, it points to how Apple has recently been dialling up the notification and control it offers users of its mobile platform, iOS, to let people both know which third party apps want to track them and allow or deny access to their data — including giving users a super simple ‘deny all third party tracking’ option baked into iOS’ settings.

So, well, why should Internet users who happen to be browsing on a desktop device not have a set of similarly advanced privacy controls too?

EU lawmakers are also still debating the ePrivacy Regulation reform — which deals centrally with cookies — so the campaign group wants to demonstrate how automated control tech could be a key piece of the answer to so-called ‘cookie consent fatigue’; by giving users a modern toolset to shrink consent friction without compromising their ability to control what happens with their data.

In order to work as intended automated signals would need to be legally binding (to prevent adtech companies just ignoring them) — and having a clear legal basis set out in the ePrivacy Regulation is one way that could happen within fairly short order.

The chance at least is there.

There have been concerns that the ePrivacy reform — which was stalled for years — could end up weakening the EU’s data protection framework in the face of massive adtech industry lobbying. And the negotiation process to reach a final text remains ongoing. So it’s still not clear where it’s going to end up.

But, earlier this year, the European Council agreed its negotiating mandate with the other EU institutions. And, on cookies, the Council said they want companies to find ways to reduce ‘cookie consent fatigue’ among users — such as by whitelisting types of cookies/providers in their browser settings. So there is at least a potential path to legislate for an effective browser-level control layer in Europe.

For now, noyb has published a prototype and a technology specification for what it’s calling the ADPC (aka Advanced Data Protection Control). The work on the framework has been carried out by noyb working with the Sustainable Computing Lab at the Vienna University of Economics and Business.

The proposal envisages web pages sending privacy requests in a machine-readable way and the ADPC allowing the response to be transmitted using header signals or via JavaScript. noyb likens the intelligent management of queries and automatic responses such a system could support to an email spam filter.
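As a rough illustration of the header-signal idea, the sketch below parses a hypothetical key=value privacy header into structured directives. The header syntax here is an assumption made for demonstration only — noyb’s published ADPC specification defines the actual wire format:

```python
# Illustrative sketch of a header-based privacy signal in the spirit of ADPC.
# The 'directive=token token; directive=token' syntax is an assumption for
# this example, not the published spec.

def parse_privacy_header(header_value):
    """Parse e.g. 'consent=contextual_ads; withdraw=*' into a dict mapping
    each directive to its list of tokens."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part or "=" not in part:
            continue  # ignore malformed or empty segments
        key, _, value = part.partition("=")
        directives[key.strip()] = value.strip().split()
    return directives

hdr = "consent=contextual_ads; withdraw=*"
print(parse_privacy_header(hdr))
# {'consent': ['contextual_ads'], 'withdraw': ['*']}
```

A browser could attach such a header to every request, letting sites read the user’s standing choices without ever rendering a pop-up.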

Commenting in a statement, chairman Max Schrems said: “For Europe, we need more than just an ‘opt-out’ so that it fits into our legal framework. That’s why we call the prototype ‘Advanced’ Data Protection Control, because it’s much more flexible and specific than previous approaches.

“ADPC allows intelligent management of privacy requests. A user could say, for example, ‘please ask me only after I’ve been to the site several times’ or ‘ask me again after 3 months.’ It is also possible to answer similar requests centrally. ADPC thus allows the flood of data requests to be managed in a meaningful way.”

“With ADPC, we also want to show the European legislator that such a signal is feasible and brings advantages for all sides,” he added. “We hope that the negotiators of the member states and the European Parliament will ensure a solid legal basis here, which could be applicable law in a short time. What California has done already, the EU should be able to do as well.”

The Commission has been contacted for comment on noyb’s ADPC.

While there are wider industry shifts afoot to deprecate tracking cookies altogether — with Google proposing to replace current adtech infrastructure supported by Chrome with an alternative stack of (it claims) more privacy respecting alternatives (aka its Privacy Sandbox) — there’s still plenty of uncertainty over what will ultimately happen to third party cookies.

Google’s move to end support for tracking cookies is being closely scrutinized by regional antitrust regulators. And just last week the UK’s Competition and Markets Authority (CMA), which is investigating a number of complaints about the plan, said it’s minded to accept concessions from Google that would mean the regulator could order it not to switch off tracking cookies.

Moreover, even if tracking cookies do finally crumble there is still the question of what exactly they get replaced with — and how alternative adtech infrastructure could impact user privacy.

Google’s so-called ‘Privacy Sandbox’ proposal to target ads at cohorts of users (based on bucketed ‘interests’ its technology will assign them via on-device analysis of their browsing habits) has raised fresh concerns about the risks of exploitative and predatory advertising. So it may be no less important for users to have meaningful browser-level controls over their privacy choices in the future — even if the tracking cookie itself goes away.

A browser-level signal could offer a way for a web user to say ‘no’ to being stuck in an ‘interest bucket’ for ad targeting purposes, for example — signalling that they prefer to see only contextual ads instead, say.

tl;dr: The issue of consent does not only affect cookies — and it’s telling that Google has avoided running the first trials of its replacement tech for tracking cookies (FLoC, or Federated Learning of Cohorts) in Europe.

 

#advertising-tech, #competition-and-markets-authority, #data-processing, #data-protection, #do-not-track, #eprivacy-regulation, #europe, #european-parliament, #european-union, #max-schrems, #noyb, #privacy, #tc, #web-browsers


Google will let enterprises store their Google Workspace encryption keys

As ubiquitous as Google Docs has become in the last year alone, a major criticism often overlooked by the countless workplaces that use it is that it isn’t end-to-end encrypted, allowing Google — or any requesting government agency — access to a company’s files. But Google is finally addressing that key complaint with a round of updates that will let customers shield their data by storing their own encryption keys.

Google Workspace, the company’s enterprise offering that includes Google Docs, Slides and Sheets, is adding client-side encryption so that a company’s data will be indecipherable to Google.

Companies using Google Workspace can store their encryption keys with one of four partners for now: Flowcrypt, Futurex, Thales, or Virtru, which are compatible with Google’s specifications. The move is largely aimed at regulated industries — like finance, healthcare, and defense — where intellectual property and sensitive data are subject to intense privacy and compliance rules.


The real magic lands later in the year when Google will publish details of an API that will let enterprise customers build their own in-house key service, allowing workplaces to retain direct control of their encryption keys. That means if the government wants that company’s data, they have to knock on their front door — and not sneak around the back by serving the key holder with a legal demand.
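The hold-your-own-key model can be illustrated with a toy envelope-encryption sketch: each document is encrypted with its own data key, and that data key is wrapped by a key only the customer’s key service holds — so the provider stores ciphertext and a wrapped key it cannot open. XOR stands in for real AES/KMS primitives here; this is purely illustrative, not how Google Workspace actually implements client-side encryption:

```python
# Toy envelope-encryption sketch (XOR is NOT secure; it stands in for AES).
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The key-encryption key (KEK) lives only in the customer's key service.
kek = secrets.token_bytes(32)

def encrypt_document(plaintext: bytes):
    dek = secrets.token_bytes(32)      # fresh per-document data key
    ciphertext = xor(plaintext, dek)   # the provider stores this...
    wrapped_dek = xor(dek, kek)        # ...and this, but never the KEK
    return ciphertext, wrapped_dek

def decrypt_document(ciphertext: bytes, wrapped_dek: bytes):
    dek = xor(wrapped_dek, kek)        # unwrapping requires the customer KEK
    return xor(ciphertext, dek)

ct, wrapped = encrypt_document(b"quarterly numbers")
assert decrypt_document(ct, wrapped) == b"quarterly numbers"
```

The design point: a legal demand served on the provider yields only ciphertext and wrapped keys, so the demand has to go to the customer’s key service instead.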

Google has published technical details of how the client-side encryption feature works; the feature will roll out as a beta in the coming weeks.

Tech companies giving their corporate customers control of their own encryption keys has been a growing trend in recent years. Slack and cloud vendor Egnyte embraced it early by allowing their enterprise users to store their own encryption keys, effectively cutting themselves out of the surveillance loop. But Google has dragged its feet on encryption for so long that startups are working to build alternatives that bake in encryption from the ground up.

Google said it’s also pushing out new trust rules for how files are shared in Google Drive to give administrators more granularity on how different levels of sensitive files can be shared, and new data classification labels to mark documents with a level of sensitivity such as “secret” or “internal”.

The company said it’s improving its malware protection efforts by now blocking phishing and malware shared from within organizations. The aim is to help cut down on employees mistakenly sharing malicious documents.

#api, #cloud-storage, #computing, #cryptography, #data-protection, #data-security, #egnyte, #encryption, #end-to-end-encryption, #finance, #google, #google-workspace, #google-drive, #healthcare, #privacy, #security, #technology, #thales


Apple’s Bet on China

When the technology giant first started doing business in China, it thought it would change the country. Decades later, the reverse is true.

#apple-inc, #china, #communist-party-of-china, #cook-timothy-d, #data-storage, #jobs-steven-p, #privacy


7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developer Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While the company was vocal about security during the Memoji-filled, two-hour-long(!) keynote, it also quietly introduced several new security and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting — and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or app. Instead, you’ll simply pick a username, and then use Face ID or Touch ID to confirm it’s you. The passkey is then stored in your keychain and synced across your Apple devices using iCloud — so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.
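Conceptually, passkey sign-in is a challenge-response flow: the server sends a random challenge, the device signs it with a key unlocked by Face ID or Touch ID, and the server verifies the signature — no password ever crosses the wire. The toy below uses HMAC with a shared secret as a stand-in for the asymmetric signatures WebAuthn actually uses (Python’s stdlib has no public-key crypto), so treat it as a sketch of the flow only:

```python
# Toy challenge-response sketch of passkey-style login. Real WebAuthn uses
# an asymmetric key pair (private key on device, public key on the server);
# HMAC with a shared secret stands in here for illustration only.
import hmac, hashlib, secrets

device_key = secrets.token_bytes(32)    # lives in the device keychain
server_registered_key = device_key      # registered once at account creation

def server_issue_challenge():
    return secrets.token_bytes(16)      # fresh random challenge per login

def device_sign(challenge, key):
    # In real passkeys the key is unlocked locally by Face ID / Touch ID.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge, signature, key):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

chal = server_issue_challenge()
sig = device_sign(chal, device_key)
assert server_verify(chal, sig, server_registered_key)  # login succeeds
```

Because the challenge is random each time, a captured signature is useless for replay — one reason this scheme resists phishing in a way passwords cannot.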

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon — Apple says the feature is still in its ‘early stages’ and it’s currently disabled by default — the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services, and — ultimately — phishing attacks. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all”.

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app into another, without the second app being able to read the clipboard until the user actually pastes. This is a significant improvement over iOS 14, which would notify you when an app read data from the clipboard but did nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin — the feature isn’t live in the first iOS 15 developer beta — Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes — the three-digit number you enter at checkout — when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ option in iOS 14.5 that let users unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on ‌iOS 14‌ and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has long been a laborious task, requiring you to erase your device completely and then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”


Justice Dept. Watchdog and Senate Open Inquiries Into Seizure of Democrats’ Data

Democrats denounced the Trump administration’s seizure of lawmakers’ data as an abuse of power and called on Republicans to back the congressional inquiry.


In Leak Investigation, Tech Giants Are Caught Between Courts and Customers

Apple, under fire for turning over the data of two lawmakers to the Trump Justice Dept., said it did so unknowingly, while Google fought a request for New York Times data because it related to a corporate client.


Google won’t end support for tracking cookies unless UK’s competition watchdog agrees

Well, this is big. The UK’s competition regulator looks set to get an emergency brake that will allow it to stop Google ending support for third-party cookies, a technology currently used for targeting online ads, if it believes competition would be harmed by the deprecation going ahead.

The development follows an investigation opened by the Competition and Markets Authority (CMA) into Google’s self-styled ‘Privacy Sandbox’ earlier this year.

The regulator will have the power to order a standstill of at least 60 days on any move by Google to remove support for cookies from Chrome if it accepts a set of legally binding commitments the latter has offered — and which the regulator has today issued a notification of intention to accept.

The CMA could also reopen a fuller investigation if it’s not happy with how things are looking at the point it orders any standstill to stop Google crushing tracking cookies.

It follows that the watchdog could also block Google’s wider ‘Privacy Sandbox’ technology transition entirely if it decides the shift cannot be done without harming competition. However, the CMA said today it takes the “provisional” view that the set of commitments Google has offered will address the competition concerns related to its proposals.

It’s now opened a consultation to see if the industry agrees — with the feedback line open until July 8.

Commenting in a statement, Andrea Coscelli, the CMA’s chief executive, said:

“The emergence of tech giants such as Google has presented competition authorities around the world with new challenges that require a new approach.

“That’s why the CMA is taking a leading role in setting out how we can work with the most powerful tech firms to shape their behaviour and protect competition to the benefit of consumers.

“If accepted, the commitments we have obtained from Google become legally binding, promoting competition in digital markets, helping to protect the ability of online publishers to raise money through advertising and safeguarding users’ privacy.”

In a blog post sketching what it’s pledged — under three broad headlines of ‘Consultation and collaboration’; ‘No data advertising advantage for Google products’; and ‘No self-preferencing’ — Google writes that if the CMA accepts its commitments it will “apply them globally”, making the UK’s intervention potentially hugely significant.

It’s perhaps one slightly unexpected twist of Brexit that it’s put the UK in a position to be taking key decisions about the rules for global digital advertising. (The European Union is also working on new rules for how platform giants can operate but the CMA’s intervention on Privacy Sandbox does not yet have a direct equivalent in Brussels.)

That Google is choosing to offer to turn a UK competition intervention into a global commitment is itself very interesting. It may be there in part as an added sweetener — nudging the CMA to accept the offer so it can feel like a global standard setter.

At the same time, businesses do love operational certainty. So if Google can hash out a set of rules that are accepted by one (fairly) major market, because they’ve been co-designed with national oversight bodies, and then scale those rules everywhere it may create a shortcut path to avoiding any more regulator-enforced bumps in the future.

So Google may see this as a smoother path toward the sought-after transition of its adtech business to a post-cookie future. Of course, it also wants to avoid being ordered to stop entirely.

More broadly, engaging with the fast-paced UK regulator could be a strategy for Google to try to surf over the political deadlocks and risks which can characterize discussions on digital regulation in other markets (especially its home turf of the U.S. — where there has been a growing drumbeat of calls to break up tech giants; and where Google specifically now faces a number of antitrust investigations).

The outcome it may be hoping for is being able to point to regulator-stamped ‘compliance’ — in order that it can claim it as evidence there’s no need for its ad empire to be broken up.

Google’s offering of commitments also signifies that regulators who move fastest to tackle the power of tech giants will be the ones helping to define and set the standards and conditions that apply for web users everywhere. At least unless or until any more radical interventions rain down on big tech.

What is Privacy Sandbox?

Privacy Sandbox is a complex stack of interlocking technology proposals for replacing current ad tracking methods (which are widely seen as horrible for user privacy) with alternative infrastructure that Google claims will be better for individual privacy and also still allow the adtech and publishing industries to generate (it claims much the same) revenue by targeting ads at cohorts of web users — who will be put into ‘interest buckets’ based on what they look at online.

The full details of the proposals (which include components like FLoCs, aka Google’s proposed new ad ID based on federated learning of cohorts; and Fledge/Turtledove, Google’s suggested new ad delivery technology) have not yet been set in stone.

Nonetheless, Google announced in January 2020 that it intended to end support for third party cookies within two years — so that rather nippy timeframe has likely concentrated opposition, with pushback coming from the adtech industry and (some) publishers who are concerned it will have a major impact on their ad revenues when individual-level ad targeting goes away.

The CMA began to look into Google’s planned deprecation of tracking cookies after complaints that the transition to a new infrastructure of Google’s devising would merely increase Google’s market power — locking down third parties’ ability to track Internet users for ad targeting while leaving Google with a high-dimensional view of what people get up to online, thanks to its expansive access to first-party data (gleaned through its dominance of consumer web services).

The executive summary of today’s CMA notice lists its concerns that, without proper regulatory oversight, Privacy Sandbox might:

  • distort competition in the market for the supply of ad inventory and in the market for the supply of ad tech services, by restricting the functionality associated with user tracking for third parties while retaining this functionality for Google;
  • distort competition by the self-preferencing of Google’s own advertising products and services and owned and operated ad inventory; and
  • allow Google to exploit its apparent dominant position by denying Chrome web users substantial choice in terms of whether and how their personal data is used for the purpose of targeting and delivering advertising to them.

At the same time, privacy concerns around the ad tracking and targeting of Internet users are undoubtedly putting pressure on Google to retool Chrome (which, of course, dominates web browser marketshare), given that other web browsers have for years been stepping up efforts to protect their users from online surveillance, such as by blocking trackers.

Web users hate creepy ads — which is why they’ve been turning to ad blockers in droves. Numerous major data scandals have also increased awareness of privacy and security. And — in Europe and elsewhere — digital privacy regulations have been toughened up or introduced in recent years. So the line of ‘what’s acceptable’ for ad businesses to do online has been shifting.

But the key issue here is how privacy and competition regulation interact — and potentially conflict. There is a very salient risk that ill-thought-through and overly blunt competition interventions could essentially lock in privacy abuses of web users (the legacy of weak enforcement around online privacy, which allowed rampant, consent-less ad tracking and targeting of Internet users to develop and thrive in the first place).

Poor privacy enforcement coupled with banhammer-wielding competition regulators does not look like a good recipe for protecting web users’ rights.

However there is cautious reason for optimism here.

Last month the CMA and the UK’s Information Commissioner’s Office (ICO) issued a joint statement in which they discussed the importance of considering competition and data protection together in digital markets — citing the CMA’s Google Privacy Sandbox probe as a good example of a case that requires nuanced joint working.

Or, as they put it then: “The CMA and the ICO are working collaboratively in their engagement with Google and other market participants to build a common understanding of Google’s proposals, and to ensure that both privacy and competition concerns can be addressed as the proposals are developed in more detail.”

Although the ICO’s record on enforcement against rights-trampling adtech is, well, non-existent. So its preference for regulatory inaction in the face of adtech industry lobbying should offset any quantum of optimism derived from the bald fact of the UK’s privacy and competition regulators’ ‘joint working’.

(The CMA, by contrast, has been very active in the digital space since gaining wider powers, post-Brexit, to pursue investigations. In recent years it took a deep-dive look at competition in the digital ad market, so it’s armed with plenty of knowledge. It is also in the process of configuring a new unit that will oversee a pro-competition regime with which the UK explicitly wants to clip the wings of big tech.)

What has Google committed to?

The CMA writes that Google has made “substantial and wide-ranging” commitments vis-a-vis Privacy Sandbox — which it says include:

  • A commitment to develop and implement the proposals in a way that avoids distortions to competition and the imposition of unfair terms on Chrome users. This includes a commitment to involve the CMA and the ICO in the development of the Proposals to ensure this objective is met.
  • Increased transparency from Google on how and when the proposals will be taken forward and on what basis they will be assessed. This includes a commitment to publicly disclose the results of tests of the effectiveness of alternative technologies.
  • Substantial limits on how Google will use and combine individual user data for the purposes of digital advertising after the removal of third-party cookies.
  • A commitment that Google will not discriminate against its rivals in favour of its own advertising and ad-tech businesses when designing or operating the alternatives to third-party cookies.
  • A standstill period of at least 60 days before Google proceeds with the removal of third-party cookies, giving the CMA the opportunity, if any outstanding concerns cannot be resolved with Google, to reopen its investigation and, if necessary, impose any interim measures necessary to avoid harm to competition.

Google also writes that: “Throughout this process, we will engage the CMA and the industry in an open, constructive and continuous dialogue. This includes proactively informing both the CMA and the wider ecosystem of timelines, changes and tests during the development of the Privacy Sandbox proposals, building on our transparent approach to date.”

“We will work with the CMA to resolve concerns and develop agreed parameters for the testing of new proposals, while the CMA will be getting direct input from the ICO,” it adds.

Google’s commitments cover a number of areas directly related to competition — such as self-preferencing, non-discrimination, and stipulations that it will not combine user data from specific sources that might give it an advantage vs third parties.

However privacy is also being explicitly baked into the competition consideration, here, per the CMA — which writes that the commitments will [emphasis ours]:

Establish the criteria that must be taken into account in designing, implementing and evaluating Google’s Proposals. These include the impact of the Privacy Sandbox Proposals on: privacy outcomes and compliance with data protection principles; competition in digital advertising and in particular the risk of distortion to competition between Google and other market participants; the ability of publishers to generate revenue from ad inventory; and user experience and control over the use of their data.

An ICO spokeswoman was also keen to point out that one of the first commitments obtained from Google under the CMA’s intervention “focuses on privacy and data protection”.

In a statement, the data watchdog added:

“The commitments obtained mark a significant moment in the assessment of the Privacy Sandbox proposals. They demonstrate that consumer rights in digital markets are best protected when competition and privacy are considered together.

“As we outlined in our recent joint statement with the CMA, we believe consumers benefit when their data is used lawfully and responsibly, and digital innovation and competition are supported. We are continuing to build upon our positive and close relationship with the CMA, to ensure that consumer interests are protected as we assess the proposals.”

This development in the CMA’s investigation raises plenty of questions, large and small — most pressingly over the future of key web infrastructure and what the changes being hashed out here between Google and UK regulators might mean for Internet users everywhere.

The really big issue is whether ‘co-design’ with oversight bodies is the best way to fix the market power imbalance flowing from a single tech giant being able to combine massive dominance in consumer digital services with duopoly dominance in adtech.

Others would say that breaking up Google’s consumer tech and Google’s adtech is the only way to fix the abuse — and everything else is just fiddling while Rome burns.

Google, for instance, is still in charge of proposing the changes itself — regardless of how much pre-implementation consultation and tweaking goes on. It’s still steering the ship and there are plenty of people who believe that’s not an acceptable governance model for the open web.

But, for now at least, the CMA wants to try to fiddle.

It should be noted that, in parallel, the UK government and CMA are speccing out a wider pro-competition regime that could result in deeper interventions into how Google and other platform giants operate in the future. So more interventions are all but guaranteed.

For now, though, Google is probably feeling pretty happy for the opportunity to work with UK regulators. If it can pull oversight bodies deep down in the detail of the changes it wants to make that’s likely a far more comfortable spot for Mountain View vs being served with an order to break its business up — something the CMA has previously taken feedback on.

Google has been contacted with questions on its Privacy Sandbox commitments.


InfoSum outs an identity linking tool that’s exciting marketing firms like Experian

InfoSum, a startup which takes a federated approach to third party data enrichment, has launched a new product (called InfoSum Bridge) that it says significantly expands the customer identity linking capabilities of its platform.

“InfoSum Bridge incorporates multiple identity providers across every identity type — both online and offline, in any technical framework — including deterministic, probabilistic, and cohort-level matches,” it writes in a press release.

It’s also disclosing some early adopters of the product — naming data-for-ads and data-aggregator giants Merkle, MMA and Experian as dipping in.

The idea being that they can continue to enrich (first-party) data by making linkages, via InfoSum’s layer, with other ‘trusted partners’ who may have gleaned more tidbits of info on those self-same users.

InfoSum says it has 50 enterprise customers using InfoSum Bridge at this point. The three companies it’s named in the release all play in the digital marketing space.

The 2016-founded startup (then called CognitiveLogic) sells customers a promise of ‘privacy-safe’ data enrichment run via a technical architecture that allows queries to be run — and insights gleaned — across multiple databases yet maintains each pot as a separate silo. This means the raw data isn’t being passed around between interested entities. 

Why is that important? Third-party data collection is drying up after one (thousand) too many privacy scandals in recent years, combined with the legal risk attached to the background trading of people’s data as a result of data protection regimes like Europe’s General Data Protection Regulation.

That puts the spotlight squarely on first party data. However businesses whose models have been dependent on access to big data about people — i.e. being able to make scores of connections by joining up information on people from different databases/sources (aka profiling) — are unlikely to be content with relying purely on what they’ve been able to learn by themselves.

This is where InfoSum comes in, billing itself as a “neutral data collaboration platform”.

Companies that may have been accustomed to getting their hands on lashings of personal data in years past — as a result of rampant, industry-wide third-party data collection (via technologies like tracking cookies) combined with (ahem) lax data governance — are having to cast around for alternatives. And that appears to be stoking InfoSum’s growth.

And on the marketing front, remember, third-party cookies are in the process of going away as Google tightens that screw.

“We are growing faster than Slack (at the equivalent stage, e.g. Series A->B) because we are the one solution that is replacing the old way of doing things,” founder Nick Halstead tells TechCrunch. “Experian, LiveRamp, Acxiom, TransUnion — they all offer solutions to take your data. InfoSum is offering the equivalent of the ‘Cisco router for customer data’; we don’t own the data, we are just selling boxes to make it all connect.”

“The announcement today — ‘InfoSum Bridge’ — is the next generation of building the ultimate network to ‘bridge the industry chasm’ it has right now of hundreds of competing IDs, technical solutions and identity types, bringing an infrastructure approach,” he adds.

We took a deep dive into InfoSum’s first product back in 2018 — when it was just offering early adopters a glimpse of the “art of the possible”, as it put it then.

Three-plus years on, it’s touting a significant expansion of its pipeline, having baked in support for multiple ID vendors/types, as well as adding probabilistic capabilities (to do matching on users where there is no ID).

Per a spokesman: “InfoSum Bridge is an extension of our existing and previous infrastructure. It enables a significant expansion of both our customer identity linking, and the limits of what is possible for data collaboration in a secure and privacy-focused manner. This is a combination of new product enhancements and announcement of partnerships. We’ve built capabilities to support across all ID vendors and types but also probabilistic and support for those publishers with unauthenticated audiences.”

InfoSum bills its platform as “the future of identity connectivity”. Although, as Halstead notes, there is now growing competition for that concept, as the adtech industry scrambles to build out alternative tracking systems and ID services ahead of Google crushing their cookies for good.

But it’s essentially making a play to be the trusted, independent layer that can link them all.

Exactly what this technical wizardry means for Internet users’ privacy is difficult to say. If, for example, it continues to enable manipulative microtargeting that’s hardly going to sum to progress.

InfoSum has previously told us its approach is designed to avoid individuals being linked and identified via the matching — with, for example, limits placed on the bin sizes. Although its platform is also configurable (which puts privacy levers in its customers’ hands). Plus there could be edge cases where overlapped datasets result in a 100% match for an individual. So a lot is unclear.

The security story looks cleaner, though.

If the data is properly managed by InfoSum (and it touts “comprehensive independent audits”, as well as pointing to the decentralized architecture as an advantage) that’s a big improvement on — at least — one alternative scenario of whole databases being passed around between businesses which may be (to put it politely) uninterested in securing people’s data themselves.

InfoSum’s PR includes the three canned quotes (below) from the trio of marketing industry users it’s disclosing today.

All of whom sound very happy indeed that they’ve found a way to keep their “data-driven” marketing alive while simultaneously getting to claim it’s “privacy-safe”…

John Lee, Global Chief Strategy Officer, Merkle: “The conversation around identity is continuing to be top of mind for marketers across the industry, and as the landscape rapidly changes, it’s essential that brands have avenues to work together using first-party identity and data in a privacy-safe way. The InfoSum Bridge solution provides our clients and partners a way to collaborate using their first-party data, resolved to Merkury IDs and data, with even greater freedom and confidence than with traditional clean room or safe haven approaches.”

Lou Paskalis, Chairman, MMA Global Media and Data Board: “As marketers struggle to better leverage their first-party data in the transition from the cookie era to the consent era, I would have expected more innovative solutions to emerge.  One bright spot is InfoSum, which offers a proprietary technology to connect data, yet never share that data. This is the most customer-friendly and compliant technology that I’ve seen that enables marketers to fully realize the true potential of their first party data. What InfoSum has devised is an elegant way to respect consumers’ privacy choices while enabling marketers to realize the full benefit of their first party data.”

Colin Grieves, Managing Director, Experian: “At Experian we are committed to a culture of customer-centric data innovation, helping develop more meaningful and seamless connections between brands and their audiences. InfoSum Bridge gives us a scalable environment for secure data connectivity and collaboration. Bridge is at the core of the Experian Match offering, which allows brands and publishers alike the ability to understand and engage the right consumers in the digital arena at scale, whilst safeguarding consumer data and privacy.”

Thing is, clever technical architecture that enables big-data-fuelled modelling and profiling of people to continue, via pattern matching to identify ‘lookalike’ customers who can (for example) be bucketed and targeted with ads, doesn’t actually sum to privacy as most people would understand it… But, for sure, impressive tech architecture, guys.

The same issue attaches to FLoCs, Google’s proposed replacement for tracking cookies — which also relies on federation (and which the EFF has branded a “terrible idea”, warning that such an approach actually risks amplifying predatory targeting).

The tenacity with which the marketing industry seeks to cling to microtargeting does at least underline why rights-focused regulatory oversight of adtech is going to be essential if we’re to stamp out systematic societal horrors like ads that scale bias by discriminating against protected groups, or the anti-democratic manipulation of voters that’s enabled by opaque targeting and hyper-targeted messaging, circumventing the necessary public scrutiny.

Tl;dr: Privacy is not just important for the individual. It’s a collective good. And keeping that collective commons safe from those who would seek to exploit it — for a quick buck or worse — is going to require a whole other type of oversight architecture.
