UK’s MarketFinance secures $383M to fuel its online loans platform for SMBs

Small and medium businesses regularly face cashflow problems. But if that’s an already-inconvenient predicament, it has been exacerbated to the breaking point for too many during the Covid-19 pandemic. Now, a UK startup called MarketFinance — which has built a loans platform to help SMBs stay afloat through those leaner times — is announcing a big funding infusion of £280 million ($383 million) as it gears up for a new wave of lending requests.

“It’s a good time to lend, at the start of the economic cycle,” CEO and founder Anil Stocker said in an interview.

The funding is coming mostly in the form of debt — money loaned to MarketFinance to in turn loan out to its customers as an approved partner of the UK government’s Recovery Loan Scheme; and £10 million ($14 million) of it is equity that MarketFinance will be using to continue enhancing its platform.

Italian bank Intesa Sanpaolo S.p.A. and an unnamed “global investment firm” are providing the debt, while the equity portion is being led by Black River Ventures (which has also backed Marqeta, Upgrade, Coursera and Digital Ocean) with participation from existing backer, Barclays Bank PLC. Barclays is a strategic investor: MarketFinance powers the bank’s online SMB loans service. Other investors in the startup include Northzone.

We understand that the company’s valuation is somewhere between $250 million and $500 million, although officially it is not disclosing any numbers.

Stocker said that MarketFinance has been profitable since 2018, one reason why it didn’t give up much equity in this current tranche of funding.

“We are building a sustainable business, and the equity we did raise was to unlock better debt at better prices,” he said. “It can help to post more equity on the balance sheet.” He said the money will be “going into our reserves” and used for new product development, marketing and to continue building out its API connectivity.

That last development is important: it taps into the big wave of “embedded finance” plays we are seeing today, where third parties offer, on their own platforms, loans to customers — with the loan product powered by MarketFinance, similar to what Barclays does currently. The range of companies tapping into this is potentially as vast as the internet itself. The promise of embedded finance is that any online brand that already does business with SMEs could potentially offer those SMEs loans to… do more business together.
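To make the mechanics concrete, here is a minimal sketch of what an embedded-lending integration can look like from a partner platform’s side, assuming a hypothetical REST API; the endpoint, field names and credentials are illustrative placeholders and are not MarketFinance’s actual interface.

```python
# Minimal sketch of an embedded-lending call from a partner platform.
# The base URL, endpoint, fields and key are hypothetical placeholders,
# not MarketFinance's real API.
import requests

API_BASE = "https://api.lender.example/v1"   # hypothetical lender API
API_KEY = "sk_test_placeholder"              # placeholder credential

def request_loan_offer(business_id: str, amount_gbp: int, term_months: int) -> dict:
    """Ask the lending partner to quote a loan for a merchant on our platform."""
    resp = requests.post(
        f"{API_BASE}/loan-offers",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "business_id": business_id,   # merchant already onboarded to our platform
            "amount": amount_gbp,
            "currency": "GBP",
            "term_months": term_months,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"offer_id": "...", "apr": 9.9, "status": "pending"}
```

The point of the model is that the merchant never leaves the partner’s checkout or dashboard; the credit decision and the capital sit with the lender behind the API.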

MarketFinance began life several years ago as MarketInvoice, with its basic business model focused on providing short-term loans to a given SMB against the value of its unpaid invoices — a practice typically described as invoice finance. The idea at the time was to solve the most immediate cashflow issue faced by SMBs by leveraging the thing (unpaid invoices, which typically would eventually get paid, just not immediately) that caused the cashflow issue in the first place.

A lot of the financing that SMBs get against invoices, though, is mainly in the realm of working capital, helping companies make payroll and pay their own monthly bills. But Stocker said that over time, the startup saw a larger opportunity to provide bigger sums of financing to cover more ambitious business expansion goals. That was two years ago, and MarketInvoice rebranded accordingly to MarketFinance. (It still very much offers the invoice-based product.)

The timing turned out to be fortuitous, even if the reason has definitely not been lucky: Covid-19 came along and completely overturned how much of the world works. SMEs have borne the brunt of that upheaval, not least because of those cashflow issues and because their size leaves them less able to diversify or pivot when market forces shift.

This presented a big opportunity for MarketFinance, it turned out.

Stocker said that the early part of the Covid-19 pandemic saw the bulk of loans being taken out to manage business interruptions. Interruptions could mean business closures, or they could mean simply customers no longer coming as they did before, and so on. “The big theme was frictionless access to funding,” he said — using technology to assess applications digitally, better and more quickly, with “no meetings with bank managers”, and reducing the response time to days from the typical 4-6 weeks that SMBs would traditionally have expected.

If last year was more about “panicking, shoring up or pivoting,” in Stocker’s words, “now what we’re seeing are a bunch of them struggling with supply chain issues, Brexit exacerbations and labor shortages. It’s really hard for them to manage all that.”

He said that the number of loan applications has been through the roof, so no shortage of demand. He estimates that monthly loan requests have been as high as $500 million, a huge sum for one small startup in the UK. It’s selective in what it lends: “We choose to support those we thought will return the money,” he said.


As UK Gov reaches out to tech, investors threaten to ‘pull capital’ over M&A regulator over-reach

UK competition regulators are spooking tech investors in the country with an implied threat to clamp down on startup M&A, according to a new survey of the industry.

As the UK’s Chancellor of the Exchequer engaged with the tech industry at a ‘Chatham House’ style event today, the Coalition for a Digital Economy (Coadec) think-tank released a survey of over 50 key investors which found startup investors are prepared to pull capital over the prospect of the Competition and Markets Authority’s (CMA) new Digital Markets Unit (DMU) becoming a “whole-economy regulator by accident”. Investors are concerned after the CMA recommended the DMU be given ‘expanded powers’ regarding its investigations of M&A deals.

Controversy has been stirring up around the DMU, as the prospect of it blocking tech startup acquisitions – especially by US firms, sometimes on the grounds of national security – has gradually risen.

In the Coadec survey, half of investors said they would significantly reduce the amount they invested in UK startups if the ability to exit was restricted, and a further 22.5% said they would stop investing in UK startups completely under a stricter regulatory environment.

Furthermore, 60% of investors surveyed said they felt UK regulators only had a “basic understanding” of the startup market, and 22.2% felt regulators didn’t understand the tech startup market at all.

Coadec said its conservative estimates showed that the UK Government’s DMU proposals could create a £2.2bn drop in venture capital going into the UK, potentially reducing UK economic growth by £770m.

Commenting on the report, Dom Hallas, Executive Director of Coadec, said: “Startups thrive in competitive markets. But nurturing an ecosystem means knowing where to intervene and when not to. The data shows that not only is there a risk that the current proposals could miss some bad behavior in some areas like B2B markets whilst creating unnecessary barriers in others like M&A. Just as crucially, there’s frankly not a lot of faith in the regulators proposing them either.”

The survey results emerged just as Chancellor Rishi Sunak convened the “Treasury Connect” conference in London today which brought together some of the CEOs of the UK’s biggest tech firms and VCs in a ‘listening process’ designed to reach out to the industry.

However, at a press conference after the event, Sunak pushed back on the survey results, citing research by Professor Jason Furman, chair of the Digital Competition Expert Panel, which has found that “not a single acquisition” had been blocked by the DMU, and there are “no false positives” in decision making to date. Sunak said the “system looks at this in order to get the balance right.”

In addition, a statement from the Treasury, out today, said more than one-fifth of people in the UK’s biggest cities are now employed in the tech sector, which also saw £11.2 billion invested last year, setting a new investment record, it claimed.

Sunak also said the Future Fund, which backed UK-based tech firms with convertible loans during the pandemic, has handed UK taxpayers stakes in more than 150 high-growth firms.

These include Vaccitech PLC, which co-invented the COVID-19 vaccine with the University of Oxford (better known as the AstraZeneca vaccine), which has gone to 170 countries worldwide. The Future Fund also invested in Century Tech, an EdTech startup that uses AI to personalize learning for children.

The Future Fund has continued since July this year as the UK government’s £375 million ‘Future Fund: Breakthrough’ initiative, aimed at high-growth, R&D-intensive companies.

Coadec’s survey also found 70% of investors felt UK regulators “only thought about large incumbent firms” when designing competition rules, rather than startups or future innovation.

However, the survey found London was still rated as highly as California as an attractive destination for startups and investors.


UK dials up the spin on data reform, claiming ‘simplified’ rules will drive ‘responsible’ data sharing

The U.K. government has announced a consultation on plans to shake up the national data protection regime, as it looks at how to diverge from European Union rules following Brexit.

It’s also a year since the U.K. published a national data strategy in which it said it wanted pandemic levels of data sharing to become Britain’s new normal.

The Department for Digital, Culture, Media and Sport (DCMS) has today trailed an incoming reform of the Information Commissioner’s Office — saying it wants to broaden the ICO’s remit to “champion sectors and businesses that are using personal data in new, innovative and responsible ways to benefit people’s lives”; and promising “simplified” rules to encourage the use of data for research that “benefits people’s lives”, such as in the field of healthcare.

It also wants a new structure for the regulator — including the creation of an independent board and chief executive for the ICO, to mirror the governance structures of other regulators such as the Competition and Markets Authority, Financial Conduct Authority and Ofcom.

Additionally, it said the data reform consultation will consider how the new regime can help mitigate the risks around algorithmic bias — something the EU is already moving to legislate on, setting out a risk-based proposal for regulating applications of AI back in April.

Which means the U.K. risks being left lagging if it’s only going to concern itself with a narrow focus on “bias mitigation”, rather than considering the wider sweep of how AI is intersecting with and influencing its citizens’ lives.

In a press release announcing the consultation, DCMS highlights an artificial intelligence partnership involving Moorfields Eye Hospital and the University College London Institute of Ophthalmology, which kicked off back in 2016, as an example of the kinds of beneficial data sharing it wants to encourage. Last year the researchers reported that their AI had been able to predict the development of wet age-related macular degeneration more accurately than clinicians.

The partnership also involved (Google-owned) DeepMind and now Google Health — although the government’s PR doesn’t make mention of the tech giant’s involvement. It’s an interesting omission, given that DeepMind’s name is also attached to a notorious U.K. patient data-sharing scandal, which saw another London-based NHS Trust (the Royal Free) sanctioned by the ICO, in 2017, for improperly sharing patient data with the Google-owned company during the development phase of a clinician support app (which Google is now in the process of discontinuing).

DCMS may be keen to avoid spelling out that its goal for the data reforms — aka to “remove unnecessary barriers to responsible data use” — could end up making it easier for commercial entities like Google to get their hands on U.K. citizens’ medical records.

The sizeable public backlash over the most recent government attempt to requisition NHS users’ medical records — for vaguely defined “research” purposes (aka the “General Practice Data for Planning and Research”, or GPDPR, scheme) — suggests that a government-enabled big-health-data-free-for-all might not be so popular with U.K. voters.

“The government’s data reforms will provide clarity around the rules for the use of personal data for research purposes, laying the groundwork for more scientific and medical breakthroughs,” is how DCMS’ PR skirts the sensitive health data sharing topic.

Elsewhere there’s talk of “reinforc[ing] the responsibility of businesses to keep personal information safe, while empowering them to grow and innovate” — so that sounds like a yes to data security but what about individual privacy and control over what happens to your information?

The government seems to be saying that will depend on other aims — principally economic interests attached to the U.K.’s ability to conduct data-driven research or secure trade deals with other countries that don’t have the same (current) high U.K. standards of data protection.

There are some purely populist flourishes here too — with DCMS couching its ambition for a data regime “based on common sense, not box ticking” — and flagging up plans to beef up penalties for nuisance calls and text messages. Because, sure, who doesn’t like the sound of a crackdown on spam?

Except spam text messages and nuisance calls are a pretty quaint concern to zero in on in an era of apps and data-driven, democracy-disrupting mass surveillance — which was something the outgoing information commissioner raised as a major issue of concern during her tenure at the ICO.

The same populist anti-spam messaging has already been deployed by ministers to attack the need to obtain internet users’ consent for dropping tracking cookies — which the digital minister Oliver Dowden recently suggested he wants to do away with — for all but “high risk” purposes.

Having a system of rights wrapping people’s data that gives them a say over (and a stake in) how it can be used appears to be getting reframed in the government’s messaging as irresponsible or even non-patriotic — with DCMS pushing the notion that such rights stand in the way of more important economic or highly generalized “social” goals.

Not that it has presented any evidence for that — or even that the U.K.’s current data protection regime got in the way of (the very ample) data sharing during COVID-19… While negative uses of people’s information are being condensed in DCMS’ messaging to the narrowest possible definition — of spam that’s visible to an individual — never mind how that person got targeted with the nuisance calls/spam texts in the first place.

The government is taking its customary “cake and eat it” approach to spinning its reform plan — claiming it will both “protect” people’s data while also trumpeting the importance of making it really easy for citizens’ information to be handed off to anyone who wants it, so long as they can claim they’re doing some kind of “innovation”, while also larding its PR with canned quotes dubbing the plan “bold” and “ambitious”.

So while DCMS’ announcement says the reform will “maintain” the U.K.’s (currently) world-leading data protection standards, it directly rows back — saying the new regime will (merely) “build on” a few broad-brush “key elements” of the current rules (specifically it says it will keep “principles around data processing, people’s data rights and mechanisms for supervision and enforcement”).

Clearly the devil will be in the detail of the proposals which are due to be published tomorrow morning. So expect more analysis to debunk the spin soon.

But in one specific trailed change DCMS says it wants to move away from a “one-size-fits-all” approach to data protection compliance — and “allow organisations to demonstrate compliance in ways more appropriate to their circumstances, while still protecting citizens’ personal data to a high standard”.

That implies that smaller data-mining operations — DCMS’s PR uses the example of a hairdresser’s but plenty of startups can employ fewer staff than the average barber’s shop — may be able to expect to get a pass to ignore those ‘high standards’ in the future.

Which suggests the U.K.’s “high standards” may, under Dowden’s watch, end up resembling more of a Swiss Cheese…

Data protection is a “how to, not a don’t do”…

The man who is likely to become the U.K.’s next information commissioner, New Zealand’s privacy commissioner John Edwards, was taking questions from a parliamentary committee earlier today, as MPs considered whether to support his appointment to the role.

If he’s confirmed in the job, Edwards will be responsible for implementing whatever new data regime the government cooks up.

Under questioning, he rejected the notion that the U.K.’s current data protection regime presents a barrier to data sharing — arguing that laws like GDPR should rather be seen as a “how to” and an “enabler” for innovation.

“I would take issue with the dichotomy that you presented [about privacy vs data-sharing],” he told the committee chair. “I don’t believe that policymakers and businesses and governments are faced with a choice of share or keep faith with data protection. Data protection laws and privacy laws would not be necessary if it wasn’t necessary to share information. These are two sides of the same coin.

“The UK DPA [data protection act] and UK GDPR they are a ‘how to’ — not a ‘don’t do’. And I think the UK and many jurisdictions have really finally learned that lesson through the COVID-19 crisis. It has been absolutely necessary to have good quality information available, minute by minute. And to move across different organizations where it needs to go, without friction. And there are times when data protection laws and privacy laws introduce friction and I think that what you’ve seen in the UK is that when it needs to things can happen quickly.”

He also suggested that plenty of economic gains could be achieved for the U.K. with some minor tweaks to current rules, rather than a more radical reboot being necessary. (Though clearly setting the rules won’t be up to him; his job will be enforcing whatever new regime is decided.)

“If we can, in the administration of a law which at the moment looks very much like the UK GDPR, that gives great latitude for different regulatory approaches — if I can turn that dial just a couple of points that can make the difference of billions of pounds to the UK economy and thousands of jobs so we don’t need to be throwing out the statute book and starting again — there is plenty of scope to be making improvements under the current regime,” he told MPs. “Let alone when we start with a fresh sheet of paper if that’s what the government chooses to do.”

TechCrunch asked another Edwards (no relation) — Newcastle University’s Lilian Edwards, professor of law, innovation and society — for her thoughts on the government’s direction of travel, as signalled by DCMS’ pre-proposal-publication spin, and she expressed similar concerns about the logic driving the government to argue it needs to rip up the existing standards.

“The entire scheme of data protection is to balance fundamental rights with the free flow of data. Economic concerns have never been ignored, and the current scheme, which we’ve had in essence since 1998, has struck a good balance. The great things we did with data during COVID-19 were done completely legally — and with no great difficulty under the existing rules — so that isn’t a reason to change them,” she told us.

She also took issue with the plan to reshape the ICO “as a quango whose primary job is to ‘drive economic growth’ ” — pointing out that DCMS’ PR fails to include any mention of privacy or fundamental rights, and arguing that “creating an entirely new regulator isn’t likely to do much for the ‘public trust’ that’s seen as declining in almost every poll.”

She also suggested the government is glossing over the real economic damage that would hit the U.K. if the EU decides its “reformed” standards are no longer essentially equivalent to the bloc’s. “[It’s] hard to see much concern for adequacy here; which will, for sure, be reviewed, to our detriment — prejudicing 43% of our trade for a few low value trade deals and some hopeful sell offs of NHS data (again, likely to take a wrecking ball to trust judging by the GPDPR scandal).”

She described the goal of regulating algorithmic bias as “applaudable” — but also flagged the risk of the U.K. falling behind other jurisdictions which are taking a broader look at how to regulate artificial intelligence.

Per DCMS’ press release, the government seems to be intending for an existing advisory body, called the Centre for Data Ethics and Innovation (CDEI), to have a key role in supporting its policymaking in this area — saying that the body will focus on “enabling trustworthy use of data and AI in the real-world”. However it has still not appointed a new CDEI chair to replace Roger Taylor — with only an interim chair appointment (and some new advisors) announced today.

“The world has moved on since CDEI’s work in this area,” argued Edwards. “We realise now that regulating the harmful effects of AI has to be considered in the round with other regulatory tools not just data protection. The proposed EU AI Regulation is not without flaw but goes far further than data protection in mandating better quality training sets, and more transparent systems to be built from scratch. If the UK is serious about regulating it has to look at the global models being floated but right now it looks like its main concerns are insular, short-sighted and populist.”

Patient data privacy advocacy group MedConfidential, which has frequently locked horns with the government over its approach to data protection, also queried DCMS’ continued attachment to the CDEI for shaping policymaking in such a crucial area — pointing to last year’s biased algorithm exam grading scandal, which happened under Taylor’s watch.

(NB: Taylor was also the Ofqual chair, and his resignation from that post in December cited a “difficult summer”, even as his departure from the CDEI leaves an awkward hole now… )

“The culture and leadership of CDEI led to the A-Levels algorithm, why should anyone in government have any confidence in what they say next?” said MedConfidential’s Sam Smith.

#artificial-intelligence, #data-processing, #dcms, #deep-learning, #deepmind, #europe, #google, #google-health, #healthcare, #information-commissioners-office, #john-edwards, #moorfields-eye-hospital, #nhs, #oliver-dowden, #policy, #privacy, #uk-gdpr, #uk-government, #united-kingdom

UK offers cash for CSAM detection tech targeted at e2e encryption

The UK government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual abuse material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around Internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584k) to five organizations (£85k/$117k each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within e2e encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of e2e encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp also already uses e2e encryption so that platform is already a clear target for whatever ‘safety’ technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use e2e encryption.

So there is potential for very widespread application of any ‘child safety tech’ developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high-level pressure on the tech sector over this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in the Tory-friendly newspaper The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on users’ devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a ‘privacy-focused’ company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states who might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into e2e encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support ‘middleground’ solutions — denying the government is trying to encourage technologists to come up with ways to backdoor e2e encryption.

In recent years in the UK, GCHQ has also floated the controversial idea of a so-called ‘ghost protocol’ — which would allow state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of ‘middleground’ technologies that are able to scan e2e encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of e2e encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”
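As a rough illustration of the “hash-based detection” route the guidance mentions, the sketch below matches an uploaded file against a list of known hashes. It uses plain cryptographic hashing for brevity; production systems rely on perceptual hashes that survive resizing and re-encoding, and on vetted hash databases, so the functions and data here are illustrative assumptions only.

```python
# Minimal sketch of hash-based matching against a known-hash list.
# Real deployments use perceptual hashing and curated hash databases;
# the paths and example set here are placeholders.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Fingerprint a file's bytes (exact-match only, unlike perceptual hashes)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_match(path: Path, known_hashes: set[str]) -> bool:
    """Return True if the file's hash appears in the known-hash list."""
    return sha256_of_file(path) in known_hashes
```

The harder part, which the Challenge statement alludes to with “client-side and server-side”, is where such a check can run at all once content is end-to-end encrypted.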

General information about the Challenge — which is open to applicants based anywhere, not just in the UK — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that by keeping up the pressure on the tech sector platform giants will develop this stuff themselves, as Apple has been.

The Challenge is just the latest UK government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for e2e encryption to be outlawed, as they frequently have in the past.

That said, talk of ‘preventing’ the use of e2e encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But in one already baked in change, the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the UK’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.


After years of inaction against adtech, UK’s ICO calls for browser-level controls to fix ‘cookie fatigue’

In the latest quasi-throwback toward ‘do not track‘, the UK’s data protection chief has come out in favor of a browser- and/or device-level setting to allow Internet users to set “lasting” cookie preferences — suggesting this as a fix for the barrage of consent pop-ups that continues to infest websites in the region.

European web users digesting this development in an otherwise monotonously unchanging regulatory saga, should be forgiven — not only for any sense of déjà vu they may experience — but also for wondering if they haven’t been mocked/gaslit quite enough already where cookie consent is concerned.

Last month, UK digital minister Oliver Dowden took aim at what he dubbed an “endless” parade of cookie pop-ups — suggesting the government is eyeing watering down consent requirements around web tracking as ministers consider how to diverge from European Union data protection standards, post-Brexit. (He’s slated to present the full sweep of the government’s data ‘reform’ plans later this month so watch this space.)

Today the UK’s outgoing information commissioner, Elizabeth Denham, stepped into the fray to urge her counterparts in G7 countries to knock heads together and coalesce around the idea of letting web users express generic privacy preferences at the browser/app/device level, rather than having to do it through pop-ups every time they visit a website.

In a statement announcing “an idea” she will present this week during a virtual meeting of fellow G7 data protection and privacy authorities — less pithily described in the press release as being “on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data” — Denham said: “I often hear people say they are tired of having to engage with so many cookie pop-ups. That fatigue is leading to people giving more personal data than they would like.

“The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience. While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

“There are nearly two billion websites out there taking account of the world’s privacy preferences. No single country can tackle this issue alone. That is why I am calling on my G7 colleagues to use our convening power. Together we can engage with technology firms and standards organisations to develop a coordinated approach to this challenge,” she added.

Contacted for more on this “idea”, an ICO spokeswoman reshuffled the words thusly: “Instead of trying to effect change through nearly 2 billion websites, the idea is that legislators and regulators could shift their attention to the browsers, applications and devices through which users access the web.

“In place of click-through consent at a website level, users could express lasting, generic privacy preferences through browsers, software applications and device settings – enabling them to set and update preferences at a frequency of their choosing rather than on each website they visit.”

Of course a browser-baked ‘Do not track’ (DNT) signal is not a new idea. It’s around a decade old at this point. Indeed, it could be called the idea that can’t die because it’s never truly lived — as earlier attempts at embedding user privacy preferences into browser settings were scuppered by lack of industry support.

However the approach Denham is advocating, vis-a-vis “lasting” preferences, may in fact be rather different to DNT — given her call for fellow regulators to engage with the tech industry, and its “standards organizations”, and come up with “practical” and “business friendly” solutions to the regional Internet’s cookie pop-up problem.

It’s not clear what consensus — practical or, er, simply pro-industry — might result from this call. If anything.

Indeed, today’s press release may be nothing more than Denham trying to raise her own profile since she’s on the cusp of stepping out of the information commissioner’s chair. (Never waste a good international networking opportunity and all that — her counterparts in the US, Canada, Japan, France, Germany and Italy are scheduled for a virtual natter today and tomorrow where she implies she’ll try to engage them with her big idea).

Her UK replacement, meanwhile, is already lined up. So anything Denham personally champions right now, at the end of her ICO chapter, may have a very brief shelf life — unless she’s set to parachute into a comparable role at another G7 caliber data protection authority.

Nor is Denham the first person to make a revived pitch for a rethink on cookie consent mechanisms — even in recent years.

Last October, for example, a US-centric tech-publisher coalition came out with what they called Global Privacy Control (GPC) — aiming to build momentum for a browser-level pro-privacy signal to stop the sale of personal data, geared toward California’s Consumer Privacy Act (CCPA), though pitched as something that could have wider utility for Internet users.

By January this year they announced 40M+ users were making use of a browser or extension that supports GPC — along with a clutch of big name publishers signed up to honor it. But it’s fair to say its global impact so far remains limited. 
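For a sense of what honoring such a browser-level signal involves in practice, the GPC proposal has participating browsers send a “Sec-GPC: 1” request header (and expose navigator.globalPrivacyControl to scripts); the sketch below shows a server respecting that header, with Flask used purely as an illustrative choice.

```python
# Minimal sketch of honoring the GPC signal server-side.
# Per the GPC proposal, participating browsers send "Sec-GPC: 1";
# the Flask app and route here are illustrative only.
from flask import Flask, request

app = Flask(__name__)

def gpc_opt_out() -> bool:
    """True if the visitor's browser is sending the Global Privacy Control signal."""
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if gpc_opt_out():
        # Treat as an opt-out of data sale/sharing: skip third-party trackers.
        return "Tracking disabled for this visit."
    return "Default experience."
```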

More recently, European privacy group noyb published a technical proposal for a European-centric automated browser-level signal that would let regional users configure advanced consent choices — enabling the more granular controls it said would be needed to fully mesh with the EU’s more comprehensive (vs CCPA) legal framework around data protection.

The proposal, for which noyb worked with the Sustainable Computing Lab at the Vienna University of Economics and Business, is called Advanced Data Protection Control (ADPC). And noyb has called on the EU to legislate for such a mechanism — suggesting there’s a window of opportunity as lawmakers there are also keen to find ways to reduce cookie fatigue (a stated aim for the still-in-train reform of the ePrivacy rules, for example).

So there are some concrete examples of what practical, less fatiguing yet still pro-privacy consent mechanisms might look like to lend a little more color to Denham’s ‘idea’ — although her remarks today don’t reference any such existing mechanisms or proposals.

(When we asked the ICO for more details on what she’s advocating for, its spokeswoman didn’t cite any specific technical proposals or implementations, historical or contemporary, either, saying only: “By working together, the G7 data protection authorities could have an outsized impact in stimulating the development of technological solutions to the cookie consent problem.”)

So Denham’s call to the G7 does seem rather low on substance vs profile-raising noise.

In any case, the really big elephant in the room here is the lack of enforcement around cookie consent breaches — including by the ICO.

Add to that, there’s the now very pressing question of how exactly the UK will ‘reform’ domestic law in this area (post-Brexit) — which makes the timing of Denham’s call look, well, interestingly opportune. (And difficult to interpret as anything other than opportunistically opaque at this point.)

The adtech industry will of course be watching developments in the UK with interest — and would surely be cheering from the rooftops if domestic data protection ‘reform’ results in amendments to UK rules that allow the vast majority of websites to avoid having to ask Brits for permission to process their personal data, say by opting them into tracking by default (under the guise of ‘fixing’ cookie friction and cookie fatigue for them).

That would certainly be mission accomplished after all these years of cookie-fatigue-generating-cookie-consent-non-compliance by surveillance capitalism’s industrial data complex.

It’s not yet clear which way the UK government will jump — but eyebrows should be raised at the ICO writing today that it expects compliance with (current) UK law, given how roundly it has failed to tackle the adtech industry’s role in cynically sicking up said cookie fatigue by taking no action against such systemic breaches.

The bald fact is that the ICO has — for years — avoided tackling adtech abuse of data protection, despite acknowledging publicly that the sector is wildly out of control.

Instead, it has opted for a cringing ‘process of engagement’ (read: appeasement) that has condemned UK Internet users to cookie pop-up hell.

This is why the regulator is being sued for inaction — after it closed a long-standing complaint against the security abuse of people’s data in real-time bidding ad auctions with nothing to show for it… So, yes, you can be forgiven for feeling gaslit by Denham’s call for action on cookie fatigue following the ICO’s repeat inaction on the causes of cookie fatigue…

Not that the ICO is alone on that front, however.

There has been a fairly widespread failure by EU regulators to tackle systematic abuse of the bloc’s data protection rules by the adtech sector — with a number of complaints (such as this one against IAB Europe’s self-styled ‘transparency and consent framework’) still working, painstakingly, through the various labyrinthine regulatory processes.

France’s CNIL has probably been the most active in this area — last year slapping Amazon and Google with fines of $42M and $120M for dropping tracking cookies without consent, for example. (And before you accuse CNIL of being ‘anti-American’, it has also gone after domestic adtech.)

But elsewhere — notably Ireland, where many adtech giants are regionally headquartered — the lack of enforcement against the sector has allowed for cynical, manipulative and/or meaningless consent pop-ups to proliferate as the dysfunctional ‘norm’, while investigations have failed to progress and EU citizens have been forced to become accustomed, not to regulatory closure (or indeed rapture), but to an existentially endless consent experience that’s now being (re)branded as ‘cookie fatigue’.

Yes, even with the EU’s General Data Protection Regulation (GDPR) coming into application in 2018 and beefing up (in theory) consent standards.

This is why the privacy campaign group noyb is now lodging scores of complaints against cookie consent breaches — to try to force EU regulators to actually enforce the law in this area, even as it also finds time to put up a practical technical proposal that could help shrink cookie fatigue without undermining data protection standards. 

It’s a shining example of action that has yet to inspire the lion’s share of the EU’s actual regulators to act on cookies. The tl;dr is that EU citizens are still waiting for the cookie consent reckoning — even if there is now a bit of high level talk about the need for ‘something to be done’ about all these tedious pop-ups.

The problem is that while GDPR certainly cranked up the legal risk on paper, without proper enforcement it’s just a paper tiger. And the pushing around of lots of paper is very tedious, clearly. 

Most cookie pop-ups you’ll see in the EU are thus essentially privacy theatre; at the very least they’re unnecessarily irritating because they create ongoing friction for web users who must constantly respond to nags for their data (typically to repeatedly try to deny access if they can actually find a ‘reject all’ setting).

But — even worse — many of these pervasive pop-ups are actively undermining the law (as a number of studies have shown) because the vast majority do not meet the legal standard for consent.

So the cookie consent/fatigue narrative is actually a story of faux compliance enabled by an enforcement vacuum that’s now also encouraging the watering down of privacy standards as a result of so much unpunished flouting of the law.

There is a lesson here, surely.

‘Faux consent’ pop-ups that you can easily stumble across when surfing the ‘ad-supported’ Internet in Europe include those failing to provide users with clear information about how their data will be used; or not offering people a free choice to reject tracking without being penalized (such as with no/limited access to the content they’re trying to access), or at least giving the impression that accepting is a requirement to access said content (dark pattern!); and/or otherwise manipulating a person’s choice by making it super simple to accept tracking and far, far, far more tedious to deny.

You can also still sometimes find cookie notices that don’t offer users any choice at all — and just pop up to inform that ‘by continuing to browse you consent to your data being processed’ — which, unless the cookies in question are literally essential for provision of the webpage, is basically illegal. (Europe’s top court made it abundantly clear in 2019 that active consent is a requirement for non-essential cookies.)

Nonetheless, to the untrained eye — and sadly there are a lot of them where cookie consent notices are concerned — it can look like it’s Europe’s data protection law that’s the ass because it seemingly demands all these meaningless ‘consent’ pop-ups, which just gloss over an ongoing background data grab anyway.

The truth is regulators should have slapped down these manipulative dark patterns years ago.

The problem now is that regulatory failure is encouraging political posturing — and, in a twisting double-back throw by the ICO, regulatory thrusting around the idea that some newfangled mechanism is what’s really needed to remove all this universally inconvenient ‘friction’.

An idea like noyb’s ADPC does indeed look very useful in ironing out the widespread operational wrinkles wrapping the EU’s cookie consent rules. But when it’s the ICO suggesting a quick fix after the regulatory authority has failed so spectacularly over the long duration of complaints around this issue you’ll have to forgive us for being sceptical.

In such a context the notion of ‘cookie fatigue’ looks like it’s being suspiciously trumped up; fixed on as a convenient scapegoat to rechannel consumer frustration with hated online tracking toward high privacy standards — and away from the commercial data-pipes that demand all these intrusive, tedious cookie pop-ups in the first place — whilst neatly aligning with the UK government’s post-Brexit political priorities on ‘data’.

Worse still: The whole farcical consent pantomime — which the adtech industry has aggressively engaged in to try to sustain a privacy-hostile business model in spite of beefed up European privacy laws — could be set to end in genuine tragedy for user rights if standards end up being slashed to appease the law mockers.

The target of regulatory ire and political anger should really be the systematic law-breaking that’s held back privacy-respecting innovation and non-tracking business models — by making it harder for businesses that don’t abuse people’s data to compete.

Governments and regulators should not be trying to dismantle the principle of consent itself. Yet — at least in the UK — that does now look horribly possible.

Laws like GDPR set high standards for consent which — if they were but robustly enforced — could lead to reform of highly problematic practices like behavioral advertising combined with the out-of-control scale of programmatic advertising.

Indeed, we should already be seeing privacy-respecting forms of advertising being the norm, not the alternative — free to scale.

Instead, thanks to widespread inaction against systematic adtech breaches, there has been little incentive for publishers to reform bad practices and end the irritating ‘consent charade’ — which keeps cookie pop-ups mushrooming forth, oftentimes with ridiculously lengthy lists of data-sharing ‘partners’ (i.e. if you do actually click through the dark patterns to try to understand what this claimed ‘choice’ you’re being offered really is).

On top of being a criminal waste of web users’ time, all this raises the prospect of attention-seeking, politically charged regulators deciding that such ‘friction’ justifies giving data-mining giants carte blanche to torch user rights — if the intention is to fire up the G7 to send a collective invite to the tech industry to come up with “practical” alternatives to asking people for their consent to track them — and all because authorities like the ICO have been too risk averse to actually defend users’ rights in the first place.

Dowden’s remarks last month suggest the UK government may be preparing to use cookie consent fatigue as convenient cover for watering down domestic data protection standards — at least if it can get away with the switcheroo.

Nothing in the ICO’s statement today suggests it would stand in the way of such a move.

Now that the UK is outside the EU, the UK government has said it believes it has an opportunity to deregulate domestic data protection — although it may find there are legal consequences for domestic businesses if it diverges too far from EU standards.

Denham’s call to the G7 naturally includes a few EU countries (the biggest economies in the bloc) but by targeting this group she’s also seeking to engage regulators further afield — in jurisdictions that currently lack a comprehensive data protection framework. So if the UK moves, cloaked in rhetoric of ‘Global Britain’, to water down its (EU-based) high domestic data protection standards it will be placing downward pressure on international aspirations in this area — as a counterweight to the EU’s geopolitical ambitions to drive global standards up to its level.

The risk, then, is a race to the bottom on privacy standards among Western democracies — at a time when awareness about the importance of online privacy, data protection and information security has actually never been higher.

Furthermore, any UK move to weaken data protection also risks putting pressure on the EU’s own high standards in this area — as the regional trajectory would be down not up. And that could, ultimately, give succour to forces inside the EU that lobby against its commitment to a charter of fundamental rights — by arguing such standards undermine the global competitiveness of European businesses.

So while cookies themselves — or indeed ‘cookie fatigue’ — may seem an irritatingly small concern, the stakes attached to this tug of war around people’s rights over what can happen to their personal data are very high indeed.


UK now expects compliance with children’s privacy design code

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year. However, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys and games and edtech, but also online retail and for-profit online services such as social media and video-sharing platforms that have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy-hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
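To put that in concrete terms, using purely illustrative arithmetic of our own rather than an ICO calculation: for a business turning over £1 billion a year, 4% works out at £40 million, comfortably above the £17.5 million floor, so the larger figure would set the theoretical maximum; for smaller firms the fixed £17.5 million is the higher of the two.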

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under 18s in certain countries which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later TikTok also said it would add more privacy protections for teens — though it had also made earlier changes to privacy defaults for under 18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of eight child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will push for, with Bonner suggesting it could involve “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the Open Rights Group has warned).

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction of the data minimization push in the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyber bullying and CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.

 

#data-processing, #data-protection, #encryption, #europe, #general-data-protection-regulation, #google, #human-rights, #identity-management, #instagram, #online-harms, #online-retail, #online-safety, #policy, #privacy, #regulatory-compliance, #social-issues, #social-media, #social-media-platforms, #tc, #tiktok, #uk-government, #united-kingdom, #united-states

Google confirms it’s pulling the plug on Streams, its UK clinician support app

Google is infamous for spinning up products and killing them off, often in very short order. It’s an annoying enough habit when it’s stuff like messaging apps and games. But the tech giant’s ambitions stretch into many domains that touch human lives these days. Including, most directly, healthcare. And — it turns out — so does Google’s tendency to kill off products that its PR has previously touted as ‘life saving’.

To wit: Following a recent reconfiguration of Google’s health efforts — reported earlier by Business Insider — the tech giant confirmed to TechCrunch that it is decommissioning its clinician support app, Streams.

The app, which Google Health PR bills as a “mobile medical device”, was developed back in 2015 by DeepMind, an AI division of Google — and has been used by the UK’s National Health Service in the years since, with a number of NHS Trusts inking deals with DeepMind Health to roll out Streams to their clinicians.

At the time of writing, one NHS Trust — London’s Royal Free — is still using the app in its hospitals.

But, presumably, not for too much longer since Google is in the process of taking Streams out back to be shot and tossed into its deadpool — alongside the likes of its ill-fated social network, Google+, and Internet balloon company Loon, to name just two of a frankly endless list of now defunct Alphabet/Google products.

Other NHS Trusts we contacted which had previously rolled out Streams told us they have already stopped using the app.

University College London NHS Trust confirmed to TechCrunch that it severed ties with Google Health earlier this year.

“Our agreement with Google Health (initially DeepMind) came to an end in March 2021 as originally planned. Google Health deleted all the data it held at the end of the [Streams] project,” a UCL NHS Trust spokesperson told TechCrunch.

Imperial College Healthcare NHS Trust also told us it stopped using Streams this summer (in July) — and said patient data is in the process of being deleted.

“Following the decommissioning of Streams at the Trust earlier this summer, data that has been processed by Google Health to provide the service to the Trust will be deleted and the agreement has been terminated,” a spokesperson said.

“As per the data sharing agreement, any patient data that has been processed by Google Health to provide the service will be deleted. The deletion process is started once the agreement has been terminated,” they added, saying the contractual timeframe for Google deleting patient data is six months.

Another Trust, Taunton & Somerset, also confirmed its involvement with Streams had already ended. 

The Streams contracts DeepMind inked with the NHS Trusts were for five years — so these contracts were likely approaching the end of their terms, anyway.

Contract extensions would have had to be agreed by both parties. And Google’s decision to decommission Streams may be factoring in a lack of enthusiasm from involved Trusts to continue using the software — although if that’s the case it may, in turn, be a reflection of Trusts’ perceptions of Google’s weak commitment to the project.

Neither side is saying much publicly.

But as far as we’re aware the Royal Free is the only NHS Trust still using the clinician support app as Google prepares to cut off Streams’ life support.

No more Streams?

The Streams story has plenty of wrinkles, to put it politely.

For one thing, despite being developed by Google’s AI division — and despite DeepMind founder Mustafa Suleyman saying the goal for the project was to find ways to integrate AI into Streams so the app could generate predictive healthcare alerts — the Streams app doesn’t involve any artificial intelligence.

An algorithm in Streams alerts doctors to the risk of a patient developing acute kidney injury (AKI), but it relies on an existing AKI algorithm developed by the NHS. So Streams essentially digitized and mobilized existing practice.
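For a rough sense of what that kind of rules-based alerting involves, here is a minimal illustrative sketch in Python. To be clear, this is not Streams’ code or the official NHS AKI algorithm; the thresholds, function name and structure are our own assumptions, included only to show the general shape of a creatinine-ratio check of this sort.

# Illustrative sketch only: not Streams source code or the official NHS AKI algorithm.
# Thresholds, names and structure are assumptions, to show a rules-based alert.

def aki_alert_stage(current_creatinine: float, baseline_creatinine: float) -> int:
    """Compare the latest serum creatinine reading against a patient's baseline
    and return a rough alert stage (0 = no alert)."""
    if baseline_creatinine <= 0:
        raise ValueError("baseline creatinine must be positive")
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # large rise relative to baseline: highest-priority alert
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0      # no alert raised

# Example: a reading of 180 against a baseline of 90 (a ratio of 2.0)
# would raise a stage 2 alert that could be pushed to a clinician's device.
print(aki_alert_stage(180, 90))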

As a result, it always looked odd that an AI division of an adtech giant would be so interested in building, provisioning and supporting clinician support software over the long term. But then — as it panned out — neither DeepMind nor Google were in it for the long haul at the patient’s bedside.

DeepMind and the NHS Trust it worked with to develop Streams (the aforementioned Royal Free) started out with wider ambitions for their partnership — as detailed in an early 2016 memo we reported on, which set out a five-year plan to bring AI to healthcare. Plus, as we noted above, Suleyman kept up the push for years — writing later in 2019 that: “Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”

A key misstep for the project emerged in 2017 — through press reporting of a data scandal, as details of the full scope of the Royal Free-DeepMind data-sharing partnership were published by New Scientist (which used a freedom of information request to obtain contracts the pair had not made public).

The UK’s data protection watchdog went on to find that the Royal Free had not had a valid legal basis when it passed information on millions of patients to DeepMind during the development phase of Streams.

Which perhaps explains DeepMind’s eventual cooling of ardour for a project it had initially thought — with the help of a willing NHS partner — would provide it with free and easy access to a rich supply of patient data to train up healthcare AIs, which it would then be, seemingly, perfectly positioned to sell back into the selfsame service in future years. Price tbc.

No one involved in that thought had properly studied the detail of UK healthcare data regulation, clearly.

Or — most importantly — bothered to consider fundamental patient expectations about their private information.

So it was not actually surprising when, in 2018, DeepMind announced that it was stepping away from Streams — handing the app (and all its data) to Google Health — Google’s internal health-focused division — which went on to complete its takeover of DeepMind Health in 2019. (Although it was still shocking, as we opined at the time.)

It was Google Health that Suleyman suggested would be carrying forward the work to bake AI into Streams, writing at the time of the takeover that: “The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally.”

A particular irony attached to the Google Health takeover bit of the Streams saga is the fact that DeepMind had, when under fire over its intentions toward patient data, claimed people’s medical information would never be touched by its adtech parent.

Until of course it went on to hand the whole project off to Google — and then lauded the transfer as great news for clinicians and patients!

Google’s takeover of Streams meant NHS Trusts that wanted to continue using the app had to ink new contracts directly with Google Health. And all those that had rolled out the app did so — not that they had much choice if they wanted to keep using it.

Again, jump forward a couple of years and it’s Google Health now suddenly facing a major reorg — with Streams in the frame for the chop as part of Google’s perpetually reconfiguring project priorities.

It is quite the ignominious ending to an already infamous project.

DeepMind’s involvement with the NHS had previously been seized upon by the UK government — with former health secretary, Matt Hancock, trumpeting an AI research partnership between the company and Moorfields Eye Hospital as an exemplar of the kind of data-driven innovation he suggested would transform healthcare service provision in the UK.

Luckily for Hancock he didn’t pick Streams as his example of great “healthtech” innovation. (Moorfields confirmed to us that its research-focused partnership with Google Health is continuing.)

The hard lesson here appears to be don’t bet the nation’s health on an adtech giant that plays fast and loose with people’s data and doesn’t think twice about pulling the plug on digital medical devices as internal politics dictate another chair-shuffling reorg.

Patient data privacy advocacy group, MedConfidential — a key force in warning over the scope of the Royal Free’s DeepMind data-sharing deal — urged Google to ditch the spin and come clean about the Streams cock-up, once and for all.

“Streams is the Windows Vista of Google — a legacy it hopes to forget,” MedConfidential’s Sam Smith told us. “The NHS relies on trustworthy suppliers, but companies that move on after breaking things create legacy problems for the NHS, as we saw with WannaCry. Google should admit the decision, delete the data, and learn that experimenting on patients is regulated for a reason.”

Questions over Royal Free’s ongoing app use

Despite the Information Commissioner’s Office’s 2017 finding that the Royal Free’s original data-sharing deal with DeepMind was improper, it’s notable that the London Trust stuck with Streams — continuing to pass data to DeepMind.

The original patient data-set that was shared with DeepMind without a valid legal basis was never ordered to be deleted. Nor — presumably — has it since been deleted. Hence the weight of the call for Google to delete the data now.

Ironically the improperly acquired data should (in theory) finally get deleted — once contractual timeframes for any final back-up purges elapse — but only because it’s Google itself planning to switch off Streams.

And yet the Royal Free confirmed to us that it is still using Streams, even as Google spins the dial on its commercial priorities for the umpteenth time and decides it’s not interested in this particular bit of clinician support, after all.

We put a number of questions to the Trust — including about the deletion of patient data — none of which it responded to.

Instead, two days later, it sent us this one-line statement which raises plenty more questions — saying only that: “The Streams app has not been decommissioned for the Royal Free London and our clinicians continue to use it for the benefit of patients in our hospitals.”

It is not clear how long the Trust will be able to use an app Google is decommissioning. Nor how wise that might be for patient safety — such as if the app won’t get necessary security updates, for example.

We’ve also asked Google how long it will continue to support the Royal Free’s usage — and when it plans to finally switch off the service. As well as which internal group will be responsible for any SLA requests coming from the Royal Free as the Trust continues to use software Google Health is decommissioning — and will update this report with any response. (Earlier a Google spokeswoman told us the Royal Free would continue to use Streams for the ‘near future’ — but she did not offer a specific end date.)

In press reports this month on the Google Health reorg — covering an internal memo first obtained by Business Insider — teams working on various Google health projects were reportedly being split up and redistributed to other areas, including some set to report into Google’s search and AI teams.

So which Google group will take over responsibility for the handling of the SLA with the Royal Free, as a result of the Google Health reshuffle, is an interesting question.

In earlier comments, Google’s spokeswoman told us the new structure for its reconfigured health efforts — which are still being badged ‘Google Health’ — will encompass all its work in health and wellness, including Fitbit, as well as AI health research, Google Cloud and more.

On Streams specifically, she said the app hasn’t made the cut because when Google assimilated DeepMind Health it decided to focus its efforts on another digital offering for clinicians — called Care Studio — which it’s currently piloting with two US health systems (namely: Ascension & Beth Israel Deaconess Medical Center). 

And anyone who’s ever tried to use a Google messaging app will surely have strong feelings of déjà vu on reading that…

DeepMind’s co-founder, meanwhile, appears to have remained blissfully ignorant of Google’s intentions to ditch Streams in favor of Care Studio — tweeting back in 2019 as Google completed the takeover of DeepMind Health that he had been “proud to be part of this journey”, and also touting “huge progress delivered already, and so much more to come for this incredible team”.

In the end, Streams isn’t being ‘supercharged’ (or levelled up, to use current faddish political parlance) with AI — as his 2019 blog post had envisaged — Google is simply taking it out of service. Like it did with Reader or Allo or Tango or Google Play Music or… well, the list goes on.

Suleyman’s own story contains some wrinkles, too.

He is no longer at DeepMind but has himself been ‘folded into’ Google — joining as a VP of artificial intelligence policy, after initially being placed on an extended leave of absence from DeepMind.

In January, allegations that he had bullied staff were reported by the WSJ. And then, earlier this month, Business Insider expanded on that — reporting follow up allegations that there had been confidential settlements between DeepMind and former employees who had worked under Suleyman and complained about his conduct (although DeepMind denied any knowledge of such settlements).

In a statement to Business Insider, Suleyman apologized for his past behavior — and said that in 2019 he had “accepted feedback that, as a co-founder at DeepMind, I drove people too hard and at times my management style was not constructive”, adding that he had taken time out to start working with a coach and that that process had helped him “reflect, grow and learn personally and professionally”.

We asked Google if Suleyman would like to comment on the demise of Streams — and on his employer’s decision to kill the app — given his high hopes for the project and all the years of work he put into that particular health push. But the company did not engage with the request.

We also offered Suleyman the chance to comment directly. We’ll update this story if he responds.

#alphabet, #apps, #artificial-intelligence, #deepmind, #fitbit, #google, #google-health, #health, #health-systems, #healthcare, #information-commissioners-office, #london, #matt-hancock, #medconfidential, #moorfields-eye-hospital, #mustafa-suleyman, #national-health-service, #privacy, #uk-government

UK names John Edwards as its choice for next data protection chief as gov’t eyes watering down privacy standards

The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department for Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.

Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.

He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal Edwards publicly announced that he was deleting his account with the social media firm — accusing Facebook of not complying with the country’s privacy laws.

An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.

If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.

Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.

But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.

For months, now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.

Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.

A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.

The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.

Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.

Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.

“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.

The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.

If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.

It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.

We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.

Data protection experts are already warning of a regulatory stooge.

The Telegraph, meanwhile, suggests Edwards is seen by government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.

In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.

All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.

In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”

The lurking iceberg for government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its/the market’s choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.

You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…

UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.

Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.

The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.

This is because its current data adequacy deal with the bloc — which allows for EU citizens’ data to continue flowing freely to the UK — was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR. So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy. Meaning the deal could topple.

Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”.

Add to that, the adequacy deal is also the first with a baked in sunset clause — meaning it will automatically expire in four years. So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.

The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.

Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre and Colombia.

Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.

“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.

As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).

So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.

Everyone hates cookie banners, sure, but that’s a case for strengthening not weakening people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on citizens’ data.

 

#artificial-intelligence, #australia, #brazil, #colombia, #data-mining, #data-protection, #data-security, #digital-rights, #elizabeth-denham, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #human-rights, #india, #indonesia, #john-edwards, #kenya, #korea, #matt-hancock, #new-zealand, #nhs, #oliver-dowden, #privacy, #singapore, #social-issues, #social-media, #uk-government, #united-kingdom, #united-states

UK tells messaging apps not to use e2e encryption for kids’ accounts

For a glimpse of the security and privacy dystopia the UK government has in store for its highly regulated ‘British Internet’, look no further than guidance put out by the Department for Digital, Culture, Media and Sport (DCMS) yesterday — aimed at social media platforms and private messaging services — which includes the suggestion that the latter should “prevent” the use of end-to-end encryption on “child accounts”.

That’s right, the UK government is saying: ‘No end-to-end encryption for our kids please, they’re British’.

And while this is merely guidance for now, the chill is real — because legislation is already on the table.

The UK’s Online Safety Bill was published back in May, with Boris Johnson’s government setting out a sweeping plan to force platforms to regulate user generated content by imposing a legal duty to protect users from illegal (or merely just “harmful”) content.

The bill controversially bundles up requirements to report illegal stuff like child sexual exploitation content to law enforcement with far fuzzier mandates that platforms take action against a range of much-harder-to-define ‘harms’ (from cyber bullying to romance scams).

The end result looks like a sledgehammer to crack a nut. Except the ‘nut’ that could get smashed to pieces in this ministerial vice is UK Internet users’ digital security and privacy. (Not to mention any UK startups and digital businesses that aren’t on board with mass-surveillance-as-a-service.)

That’s the danger if the government follows through on its wonky idea that — on the Internet — ‘safety’ means security must be replaced with blanket surveillance in order to ‘keep kids safe’.

The Online Safety Bill is not the first wonky tech policy plan the UK has come up with. An earlier bid to force adult content providers to age verify users was dropped in 2019, having been widely criticized as unworkable as well as a massive privacy intrusion and security risk.

However, at the time, the government said it was only abandoning the ‘porn blocks’ measure because it was planning to bring forward “the most comprehensive approach possible to protecting children”. Hence the Online Safety Bill now stepping forward to push platforms to remove robust encryption in the name of ‘protecting children’.

Age verification technologies — and all sorts of content monitoring solutions (surveillance tech, doubtless badged as ‘safety’ tech) — also look likely to proliferate as a consequence of this approach.

Pushing platforms to proactively police speech and surveil usage in the hopes of preventing an ill-defined grab-bag of ‘harms’ — or, from the platforms’ perspective, to avoid the risk of eye-watering fines from the regulator if it decides they’ve failed in this ‘duty of care’ — also obviously conjures up a nightmare scenario for online freedom of expression.

Aka: ‘Watch what you type, even in the privacy of your private messaging app, because the UK Internet safety thought police are watching/might block you…’

Privacy rights for UK minors appear to be first on the chopping block, via what DCMS’ guidance refers to as “practical steps to manage the risk of online harm if your online platform allows people to interact, and to share text and other content”.

So, pretty much, any online platform with any kind of communication layer at all, then.

Letting kids have their own safe spaces to express themselves is apparently incompatible with ministers’ populist desire to brand the UK ‘the safest place to go online in the world’, as they like to spin it.

How exactly the UK will achieve safety online if government zealots force service providers to strip away robust security (e2e encryption) — torching the standard of data protection and privacy wrapping Brits’ personal information — is quite the burning question.

Albeit, it’s not one the UK government seems to have considered for even a split second.

“We’ve known for a long time that one of government’s goals for the Online Safety Bill is the restriction, if not the outright criminalisation, of the use of end-to-end encryption,” said Heather Burns, a policy manager for the digital rights organization Open Rights Group (ORG), one of many vocal critics of the government’s approach — discussing the wider implications of the policy push with TechCrunch.

“Recent messaging strategies promoted by government and the media have openly sought to associate end-to-end encryption with child abuse, and to imply that companies which use it are aiding and abetting child exploitation. So DCMS’s newly-published guidance advising the voluntary removal of encryption from children’s accounts is a precursor to it becoming a likely legal requirement.

“It’s also part of government’s drive, again as part of the Online Safety Bill, to require all services to implement mandatory age verification on all users, for all content or applications, in order to identify child users, in order to withhold encryption from them, thanks to aggressive lobbying from the age verification industry.”

That ministerial rhetoric around the Online Safety Bill is heavy on tub-thumping emotional appeals (to ‘protect our children from online nasties’) and low on sequential logic or technological coherence is not a surprise: Successive Conservative governments have, after all, had a massive bee in their bonnets about e2e encryption — dating back to the David Cameron years.

Back then ministers were typically taking aim at strong encryption on counter-terrorism grounds, arguing the tech is bad because it prevents law enforcement from catching terrorists. (And they went on to pass beefed up surveillance laws which also include powers to limit the use of robust encryption.)

However, under more recent PMs Theresa May and Boris Johnson, the child protection rhetoric has stepped up too — to the point where messaging channels are now being actively encouraged not to use e2e encryption altogether.

Next stop: State-sanctioned commercial mass surveillance. And massive risks for all UK Internet users subject to this anti-security, anti-privacy ‘safety’ regime.

“Despite government’s claim that the Bill will make the UK ‘the safest place in the world to be online’, restricting or criminalising encryption will actually make the UK an unsafe place for any company to do business,” warned Burns. “We will all need to resort to VPNs and foreign services, as happens in places like China, in order to keep our data safe. It’s likely that many essential services will block UK customers, or leave the UK altogether, rather than be compelled to act as a privatised nanny state over insecure data flows.”

In a section of the DCMS guidance entitled “protect children by limiting functionality”, the government department literally suggests that “private channels” (i.e. services like messaging apps) “prevent end-to-end encryption for child accounts”. And since accurately identifying the age of online users remains a challenge it follows that in-scope services may simply decide it’s less legally risky if they don’t use e2e at all.

DCMS’s guidance also follows up with an entirely bolded paragraph — in which the government then makes a point of highlighting e2e encryption as a “risk” to users, generally — and, therefore by implication, to future compliance with the forthcoming Online Safety legislation…

“End-to-end encryption makes it more difficult for you to identify illegal and harmful content occurring on private channels. You should consider the risks this might pose to your users,” the UK government writes, emphasis its own.

Whether anything can stop this self-destructive policy train now it’s left the Downing Street station is unclear. Johnson has a whopping majority in parliament — and years left before he has to call a general election.

The only thing that could derail the most harmful elements of the Online Safety Bill is if the UK public wakes up to the dangers it poses to everyone’s security and privacy — and if enough MPs take notice and push for amendments.

Earlier this month the ORG, along with some 30 other digital and human rights groups, called on MPs to do just that and “help keep constituents’ data safe by protecting e2e encryption from legislative threats” — warning that this “basic and essential” security protocol is at risk from clauses in the bill that introduce requirements for companies to scan private and personal messages for evidence of criminal wrongdoing.

Zero access encryption is seen by the UK government as a blocker to such scanning.

“In order to do this, the use of end-to-end encryption is likely to be defined as a violation of the law,” the ORG also warned. “And companies operating in the UK who want to continue to defend user privacy through end-to-end encryption could, under the draft Bill, be threatened with partial shutdowns, being blocked from the UK, or even personal arrests.”

“We call on Parliament to ensure that end-to-end encryption must not be threatened or undermined by the Online Safety Bill, and that services utilising strong encryption are left out of the Bill’s content monitoring and filtering requirements,” it added in the online appeal.

DCMS has been contacted with questions on the logic of the government’s policy toward e2e encryption.

In a statement yesterday, the digital minister Caroline Dinenage said: “We’re helping businesses get their safety standards up to scratch before our new online harms laws are introduced and also making sure they are protecting children and users right now.

“We want businesses of all sizes to step up to a gold standard of safety online and this advice will help them to do so.”

#boris-johnson, #computer-security, #cryptography, #data-protection, #data-security, #e2e-encryption, #encryption, #end-to-end-encryption, #europe, #human-rights, #law-enforcement, #online-freedom, #online-safety-bill, #open-rights-group, #policy, #privacy, #security, #social-media-platforms, #telecommunications, #uk-government, #united-kingdom

UK gets data flows deal from EU — for now

The UK’s digital businesses can breathe a sigh of relief today as the European Commission has officially signed off on data adequacy for the (now) third country, post-Brexit.

It’s a big deal for UK businesses as it means the country will be treated by Brussels as having essentially equivalent data protection rules as markets within the bloc, despite no longer being a member itself — enabling personal data to continue to flow freely from the EU to the UK, and avoiding any new legal barriers.

The granting of adequacy status has been all but assured in recent weeks, after European Union Member States signed off on a draft adequacy arrangement. But the Commission’s adoption of the decision marks the final step in the process — at least for now.

It’s notable that the Commission’s PR includes a clear warning that if the UK seeks to weaken protections afforded to people’s data under the current regime it “will intervene”.

In a statement, Věra Jourová, Commission VP for values and transparency, said:

“The UK has left the EU but today its legal regime of protecting personal data is as it was. Because of this, we are adopting these adequacy decisions today. At the same time, we have listened very carefully to the concerns expressed by the Parliament, the Member States and the European Data Protection Board, in particular on the possibility of future divergence from our standards in the UK’s privacy framework. We are talking here about a fundamental right of EU citizens that we have a duty to protect. This is why we have significant safeguards and if anything changes on the UK side, we will intervene.”

The UK adequacy decision comes with a Sword of Damocles baked in: A sunset clause of four years. It’s a first — so, er, congratulations to the UK government for projecting a perception of itself as untrustworthy over the short run.

This clause means the UK’s regime will face full scrutiny again in 2025, with no automatic continuation if its standards are deemed to have slipped (as many fear they will).

The Commission also emphasizes that its decision does not mean the UK has four ‘guaranteed’ years in the clear. On the contrary, it says it will “continue to monitor the legal situation in the UK and could intervene at any point, if the UK deviates from the level of protection currently in place”.

Third countries without an adequacy agreement — such as the US, which has had adequacy arrangements twice struck down by Europe’s top court (after it found US surveillance law incompatible with EU fundamental rights) — do not enjoy ‘seamless’ legal certainty around personal data flows; and must instead take steps to assess each of these transfers individually to determine whether (and how) they can move data legally.

Last week, the European Data Protection Board (EDPB) put out its final bit of guidance for third countries wanting to transfer personal data outside the bloc. And the advice makes it clear that some types of transfers are unlikely to be possible.

For other types of transfers, the advice discusses a number of supplementary measures (including technical steps like robust encryption) that may be possible for a data controller to use in order to, through their own technical, contractual and organizational effort, ramp up the level of protection to achieve the required standard.

It is, in short, a lot of work. And without today’s adequacy decision UK businesses would have had to get intimately acquainted with the EDPB’s guidance. For now, though, they’ve dodged that bullet.

The qualifier is still very necessary, though, because the UK government has signalled that it intends to rethink data protection.

How exactly it goes about that — and to what extent it changes the current ‘essentially equivalent’ regime — may make all the difference. For example, Digital minister Oliver Dowden has talked about data being “a great opportunity” for the UK, post-Brexit.

And writing in the FT back in February he suggested there will be room for the UK to rewrite its national data protection rules without diverging so much that it puts adequacy at risk. “We fully intend to maintain those world-class standards. But to do so, we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation, word-for-word,” he suggested then, adding that: “Countries as diverse as Israel and Uruguay have successfully secured adequacy with Brussels despite having their own data regimes. Not all of those were identical to GDPR, but equal doesn’t have to mean the same. The EU doesn’t hold the monopoly on data protection.”

The devil will, as they say, be in the detail. But some early signals are concerning — and the UK’s startup ecosystem would be well advised to take an active role in impressing upon government the importance of staying aligned with European data standards.

Moreover, there’s also the prospect of a legal challenge to the adequacy decision — even as is, i.e. based on current UK standards (which find plenty of critics). Certainly it can’t be ruled out — and the CJEU hasn’t shied away from quashing other adequacy arrangements it judged to be invalid…

Today, though, the Department for Digital, Culture, Media and Sport (DCMS) has seized the chance to celebrate a PR win, writing that the Commission’s decision “rightly recognises the country’s high data protection standards”.

The department also reiterated the UK government’s intention to “promote the free flow of personal data globally and across borders”, including through what it bills as “ambitious new trade deals and through new data adequacy agreements with some of the fastest growing economies” — simultaneously claiming it would do so “while ensuring people’s data continues to be protected to a high standard”. Pinky promise.

“All future decisions will be based on what maximises innovation and keeps up with evolving tech,” the DCMS added in a press release. “As such, the government’s approach will seek to minimise burdens on organisations seeking to use data to tackle some of the most pressing global issues, including climate change and the prevention of disease.”

In a statement, Dowden also made a point of combining both streams, saying: “We will now focus on unlocking the power of data to drive innovation and boost the economy while making sure we protect people’s safety and privacy.”

UK business and tech associations were just as quick to welcome the Commission’s adequacy decision. The alternative would of course have been very costly disruption.

In a statement, John Foster, director of policy for the Confederation of British Industry, said: “This breakthrough in the EU-UK adequacy decision will be welcomed by businesses across the country. The free flow of data is the bedrock of the modern economy and essential for firms across all sectors — from automotive to logistics — playing an important role in everyday trade of goods and services. This positive step will help us move forward as we develop a new trading relationship with the EU.”

In another supporting statement, Julian David, CEO of techUK, added: “Securing an EU-UK adequacy decision has been a top priority for techUK and the wider tech industry since the day after the 2016 referendum. The decision that the UK’s data protection regime offers an equivalent level of protection to the EU GDPR is a vote of confidence in the UK’s high data protection standards and is of vital importance to UK-EU trade as the free flow of data is essential to all business sectors.

“The data adequacy decision also provides a basis for the UK and EU to work together on global routes for the free flow of data with trust, building on the G7 Digital and Technology declaration and possibly unlocking €2TR of growth. The UK must also now move to complete the development of its own international data transfer regime in order to allow companies in the UK not just to exchange data with the EU but also to be able to access opportunities across the world.”

The Commission has actually adopted two UK adequacy decisions today — one under the General Data Protection Regulation (GDPR) and another for the Law Enforcement Directive.

Discussing key elements in its decision to grant the UK adequacy, EU lawmakers highlighted the fact the UK’s (current) system is based upon transposed European rules; that access to personal data by public authorities in the UK (such as for national security reasons) is done under a framework that has what it dubbed as “strong safeguards” (such as intercepts being subject to prior authorisation by an independent judicial body; measures needing to be necessary and proportionate; and redress mechanisms for those who believe they are subject to unlawful surveillance).

The Commission also noted that the UK is subject to the jurisdiction of the European Court of Human Rights; must adhere to the European Convention of Human Rights; and the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data — aka “the only binding international treaty in the area of data protection”.

“These international commitments are an essential element of the legal framework assessed in the two adequacy decisions,” the Commission notes.

Data transfers for the purposes of UK immigration control have been excluded from the scope of the adequacy decision adopted under the GDPR — with the Commission saying that’s “in order to reflect a recent judgment of the England and Wales Court of Appeal on the validity and interpretation of certain restrictions of data protection rights in this area”.

“The Commission will reassess the need for this exclusion once the situation has been remedied under UK law,” it added.

So, again, there’s another caveat right there.

#brexit, #data-controller, #data-protection, #data-security, #encryption, #europe, #european-commission, #european-court-of-human-rights, #european-data-protection-board, #european-union, #general-data-protection-regulation, #oliver-dowden, #personal-data, #policy, #privacy, #surveillance-law, #uk-government, #united-kingdom, #united-states

Perspectives on tackling Big Tech’s market power

The need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power was the impetus for a couple of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR), which were livestreamed yesterday but are available to view on-demand here.

The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when fresh blood has just been injected to chair the FTC so regulatory change is very much in the air (at least around tech antitrust).

CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.

That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out interesting new competition (which can’t get the same access to people’s data so operates at a baked in disadvantage).

A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.

The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).

The UK’s CMA now sits outside the EU, of course — giving the national authority a higher profile role in global merger & acquisition decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).

The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.

Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.

Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions around large digital platforms with major competitive significance — with open procedures now against Amazon, Facebook and Google.

The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single high-dimension per-user profile — is now headed to Europe’s top court (so likely has more years to run).

But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)

“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case and in a certain sense this entwinement of competition law and data protection law is written in a theory of harm in the German competition law.

“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”

“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.

“So Facebook has always been a big case — it might get even bigger in a certain sense.”

France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.

Including over a competition complaint against Apple’s pro-user privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there too of oversight bodies seeking to bridge legal silos in order to work out how to effectively regulate tech giants — whose market power, panellists agreed, is predicated on earlier failures of competition law enforcement that allowed platforms to buy up rivals and sew up access to user data, entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.

The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).

Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.

The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest control back from tech giants to ensure that consumers and competitors aren’t both left trampled into the dust by data-mining giants.

Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.

“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.

“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”

During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.

“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”

“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.

“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”

Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”

Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine the agency hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, chief deputy attorney general at the New York Attorney General’s office, which is leading a major states antitrust case against Facebook.

Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.

She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition where companies have, historically, been given — at best — what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure. And be prepared to litigate more.

As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.

“That is what is most galling to me that even where we take action, in our best faith good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking isn’t working. We need different action to keep us from having the same conversation again and again.”

Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.

“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.

“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.

“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement but I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”

As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.

So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling round the edges of digital market failure, and it’s break-ups of US tech giants that are really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)

The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.

The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to wave through another Google digital merger (its acquisition of fitness wearable Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.

Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).

“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.

“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.

“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”

The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenge of policing “very complicated” situations involving conflicts of interest is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.

(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had Brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)

Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”

“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.

“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.

Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.

It’s true that the Commission is not sitting on its hands in the face of tech giant market power.

At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)

Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.

“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”

“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”

One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.


UK’s ICO warns over ‘big data’ surveillance threat of live facial recognition in public

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance in public — to set out what it dubs the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company, Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping in — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of the tech.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide-ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion — including the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an Opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK having a data adequacy agreement from the EU is dependent on the UK having essentially equivalent protections for people’s data. Without this coveted data adequacy status UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.

 


UK’s CMA opens market study into Apple, Google’s mobile “duopoly”

The UK’s competition watchdog will take a deep dive look into Apple and Google’s dominance of the mobile ecosystem, it said today — announcing a market study which will examine the pair’s respective smartphone platforms (iOS and Android); their app stores (App Store and Play Store); and web browsers (Safari and Chrome). 

The Competition and Markets Authority (CMA) is concerned that the mobile platform giants’ “effective duopoly” in those areas  might be harming consumers, it added.

The study will be wide-ranging, with the watchdog concerned about the nested gateways that are created as a result of the pair’s dominance of the mobile ecosystem — intermediating how consumers can access a variety of products, content and services (such as music, TV and video streaming; fitness tracking, shopping and banking, to cite some of the examples provided by the CMA).

“These products also include other technology and devices such as smart speakers, smart watches, home security and lighting (which mobiles can connect to and control),” it went on, adding that it’s looking into whether their dominance of these pipes is “stifling competition across a range of digital markets”, saying too that it’s “concerned this could lead to reduced innovation across the sector and consumers paying higher prices for devices and apps, or for other goods and services due to higher advertising prices”.

The CMA further confirmed the deep dive will examine “any effects” of the pair’s market power over other businesses — giving the example of app developers who rely on Apple or Google to market their products to customers via their smart devices.

The watchdog already has an open investigation into Apple’s App Store, following a number of antitrust complaints by developers.

It is investigating Google’s planned deprecation of third-party tracking cookies too, after complaints by adtech companies and publishers that the move could harm competition. (And just last week the CMA said it was minded to accept a series of concessions offered by Google that would enable the regulator to stop it turning off support for cookies entirely if it believes the move will harm competition.)

The CMA said both those existing investigations are examining issues that fall within the scope of the new mobile ecosystem market study but that its work on the latter will be “much broader”.

It added that it will adopt a joined-up approach across all related cases — “to ensure the best outcomes for consumers and other businesses”.

It’s giving itself a full year to examine Gapple’s mobile ecosystems.

It is also soliciting feedback on any of the issues raised in its statement of scope — calling for responses by 26 July. The CMA added that it’s also keen to hear from app developers, via its questionnaire, by the same date.

Taking on tech giants

The watchdog has previously scrutinized the digital advertising market — and found plenty to be concerned about vis-a-vis Google’s dominance there.

That earlier market study has been feeding the UK government’s plan to reform competition rules to take account of the market-deforming power of digital giants. And the CMA suggested the new market study, examining ‘Gapple’s’ mobile muscle, could similarly help shape UK-wide competition law reforms.

Last year the UK announced its plan to set up a “pro-competition” regime for regulating Internet platforms — including by establishing a dedicated Digital Markets Unit within the CMA (which got going earlier this year).

The legislation for the reform has not yet been put before parliament but the government has said it wants the competition regulator to be able to “proactively shape platforms’ behavior” to avoid harmful behavior before it happens — saying too that it supports enabling ex ante interventions once a platform has been identified to have so-called “strategic market status”.

Germany already adopted similar reforms to its competition law (early this year), which enable proactive interventions to tackle large digital platforms with what is described as “paramount significance for competition across markets”. And its Federal Cartel Office has, in recent months, wasted no time in opening a number of proceedings to determine whether Amazon, Google and Facebook have such a status.

The CMA also sounds keen to get going on tackling Internet gatekeepers.

Commenting in a statement, CEO Andrea Coscelli said:

“Apple and Google control the major gateways through which people download apps or browse the web on their mobiles – whether they want to shop, play games, stream music or watch TV. We’re looking into whether this could be creating problems for consumers and the businesses that want to reach people through their phones.

“Our ongoing work into big tech has already uncovered some worrying trends and we know consumers and businesses could be harmed if they go unchecked. That’s why we’re pressing on with launching this study now, while we are setting up the new Digital Markets Unit, so we can hit the ground running by using the results of this work to shape future plans.”

The European Union also unveiled its own proposals for clipping the wings of big tech last year — presenting its Digital Markets Act plan in December which will apply a single set of operational rules to so-called “gatekeeper” platforms operating across the EU.

The clear trend in Europe on digital competition is toward increasing oversight and regulation of the largest platforms — in the hopes that antitrust authorities can impose measures that will help smaller players thrive.

Critics might say that’s just playing into the tech giants’ hands, though — because it’s fiddling around the edges when more radical interventions (break-ups) are what’s really needed to reboot captured markets.

Apple and Google were contacted for comment on the CMA’s market study.

A Google spokesperson said: “Android provides people with more choice than any other mobile platform in deciding which apps they use, and enables thousands of developers and manufacturers to build successful businesses. We welcome the CMA’s efforts to understand the details and differences between platforms before designing new rules.”

According to Google, the Android App Economy generated £2.8BN in revenue for UK developers last year, which it claims supported 240,000 jobs across the country — citing a Public First report that it commissioned.

The tech giant also pointed to operational changes it has already made in Europe, following antitrust interventions by the European Commission — such as adding a choice screen to Android where users can pick from a list of alternative search engines.

Earlier this month it agreed to shift the format underlying that choice screen from an unpopular auction model to free participation.


Google won’t end support for tracking cookies unless UK’s competition watchdog agrees

Well this is big. The UK’s competition regulator looks set to get an emergency brake that will allow it to stop Google ending support for third-party cookies, a technology that’s currently used for targeting online ads, if it believes competition would be harmed by the deprecation going ahead.

The development follows an investigation opened by the Competition and Markets Authority (CMA) into Google’s self-styled ‘Privacy Sandbox’ earlier this year.

The regulator will have the power to order a standstill of at least 60 days on any move by Google to remove support for cookies from Chrome if it accepts a set of legally binding commitments the latter has offered — and which the regulator has today issued a notification of intention to accept.

The CMA could also reopen a fuller investigation if it’s not happy with how things are looking at the point it orders any standstill to stop Google crushing tracking cookies.

It follows that the watchdog could also block Google’s wider ‘Privacy Sandbox’ technology transition entirely — if it decides the shift cannot be done in a way that doesn’t harm competition. However the CMA said today it takes the “provisional” view that the set of commitments Google has offered will address competition concerns related to its proposals.

It’s now opened a consultation to see if the industry agrees — with the feedback line open until July 8.

Commenting in a statement, Andrea Coscelli, the CMA’s chief executive, said:

“The emergence of tech giants such as Google has presented competition authorities around the world with new challenges that require a new approach.

“That’s why the CMA is taking a leading role in setting out how we can work with the most powerful tech firms to shape their behaviour and protect competition to the benefit of consumers.

“If accepted, the commitments we have obtained from Google become legally binding, promoting competition in digital markets, helping to protect the ability of online publishers to raise money through advertising and safeguarding users’ privacy.”

In a blog post sketching what it’s pledged — under three broad headlines of ‘Consultation and collaboration’; ‘No data advertising advantage for Google products’; and ‘No self-preferencing’ — Google writes that if the CMA accepts its commitments it will “apply them globally”, making the UK’s intervention potentially hugely significant.

It’s perhaps one slightly unexpected twist of Brexit that it’s put the UK in a position to be taking key decisions about the rules for global digital advertising. (The European Union is also working on new rules for how platform giants can operate but the CMA’s intervention on Privacy Sandbox does not yet have a direct equivalent in Brussels.)

That Google is choosing to offer to turn a UK competition intervention into a global commitment is itself very interesting. It may be there in part as an added sweetener — nudging the CMA to accept the offer so it can feel like a global standard setter.

At the same time, businesses do love operational certainty. So if Google can hash out a set of rules that are accepted by one (fairly) major market, because they’ve been co-designed with national oversight bodies, and then scale those rules everywhere it may create a shortcut path to avoiding any more regulator-enforced bumps in the future.

So Google may see this as a smoother path toward the sought-for transition of its adtech business to a post-cookie future. Of course it also wants to avoid being ordered to stop entirely.

More broadly, engaging with the fast-paced UK regulator could be a strategy for Google to try to surf over the political deadlocks and risks which can characterize discussions on digital regulation in other markets (especially its home turf of the U.S. — where there has been a growing drumbeat of calls to break up tech giants; and where Google specifically now faces a number of antitrust investigations).

The outcome it may be hoping for is being able to point to regulator-stamped ‘compliance’ — in order that it can claim it as evidence there’s no need for its ad empire to be broken up.

Google’s offering of commitments also signifies that regulators who move fastest to tackle the power of tech giants will be the ones helping to define and set the standards and conditions that apply for web users everywhere. At least unless or until any more radical interventions rain down on big tech.

What is Privacy Sandbox?

Privacy Sandbox is a complex stack of interlocking technology proposals for replacing current ad tracking methods (which are widely seen as horrible for user privacy) with alternative infrastructure that Google claims will be better for individual privacy while still allowing the adtech and publishing industries to generate (it claims) much the same revenue by targeting ads at cohorts of web users — who will be put into ‘interest buckets’ based on what they look at online.

The full details of the proposals (which include components like FLoCs, aka Google’s proposed new ad ID based on federated learning of cohorts; and Fledge/Turtledove, Google’s suggested new ad delivery technology) have not yet been set in stone.
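To make the cohort idea concrete, here’s a minimal, purely illustrative Python sketch of the difference between per-user tracking and cohort-based targeting: the browser reduces a user’s recent interests to a coarse cohort label shared by many other people, and only that label travels with the ad request. To be clear, this is not Google’s actual FLoC algorithm (which proposed SimHash-style clustering of browsing history inside Chrome); the bucket count, function names and logic below are invented for illustration only.

```python
import hashlib

# Purely illustrative: reduce a user's recent browsing interests to a coarse
# cohort label. (Google's real FLoC proposal used SimHash-style clustering of
# browsing history in the browser; this toy version just hashes sorted topics.)
COHORT_BUCKETS = 2 ** 12  # a few thousand cohorts, so each is shared by many users


def assign_cohort(interest_topics: list[str]) -> int:
    """Map a set of interests to one of COHORT_BUCKETS cohort IDs (hypothetical logic)."""
    canonical = "|".join(sorted({t.lower() for t in interest_topics}))
    digest = hashlib.sha256(canonical.encode()).digest()
    return int.from_bytes(digest[:4], "big") % COHORT_BUCKETS


def build_ad_request(cohort_id: int, page_url: str) -> dict:
    # Under cohort targeting the ad request carries only the cohort ID --
    # not a per-user identifier that third parties could join up across sites.
    return {"cohort": cohort_id, "page": page_url}


if __name__ == "__main__":
    alice = assign_cohort(["running shoes", "marathons", "fitness trackers"])
    bob = assign_cohort(["marathons", "fitness trackers", "running shoes"])
    print(alice == bob)  # users with the same interests land in the same cohort
    print(build_ad_request(alice, "https://example-news.site/article"))
```

Even in this toy form the trade-off being pitched is visible: advertisers see a bucket shared by many people rather than an identifier that can be joined up across sites — which is exactly where the competition questions about who computes the buckets, and on what data, come in.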

Nonetheless, Google announced in January 2020 that it intended to end support for third party cookies within two years — so that rather nippy timeframe has likely concentrated opposition, with pushback coming from the adtech industry and (some) publishers who are concerned it will have a major impact on their ad revenues when individual-level ad targeting goes away.

The CMA began to look into Google’s planned deprecation of tracking cookies after complaints that the transition to a new infrastructure of Google’s devising will merely increase Google’s market power — by locking down third parties’ ability to track Internet users for ad targeting while leaving Google with a high-dimension view of what people get up to online as a result of its expansive access to first party data (gleaned through its dominance of consumer web services).

The executive summary of today’s CMA notice lists its concerns that, without proper regulatory oversight, Privacy Sandbox might:

  • distort competition in the market for the supply of ad inventory and in the market for the supply of ad tech services, by restricting the functionality associated with user tracking for third parties while retaining this functionality for Google;
  • distort competition by the self-preferencing of Google’s own advertising products and services and owned and operated ad inventory; and
  • allow Google to exploit its apparent dominant position by denying Chrome web users substantial choice in terms of whether and how their personal data is used for the purpose of targeting and delivering advertising to them.

At the same time, privacy concerns around the ad tracking and targeting of Internet users are undoubtedly putting pressure on Google to retool Chrome (which of course dominates web browser marketshare) — given that other web browsers have been stepping up efforts to protect their users from online surveillance by doing stuff like blocking trackers for years.

Web users hate creepy ads — which is why they’ve been turning to ad blockers in droves. Numerous major data scandals have also increased awareness of privacy and security. And — in Europe and elsewhere — digital privacy regulations have been toughened up or introduced in recent years. So the line of ‘what’s acceptable’ for ad businesses to do online has been shifting.

But the key issue here is how privacy and competition regulation interacts — and potentially conflicts — with the very salient risk that ill-thought through and overly blunt competition interventions could essentially lock in privacy abuses of web users (as a result of a legacy of weak enforcement around online privacy, which allowed for rampant, consent-less ad tracking and targeting of Internet users to develop and thrive in the first place).

Poor privacy enforcement coupled with banhammer-wielding competition regulators does not look like a good recipe for protecting web users’ rights.

However there is cautious reason for optimism here.

Last month the CMA and the UK’s Information Commissioner’s Office (ICO) issued a joint statement in which they discussed the importance of considering competition and data protection together in digital markets — citing the CMA’s Google Privacy Sandbox probe as a good example of a case that requires nuanced joint working.

Or, as they put it then: “The CMA and the ICO are working collaboratively in their engagement with Google and other market participants to build a common understanding of Google’s proposals, and to ensure that both privacy and competition concerns can be addressed as the proposals are developed in more detail.”

Although the ICO’s record on enforcement against rights-trampling adtech is, well, non-existent. So its preference for regulatory inaction in the face of adtech industry lobbying should offset any quantum of optimism derived from the bald fact of the UK’s privacy and competition regulators’ ‘joint working’.

(The CMA, by contrast, has been very active in the digital space since gaining, post-Brexit, wider powers to pursue investigations. And in recent years it took a deep dive look at competition in the digital ad market, so it’s armed with plenty of knowledge. It is also in the process of configuring a new unit that will oversee a pro-competition regime with which the UK explicitly wants to clip the wings of big tech.)

What has Google committed to?

The CMA writes that Google has made “substantial and wide-ranging” commitments vis-a-vis Privacy Sandbox — which it says include:

  • A commitment to develop and implement the proposals in a way that avoids distortions to competition and the imposition of unfair terms on Chrome users. This includes a commitment to involve the CMA and the ICO in the development of the Proposals to ensure this objective is met.
  • Increased transparency from Google on how and when the proposals will be taken forward and on what basis they will be assessed. This includes a commitment to publicly disclose the results of tests of the effectiveness of alternative technologies.
  • Substantial limits on how Google will use and combine individual user data for the purposes of digital advertising after the removal of third-party cookies.
  • A commitment that Google will not discriminate against its rivals in favour of its own advertising and ad-tech businesses when designing or operating the alternatives to third-party cookies.
  • A standstill period of at least 60 days before Google proceeds with the removal of third-party cookies, giving the CMA the opportunity, if any outstanding concerns cannot be resolved with Google, to reopen its investigation and, if necessary, impose any interim measures necessary to avoid harm to competition.

Google also writes that: “Throughout this process, we will engage the CMA and the industry in an open, constructive and continuous dialogue. This includes proactively informing both the CMA and the wider ecosystem of timelines, changes and tests during the development of the Privacy Sandbox proposals, building on our transparent approach to date.”

“We will work with the CMA to resolve concerns and develop agreed parameters for the testing of new proposals, while the CMA will be getting direct input from the ICO,” it adds.

Google’s commitments cover a number of areas directly related to competition — such as self-preferencing, non-discrimination, and stipulations that it will not combine user data from specific sources that might give it an advantage vs third parties.

However privacy is also being explicitly baked into the competition consideration, here, per the CMA — which writes that the commitments will [emphasis ours]:

Establish the criteria that must be taken into account in designing, implementing and evaluating Google’s Proposals. These include the impact of the Privacy Sandbox Proposals on: privacy outcomes and compliance with data protection principles; competition in digital advertising and in particular the risk of distortion to competition between Google and other market participants; the ability of publishers to generate revenue from ad inventory; and user experience and control over the use of their data.

An ICO spokeswoman was also keen to point out that one of the first commitments obtained from Google under the CMA’s intervention “focuses on privacy and data protection”.

In a statement, the data watchdog added:

“The commitments obtained mark a significant moment in the assessment of the Privacy Sandbox proposals. They demonstrate that consumer rights in digital markets are best protected when competition and privacy are considered together.

“As we outlined in our recent