Never before have so many countries, including China, moved with such vigor at the same time to limit the power of a single industry.
European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.
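The headline penalty formula (up to 4% of global annual turnover, or €20M if that figure is greater) works out as a simple maximum of the two values. A minimal illustrative sketch; the turnover figures used are hypothetical, not from the draft:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine under the leaked draft's formula for prohibited
    use-cases: 4% of global annual turnover, or EUR 20M if greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# A smaller firm with EUR 100M turnover: 4% would be EUR 4M,
# so the EUR 20M floor applies.
print(max_fine_eur(100_000_000))     # 20000000

# A large firm with EUR 80B turnover: 4% is EUR 3.2B,
# which exceeds the floor.
print(max_fine_eur(80_000_000_000))  # 3200000000.0
```

The same "percentage of turnover or fixed floor, whichever is higher" construction is familiar from the GDPR's fine regime.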
The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.
At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.
Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.
The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.
Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.
Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.
Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.
“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use and — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.
“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.
Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”
Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met such systems would not be barred from the EU market under the legislative plan.
Other requirements cover security and consistency of accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.
“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.
“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.
“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”
Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.
AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.
A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.
On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.
The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”
It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a draft leaked early last year, before the subsequent White Paper steered away from a ban.
In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).
“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”
AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.
Conformity assessment is also envisaged as an ongoing requirement for high risk AIs, not a one-off, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”
“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.
The carrot for compliant businesses is getting to display a ‘CE’ mark to help them win the trust of users and gain friction-free access across the bloc’s single market.
“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”
As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market and conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.
It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).
“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.
“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).
So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.
We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.
“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.
The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.
“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.
The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.
A protocol mishap involving Ursula von der Leyen, the European Commission president, was cited by critics as symbolic of Turkey’s treatment of women. It also underlined divisions within the European Union.
The European Union may investigate Facebook’s $1BN acquisition of customer service platform Kustomer after concerns were referred to it under EU merger rules.
A spokeswoman for the Commission confirmed it received a request to refer the proposed acquisition from Austria under Article 22 of the EU’s Merger Regulation — a mechanism which allows Member States to flag a proposed transaction that’s not notifiable under national filing thresholds (e.g. because the turnover of one of the companies is too low for a formal notification).
The Commission spokeswoman said the case was notified in Austria on March 31.
“Following the receipt of an Article 22 request for referral, the Commission has to transmit the request for referral to other Member States without delay, who will have the right to join the original referral request within 15 working days of being informed by the Commission of the original request,” she told us, adding: “Following the expiry of the deadline for other Member States to join the referral, the Commission will have 10 working days to decide whether to accept or reject the referral.”
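The Article 22 clock the spokeswoman describes (15 working days for other Member States to join the referral, then 10 working days for the Commission to decide) is a working-day calculation. A simplified sketch: it counts Monday to Friday only, ignores official Commission holidays, and the start date is hypothetical:

```python
from datetime import date, timedelta

def add_working_days(start: date, n: int) -> date:
    """Advance n working days from start, counting Mon-Fri only.

    Simplified: real Commission deadlines also exclude official
    holidays, which this sketch ignores."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            n -= 1
    return d

# Hypothetical start: Member States informed on Thursday April 8, 2021.
informed = date(2021, 4, 8)
join_deadline = add_working_days(informed, 15)        # others may join
decision_deadline = add_working_days(join_deadline, 10)  # accept/reject
print(join_deadline)      # 2021-04-29
print(decision_deadline)  # 2021-05-13
```

On those assumptions the whole referral question would be settled roughly five weeks after Member States are informed, which matches the "few weeks" timeline noted below.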
We’ll know in a few weeks whether or not the European Commission will take a look at the acquisition — an option that could see the transaction stalled for months, delaying Facebook’s plans for integrating Kustomer’s platform into its empire.
Facebook and Kustomer have been contacted for comment on the development.
The tech giant’s planned purchase of the customer relations management platform was announced last November and quickly raised concerns over what Facebook might do with any personal data held by Kustomer — which could include sensitive information, given sectors served by the platform include healthcare, government and financial services, among others.
Back in February, the Irish Council for Civil Liberties (ICCL) wrote to the Commission and national and EU data protection agencies to raise concerns about the proposed acquisition — urging scrutiny of the “data processing consequences”, and highlighting how Kustomer’s terms allow it to process user data for very wide-ranging purposes.
“Facebook is acquiring this company. The scope of ‘improving our Services’ [in Kustomer’s terms] is already broad, but is likely to grow broader after Kustomer is acquired,” the ICCL warned. “‘Our Services’ may, for example, be taken to mean any Facebook services or systems or projects.”
“The settled caselaw of the European Court of Justice, and the European data protection board, [holds] that ‘improving our services’ and similarly vague statements do not qualify as a ‘processing purpose’,” it added.
The ICCL also said it had written to Facebook asking for confirmation of the post-acquisition processing purposes for which people’s data will be used.
Johnny Ryan, senior fellow at the ICCL, confirmed to TechCrunch it has not had any response from Facebook to those questions.
We’ve also asked Facebook to confirm what it will do with any personal data held on users by Kustomer once it owns the company — and will update this report with any response.
In a separate (recent) episode — involving Google — its acquisition of wearable maker Fitbit went through months of competition scrutiny in the EU and was only cleared by regional regulators after the tech giant made a number of concessions, including committing not to use Fitbit data for ads for ten years.
Until now Facebook’s acquisitions have generally flown under regulators’ radar, including, around a decade ago, when it was sewing up the social space by buying up rivals Instagram and WhatsApp.
Several years later it was forced to pay a fine in the EU over a ‘misleading’ filing — after it combined WhatsApp and Facebook data, despite having told regulators it could not do so.
With so many data scandals now inextricably attached to Facebook, the tech giant is saddled with customer mistrust by default and faces far greater scrutiny of how it operates — which is now threatening to inject friction into its plans to expand its b2b offering by acquiring a CRM player. So after ‘move fast and break things’ Facebook is having to move slower because of its reputation for breaking stuff.
The European Union’s failure to secure adequate vaccine supplies, followed by an export ban, has dented the reputation of the bloc’s leaders. It may also hurt their ability to act in other areas.
While Washington went into business with the drug companies, Europe was more fiscally conservative and trusted the free market.
The bloc is a major producer of shots but has struggled with its rollout, fueling disputes with Britain and other allies. In a sign of the frustration, Italy recently blocked a shipment to Australia.
Europe’s collective vaccine purchase is an experiment in deeper integration. Despite a rocky start, many countries still stand to benefit, but it’s the most powerful who have least to gain.
Ursula von der Leyen has largely stayed away from the limelight while driving the handling of a crisis and letting subordinates take the blame.
Already criticized for a slow rollout for its 27 members, Brussels retreated on export controls linked to Ireland and Brexit.
Alan Cowell, a longtime New York Times correspondent, recalls a different Europe, one of currency controls, cumbersome paperwork and burdensome cross-border regulations.
Beijing and Brussels were on the brink of an agreement to roll back restrictions on investment. But the deal’s fate is uncertain amid growing animosity toward China and increasingly vocal opposition.
In a landmark collective undertaking, the bloc is poised to start distributing shots to all 27 member nations and their 410 million citizens.
Silicon Valley is building a powerful influence industry in Brussels, which has “never seen this kind of money” spent this way.
The agreement calls for European Union countries to cut their collective greenhouse gas emissions by 55 percent from 1990 levels, a more substantial reduction than previously proposed.
The measures, which require British agreement, would cover air and road travel, freight and fishing for six months, to prevent immediate chaos should the transition period end without a trade pact.
Leaders will meet Thursday to work out a compromise with the two holdouts, who have vetoed the bloc’s budget and stimulus plans over threats that they will lose access to funds.
With negotiators at impasse, the prime minister hopes he and European leaders can hammer out a trade deal to replace the one that expires on Dec. 31.
Boris Johnson and the European Union’s president are preparing their domestic audiences for either a landmark accord requiring compromise or a breakdown that will disrupt cross-channel trade.
The two illiberal governments, having been enabled by the bloc’s leaders and evaded punishment, now hold a 1.8 trillion euro package hostage.
Margrethe Vestager, a European Commission vice president, said Amazon was unfairly using data to box out smaller competitors.
On Day 1 of new restrictions, the Bank of England and British Treasury announced expanded efforts to support the nation’s finances.
TikTok said on Wednesday it’s strengthening its enforcement actions against hate speech and hateful ideologies to include “neighboring ideologies,” like white nationalism and others, as well as statements that emerge from those ideologies.
In a blog post, TikTok explained that it regularly evaluates its enforcement processes with the help of global experts to determine when it needed to take action against emerging risks.
While the TikTok Trust & Safety teams were already working to remove neo-Nazism and white supremacy from its platform under existing policies, its more recently expanded enforcement will also cover related ideologies, including white nationalism, white genocide theory, as well as “statements that have their origin in these ideologies, and movements such as Identitarianism and male supremacy,” TikTok said.
The announcement was made on TikTok’s European newsroom, and follows TikTok’s recent joining of the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online. However, the guidelines TikTok discussed apply to its global audience.
TikTok had made similar statements on its U.S. newsroom in August, including its plans to take action against other hateful ideologies, including white nationalism and male supremacy, in addition to white supremacy and anti-semitism. A TikTok spokesperson told TechCrunch the new announcement was meant to offer “further details” on that policy.
The company’s new blog post noted how many monitoring organizations have been reporting that anti-semitic sentiment is increasing around the world.
TikTok itself had been recently accused of having a “white supremacy” problem, according to a report from the Anti-Defamation League, which led to the U.S. newsroom announcement earlier this year. The ADL had uncovered dozens of accounts that were using combinations of white supremacist symbols, terms and slogans as screen names or handles, its report said.
It also said it secured a commitment from TikTok to work together to remove such content going forward. At the time of the report, TikTok had claimed to have already removed 1,000 accounts during the year for violating hate speech policies, and said it had taken down hundreds of thousands of videos under those same guidelines. In the U.S. newsroom post, TikTok updated its numbers, saying it had banned more than 1,300 accounts for hateful content or behavior, removed more than 380,000 videos for violation of its hate speech policy, and removed over 64,000 hateful comments.
TikTok offered no update on those figures, or EU-specific data, in today’s post.
The post went on to detail other existing policies in this area. For example, TikTok says it doesn’t permit any content that denies the Holocaust and other violent tragedies — a policy Facebook only recently adopted after years of choosing to favor free speech. TikTok also says it takes action to remove misinformation and hurtful stereotypes about Jewish, Muslim and other communities — including those that spread misinformation about “notable Jewish individuals and families” that are used as proxies to spread antisemitism.
TikTok additionally noted it removes content harmful to the LGBTQ+ community, including hateful ideas such as content that promotes conversion therapy and the idea that no one is born LGBTQ+.
The company spoke about another area of policy it’s worked to improve, too. Today, TikTok is working to train Trust & Safety enforcement team members as to when it’s appropriate to remove certain language. In the case of language that was previously used to exclude and demean groups, it’s removed. But if those terms are now being reclaimed by impacted communities as terms of empowerment and counter-speech, the speech wouldn’t be taken down.
When content is taken down, TikTok users will be able to ask for a review of the action, TikTok also promised — a level of transparency that isn’t always seen today.
Much of what TikTok announced on Wednesday isn’t a new policy, necessarily, but is meant to address the EU audience specifically, where TikTok faces continual scrutiny over its data practices and other policies.
The public sector usually publishes its business opportunities in the form of ‘tenders,’ to increase transparency to the public. However, this data is scattered, and larger businesses have access to more information, giving them opportunities to grab contracts before official tenders are released. We have seen the controversy around UK government contracts going to a number of private consultants who have questionable prior experience in the issues they are winning contracts on.
Public-to-private sector business makes up 14% of global GDP, and even a 1% improvement could save taxpayers €20B per year, according to the European Commission.
Stotles is a new UK startup whose technology turns fragmented public sector data — such as spending, tenders, contracts, meeting minutes, or news releases — into a clearer view of the market, and extracts relevant early signals about potential opportunities.
It’s now raised a £1.4m seed round led by Speedinvest, with participation from 7Percent Ventures, FJLabs, and high-profile angels including Matt Robinson, co-founder of GoCardless and CEO at Nested; Carlos Gonzalez-Cadenas, COO at GoCardless; Charlie Songhurst, former Head of Corporate Strategy at Microsoft; Will Neale, founder of Grabyo; and Akhil Paul. It received a previous investment from Seedcamp last year.
Stotles’ founders say they had “scathing” experiences dealing with public procurement in their previous roles at organizations like Boston Consulting Group and the World Economic Forum.
The private beta has been open for nine months, and is used by companies including UiPath, Freshworks, Rackspace, and Couchbase. With this funding announcement, they’ll be opening up an early access program.
Competitors include: Global Data, Contracts Advance, BIP Solutions, Spend Network/Open Opps, Tussel, TenderLake. However, most of the players out there are focused on tracking cold tenders, or providing contracting data for periodic generic market research.
The prime minister blew up over what he called Europe’s insistence that Britain make all the compromises. But Europe says it still wants to talk.
The move came after Britain refused to withdraw legislation that it admits could break international law by potentially overriding commitments on Northern Ireland.
European regulators once again have the behavior of the biggest US tech companies—Amazon, Apple, Facebook, and Google among them—squarely in their sights as they move forward with a proposal to reform how digital marketplaces and data sharing operate.
An early draft of the Digital Services Act, under consideration by the European Parliament, would not only require tech firms to share data with smaller rivals but would also limit the ways companies can use customer data they’ve already collected, the Financial Times was first to report.
Under the proposal, tech firms with the potential to act as gatekeepers “shall not pre-install exclusively their own applications nor require from any third-party operating system developers or hardware manufacturers to pre-install exclusively gatekeepers’ own application,” according to Reuters. The draft also mandates that gatekeeper companies will not be permitted to use data collected on their platforms to target users unless that data is also shared with rival firms.
Last December (yes, in the before-times) UK-based mental health startup eQuoo had a round of announcements, becoming an NHS-approved mental health game, as well as signing Barmer, Germany’s largest insurance company, as a client.
It’s now been selected as the Mental Health App for Unilever’s new global initiative aimed at the mental health of young people. The move came after Unilever’s People Data Centre (PDC) selected eQuoo out of all the mental health games on the Google Play Store, as one of the few backed by scientific research. Unilever’s new brand campaign, which will feature the eQuoo app, will be marketed to over 70,000 18- to 35-year-olds.
“eQuoo teaches important skills in a fun and engaging way,” said Unilever’s Global PDC Search and Social Analyst, Janelle Tomayo. “The game teaches you how to become a better communicator using fictional characters to navigate through difficult circumstances with skills and storylines empirically based on current psychological research.”
Silja Litvin, founder and CEO of eQuoo said: “1 in 3 young adults experience an anxiety disorder, crippling and harming too many people at the cusp of their adult lives. Together eQuoo and Unilever will equip thousands of people with the personal resilience to manage the pressures of today’s world.”
PsycApps, which makes eQuoo, is a digital mental health startup using gamification, cognitive behavioral therapy (CBT), positive psychology and AI to treat mental illness, via evidence-based features. It’s achieved a top rating at ORCHA, the leading health app assessment platform, and is also available through the GP EMIS data bank, meaning NHS doctors can now refer their patients to eQuoo to improve their mental health and wellbeing.
The market for mental health-oriented games and apps is growing considerably. Akili’s EndeavorRx, the first ADHD video game for children, attained FDA approval in June; the digital therapy uses a video game to treat the underlying cause of attention deficit hyperactivity disorder (ADHD). The European Commission has also granted EndeavorRx a CE mark, allowing the product to be marketed in Europe.
The bloc wants to persuade its most anti-immigrant member countries to agree to a common policy. But the future of its new plan, like many of its details, remains uncertain.
European antitrust regulators now have until almost the end of the year to take a decision on whether to green light Google’s planned acquisition of Fitbit.
The tech giant announced its intention to buy the fitness tracking wearable maker in November 2019, saying it would shell out $2.1 billion in cash to make off with Fitbit and the health data it holds on some 28M+ users.
EU regulators were quick to sound the alarm about letting the tech giant go shopping for such a major cache of sensitive personal data, with the European Data Protection Board warning in February that the proposed purchase posed a huge risk to privacy.
There is also a parallel concern that Fitbit’s fitness data could further consolidate Google’s regional dominance in the ad market. And last month EU competition regulators announced a full antitrust probe — saying then they would take a decision within 90 working days. That deadline has now been extended by a further two weeks.
A Commission spokeswoman confirmed the earlier provisional December 9 deadline has been pushed back “in agreement with the parties”, citing Article 10(3) of the EU’s Merger Regulation.
“The provisional legal deadline for a final decision in this case is now December 23, 2020,” she added.
The Commission has not offered any detail on the reason for allocating more time to take a decision.
When EU regulators announced the in-depth probe, the Commission said it was concerned data gathered by Fitbit could lead to a distortion of competition if Google was allowed to assimilate the wearable maker and “further entrench” its dominance in online ad markets.
Other concerns include the impact on the nascent digital healthcare sector, and whether Google might be incentivised to degrade the interoperability of rival wearables with its Android OS once it has its own hardware skin in the game.
The tech giant, meanwhile, has offered assurances around the deal in an attempt to get it cleared — claiming ahead of the Commission’s probe announcement it would not use Fitbit health data for ad targeting, and suggesting that it would create a ‘data silo’ for Fitbit data to keep it separate from other data holdings.
However regulators have expressed scepticism — with the Commission writing last month that the “data silo commitment proposed by Google is insufficient to clearly dismiss the serious doubts identified at this stage as to the effects of the transaction”.
It remains to be seen what the bloc’s competition regulators conclude after taking a longer and harder look at the deal — and it’s worth noting they are simultaneously consulting on whether to give themselves new powers to be able to intervene faster to regulate digital markets — but Google’s hopes of friction-free regulatory clearance and being able to hit the ground running in 2020 with Fitbit’s data in its pocket have certainly not come to pass.
Beijing’s hopes of using Europe as a counterweight to the United States have faltered as country after country confronts China over trade, Hong Kong, human rights and other issues.
The European Commission has begun testing backend infrastructure that’s needed to make national coronavirus contacts tracing apps interoperate across the bloc’s internal borders.
It’s kicked off test runs between the backend servers of the official apps from the Czech Republic, Denmark, Germany, Ireland, Italy and Latvia, and the newly established gateway server — which is being developed and set up by T-Systems and SAP, and will be operated from the Commission’s data centre in Luxembourg, it said today.
The service is due to become operational in October, meaning EU Member States with compatible apps will be able to extend digital contacts tracing for app users travelling within the group of listed countries.
Interoperability guidelines were agreed for national coronavirus contacts tracing apps back in May.
The Commission says the gateway service will only exchange a minimum of data — namely the arbitrary identifiers generated by the tracing apps.
“The information exchanged is pseudonymised, encrypted, kept to the minimum, and only stored as long as necessary to trace back infections. It does not allow the identification of individual persons,” it adds.
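As an illustration of what “arbitrary identifiers” means in practice, here is a deliberately simplified Python sketch of how decentralized tracing apps derive opaque, rotating broadcast identifiers from a random per-day key. This is not the actual Exposure Notification specification, just the general shape of the idea; key sizes and interval counts are illustrative:

```python
import hashlib
import hmac
import os

def daily_tracing_key():
    # A fresh random key per day; in a decentralized design it stays
    # on the device unless its owner tests positive.
    return os.urandom(16)

def rolling_identifiers(day_key, intervals=144):
    # Derive short-lived, opaque broadcast identifiers from the day key.
    # Only values like these are ever exchanged via the gateway --
    # no names, locations or device identifiers.
    return [
        hmac.new(day_key, f"interval-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(intervals)
    ]

ids = rolling_identifiers(daily_tracing_key())
```

Because the identifiers are derived from a random key, nothing in them links back to an individual, which is what lets the gateway claim it holds only pseudonymised, minimal data.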
Only decentralized national coronavirus contacts tracing apps are compatible with the gateway service at this stage. And while the Commission says it is continuing to support work being undertaken within some Member States to find ways to extend interoperability to tracing apps with different architectures, it’s not clear how viable that will be without risks to privacy.
The main advantage of the interoperability plan for national coronavirus contacts tracing apps is to avoid the need for EU citizens to install multiple tracing apps — provided they’re traveling to another country in the region that has a national app with compatible architecture.
However, in addition to varying choices of app architecture, some EU Member States don’t even have a national app yet. So it’s clear there will continue to be gaps in cross-border coverage for the foreseeable future, which increases the challenge of breaking travel-related coronavirus transmission chains.
Valdis Dombrovskis, a former prime minister of Latvia, was picked to take over at a particularly tense time. The previous office holder quit after claims he had flouted coronavirus rules.
Google has made its pitch to shape the next decades of digital regulation across the European Union, submitting a 135-page response yesterday to the consultation on the forthcoming Digital Services Act (DSA) — which will update the bloc’s long-standing rules around ecommerce.
The package also looks set to introduce specific rules for so-called “gatekeeper platforms” which wield outsized market power thanks to digital network effects. Hence Mountain View’s dialled-up attention to detail.
The lion’s share of Google’s submission focuses on lobbying against the prospect of ex ante regulation for such platform giants — something the European Commission has nonetheless signalled is front of mind as it looks at how to rein in platform power.
This type of regulatory intervention aims to identify competitive problems and shape responses ‘before the event’, via obligations applied to players who hold significant market power, versus after-the-fact competition enforcement once market harm has been established.
“A blanket approach to ex ante competition regulation could have unintended consequences on user experience as well as multiplying costs for European businesses,” it writes, urging lawmakers to take a long, hard look at existing regulation to see whether it can already do the job of ensuring markets are “working properly”.
“Where the evidence shows meaningful gaps, the next step ought to be to consider how one can modernise those existing rules and procedures to address the underlying concerns before turning to consideration of new and distinct regulatory frameworks,” it adds.
If EU lawmakers must go ahead with ex ante regulation of platform giants, Google — an adtech giant — is especially keen that they do not single out any specific business models. So it definitely wouldn’t be a fan of ex ante regs applied only to surveillance-fuelled ad-targeting platforms. Funny that.
“The criteria for identifying ‘gatekeeper power’ should be independent of the particular business model that a platform uses, making no distinction as between platforms that operate business models based on advertising, subscriptions, sales commissions, or sales of hardware,” Google writes.
“Digital platforms often operate using different business and monetization strategies, across multiple markets, geographies, and sectors, with varying degrees of competitive strength in each. Regulators should not favor or discriminate against any business, business model, or technology from the outset,” it goes on.
“In certain sectors, the platform may have market power; in others, it may be a new entrant or marginal player. The digital ecosystem is extremely diverse and evolving rapidly and it would be misguided for gatekeeper designations to be evaluated by reference to the position of an entire company or corporate group.”
Nor should lawmakers opt for what Google dubs “an overly simplistic” assessment of what constitutes a gatekeeper — giving the example of number of users as an inadequate way to determine whether a platform giant has significant market power in a given moment. (Relevant: Google’s market share of search in Europe exceeds 90%.)
“Recent competition enforcement demonstrates the range of platforms that have been found to have market power (e.g., Microsoft, Google, Facebook, Amazon, and Apple) and other platforms may be found to have market power in the future (borne out, for example, by the UK CMA’s investigation into online auction platform services),” it writes. “The gatekeeper assessment should therefore recognize that a range of platforms — operating a range of different business models (e.g., ad-funded, subscription-based, commission-based, hardware sales) — may hold ‘market power’ in different circumstances and vis-à-vis different platform participants.”
The tech giant can also be seen pushing a familiar talking point when its business is accused of profiting, parasitically, off of others’ content — by suggesting that when regulators are assessing whether a platform is a gatekeeper or not by considering the economic dependence of traditional businesses on a limited number of online platforms they should look favorably on those platforms “through which a materially significant proportion of business (e.g. in the form of highly valuable traffic) is channeled”.
But of course it would say that clicks are just as good as all the ad dollars it’s making.
Google is also pushing for regular review of any gatekeeper designations to ensure any obligations keep pace with fast-moving markets and competition shifts (it points to the recent rise of TikTok by way of example).
It also doesn’t want gatekeeper designations to apply universally across all markets — arguing instead they should only apply in the specific market where a platform is “found to have ‘gatekeeper’ power”.
“Large digital platforms tend to operate across multiple markets and sectors, with varying degrees of competitive strength in each,” Google argues, adding that: “Applying ex ante rules outside these markets would create a risk of deterring pro-competitive market entry through excessive regulation, thereby depriving SMEs and consumers of attractive new products.”
That would stand in contrast to the EU’s modus operandi around competition law enforcement, where a business that’s been judged dominant in one market (as Google is in search) has what competition chief Margrethe Vestager likes to refer to as a “special responsibility” not to abuse that market power to gain advantage in other markets, not only the one where it has been found dominant.
At the same time as Google is lobbying for limits on any gatekeeper designations, the tech giant wants to see certain types of rules applied universally to all players. Here it gives the examples of privacy, transparency (such as for fees) and ranking decisions.
Data portability is another area where it’s urging rules to be applied industry-wide.
It also wants to see any online ad rules applied universally, not just to gatekeeper platforms. But it’s also very keen for hard limits on any such rules.
“It will be important that any interventions seeking to achieve more transparency and accountability are carefully designed to avoid inadvertently hampering the ability of online advertising tools to deliver the value that publishers and advertisers have come to expect,” the adtech giant writes, lobbying to reduce the amount of transparency and accountability set down in law by invoking claims of privacy risks to user data; threats to commercial IP; and ‘bad actors’ gaming the system if it’s not allowed to continue being (an ad-fraud-tastic) blackbox.
“Consideration of these measures will therefore require the balancing of factors including protection of users’ personal data and partners’ commercially sensitive information, and potential harm to users and competition through disclosure of data signals that allow ‘bad actors’ to game the system, or rivals to copy innovations. We stand ready to engage with the Commission on these issues,” Google intones.
On updating ecommerce rules and liability — which is a stated aim of the DSA plan — Google is cautiously supportive of regulatory changes to reflect what it describes as “the digital transformation of the last two decades”. While pushing to retain core elements of the current e-Commerce Directive regime, including the country-of-origin principle and freedom to provide cross-border digital services.
For example, it wants to see more expansive definitions of digital services, to allow for more specific rules for certain types of businesses. It pushes for a move away from the ‘active’ vs ‘passive’ hosts distinction for platforms, so they can respond more proactively in a content moderation context without inviting liability by doing so, while suggesting hosting services may be better served by retaining the current regime (Article 14 of the e-Commerce Directive).
On liability for illegal content it is lobbying for clear lines between illegal material and what’s “lawful-but-harmful”.
“Where Member States believe a category of content is sufficiently harmful, their governments may make that content illegal directly, through democratic processes, in a clear and proportionate manner, rather than through back-door regulation of amorphously-defined harms,” it writes.
It also wants the updated law to retain the general prohibition on content monitoring obligations — and downplays the potential of AI to offer any ‘third way’ there.
“While breakthroughs in machine learning and other technology are impressive, the technology is far from perfect, and less accurate on more nuanced or context-dependent content. Their mandated use would be inappropriate, and could lead to restrictions on lawful content and on citizens’ fundamental rights,” Google warns. “The DSA can help prevent risks to fundamental rights by ensuring that companies are not forced to prioritise speed of removal over careful decision-making,” it adds, saying it encounters “many grey-area cases that require appropriate time to evaluate the law and context”.
“We remain concerned about recent laws that enable imposition of large penalties if short, fixed turn-around times are not met,” it goes on, pointing to a recent ruling by the French Constitutional Council which struck down an online hate speech law on freedom of expression grounds.
“Any new standard should safeguard fundamental rights by ensuring an appropriate balance between speed and accuracy of removal,” Google adds.
You can read its full submission — including answers to the Commission’s questionnaire — here.
The Commission’s DSA consultation closes on September 8. EU lawmakers have previously said they will come forward with a draft proposal for the new rules by the end of the year.
The small satellite launch industry is heating up, with a number of small launch providers currently vying to become the next to field an active orbital launch vehicle. Existing large launch vehicle operator Arianespace is joining the fray, however, and has performed a first demonstration launch to show how its rideshare offering will work for small satellite companies. This also marks Arianespace’s first launch in over a year, after a number of launches planned for earlier in 2020 were scrubbed or delayed due to COVID-19 and the mitigation measures put in place in French Guiana, where it has its launch facility.
Arianespace launched its Vega light payload rocket from the Guiana Space Center at 9:51 PM ET (6:51 PM PT) on Wednesday evening, carrying a total of 53 satellites to various target destinations in low Earth orbit. This was a proof-of-concept mission, funded in part by the European Space Agency and the European Commission, but it did carry actual satellites on behalf of commercial customers, including 26 for space-based remote sensing company Planet. IoT connectivity startup Swarm had 12 of its tiny satellites on board, and communications satellite startup Kepler sent up its third satellite. Two other startups, Satellogic, which does remote sensing, and GHGSat, which does methane emission tracking, also had satellites among the large shared payload.
This mission was intended to show that Arianespace’s Vega vehicle is able to serve the needs of small satellite rideshare customers. The rideshare model is a popular one for small satellite operators, since it helps spread the cost of a launch across multiple customers. Small satellites are extremely lightweight relative to the large, geosynchronous satellites that many of these launch vehicles were designed to carry on behalf of government and defense customers, and their operators typically don’t have the budget to book a full-scale launch on their own.
SpaceX introduced a self-booked rideshare model last year for small satellite companies, and Rocket Lab offers a service dedicated to the same market, with smaller launch vehicles that greatly reduce launch costs and that can carry small satellites more directly to their target destination. The market seems ready to support more launch providers, however, and for Arianespace, it’s a way to diversify its offering and bring in new revenue while serving this growing demand.
Phil Hogan had one of the most powerful jobs in Brussels, and leaves at a pivotal time in trade relations. The dinner he attended with politicians in Ireland has become a political scandal.
Two senior politicians have resigned, and the European Union’s trade commissioner is under pressure to quit after flouting coronavirus restrictions at a large private dinner.
Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.
In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.
It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.
Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.
It suggests expanding portability in such ways could help content creators build their brands on other platforms or help event organizers by enabling them to track Facebook events using calendar based tools.
However there are no firm commitments from Facebook to any specific portability product launches or expansions of what it offers currently.
“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.
On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”
Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.
But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.
The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.
The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.
In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.
Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — called the Access Act, which is sponsored by Democratic Senators Richard Blumenthal and Mark Warner, and Republican Senator Josh Hawley — which would require large tech platforms to let their users easily move their data to other services.
Madhani dubbed it only a good first step, however, adding that the company will continue to engage with the lawmakers on shaping its contents.
“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.
Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.
Facebook founder Mark Zuckerberg pushed the European Commission to narrow the types of data that should fall under portability rules. In a public discussion with commissioner Thierry Breton in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.
Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.
Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.
“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.
Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.
On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.
“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.
Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”
In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.
Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.
“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.
However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.
Twitter users will have to wait longer to find out what penalties, if any, the platform faces under the European Union’s General Data Protection Regulation (GDPR) for a data breach that dates back around two years.
The tech firm’s lead regulator in the region, Ireland’s Data Protection Commission (DPC), began investigating an earlier Twitter breach in November 2018 — completing the probe earlier this year and submitting a draft decision to other EU DPAs for review in May, just ahead of the second anniversary of the GDPR’s application.
In a statement on the development, Graham Doyle, the DPC’s deputy commissioner, told TechCrunch: “The Irish Data Protection Commission (DPC) issued a draft decision to other Concerned Supervisory Authorities (CSAs) on 22 May 2020, in relation to this inquiry into Twitter. A number of objections were raised by CSAs and the DPC engaged in a consultation process with them. However, following consultation a number of objections were maintained and the DPC has now referred the matter to the European Data Protection Board (EDPB) under Article 65 of the GDPR.”
Under the regulation’s one-stop-shop mechanism, cross-border cases are handled by a lead regulator — typically where the business has established its regional base. For many tech companies that means Ireland, so the DPC has an outsized role in the regulation of Silicon Valley’s handling of people’s data.
This means it now has a huge backlog of highly anticipated complaints relating to tech giants including Apple, Facebook, Google, LinkedIn and indeed Twitter. The regulator also continues to face criticism for not yet ‘getting it over the line’ in any of these complaints and investigations pertaining to big tech. So the Twitter breach case is being especially closely watched as it looks set to be the Irish DPC’s first enforcement decision in a cross-border GDPR case.
Last year commissioner Helen Dixon said the first of these decisions would be coming “early” in 2020. In the event, we’re past the halfway mark of the year with still no enforcement to show for it. Though the DPC emphasizes the need to follow due process to ensure final decisions stand up to any challenge.
The latest delay in the Twitter case is a consequence of disagreements between the DPC and other regional watchdogs which, under the rules of GDPR, have a right to raise objections on a draft decision where users in their countries are also affected.
It’s not clear what specific objections have been raised to the DPC’s draft Twitter decision, or indeed what Ireland’s regulator has decided in what should be a relatively straightforward case, given it’s a breach — not a complaint about a core element of a data-mining business model.
Far more complex complaints are still sitting on the DPC’s desk. Doyle confirmed that a complaint pertaining to WhatsApp’s legal basis for sharing user data with Facebook remains the next most progressed in the stack, for example.
So, given the DPC’s Twitter breach draft decision hasn’t been universally accepted by Europe’s data watchdogs, it’s all but inevitable the Facebook-WhatsApp case will go through the same objections process. Ergo, expect more delays.
Article 65 of the GDPR sets out a process for handling objections on draft decisions. It allows for one month for DPAs to reach a two-thirds majority, with the possibility for a further extension of another month — which would push a decision on the Twitter case into late October.
If there still aren’t enough votes in favor at that point, a further two weeks are allowed for EDPB members to reach a simple majority. If DPAs remain split, the Board chair, currently Andrea Jelinek, has the deciding vote. So the body’s role in major decisions over big tech looks set to be key.
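The Article 65 clock described above can be sketched with simple date arithmetic. The referral date below is a hypothetical placeholder (the DPC has not published the exact day), and calendar months are approximated as 30 days:

```python
from datetime import date, timedelta

# Hypothetical referral date -- the DPC has not published the exact day.
referral = date(2020, 8, 20)

# Article 65 GDPR: one month for the EDPB to reach a two-thirds majority
# (approximated here as 30 days)...
two_thirds_deadline = referral + timedelta(days=30)

# ...extendable by a further month...
extended_deadline = two_thirds_deadline + timedelta(days=30)

# ...then a further two weeks for a simple majority vote.
simple_majority_deadline = extended_deadline + timedelta(days=14)
```

On these assumptions the extended deadline lands in late October and a fallback simple-majority vote in early November, consistent with the timeline the DPC’s referral implies.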
We’ve reached out to the EDPB with questions related to the Twitter objections and will update this report with any response.
The Article 65 process exists to try to find consensus across a patchwork of national and regional data supervisors. But it won’t silence critics who argue the GDPR is not able to be applied fast enough to uphold EU citizens’ rights in the face of fast-iterating data-mining giants.
To wit: Given the latest developments, a final decision on the Twitter breach could be delayed until November — a full two years after the investigation began.
Earlier this summer a two-year review of GDPR by the European Commission, meanwhile, highlighted a lack of uniformly vigorous enforcement. Though commissioners signalled a willingness to wait and see how the one-stop-shop mechanism runs its course on cross-border cases, while admitting there’s a need to reinforce cooperation and co-ordination on cross border issues.
“We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” justice commissioner, Didier Reynders, said in June.
“The best answer will be a decision from the Irish data protection authority about important cases,” he added then.
Without European support, it is not clear how the United States alone would enforce U.N. sanctions to punish Iran for violating the 2015 nuclear deal that world powers are trying to save.
Europe’s leaders are treading carefully to avoid providing a pretext for further state violence or for a Russian intervention.
A month after Europe’s top court struck down a flagship data transfer arrangement between the EU and the US as unsafe, European privacy campaign group, noyb, has filed complaints against 101 websites with regional operators which it’s identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations.
Among the entities listed in its complaint are ecommerce companies, publishers & broadcasters, telcos & ISPs, banks and universities — including Airbnb Ireland, Allied Irish Banks, Danske Bank, Fastweb, MTV Internet, Sky Deutschland, Takeaway.com and Tele2, to name a few.
“A quick analysis of the HTML source code of major EU webpages shows that many companies still use Google Analytics or Facebook Connect one month after a major judgment by the Court of Justice of the European Union (CJEU) — despite both companies clearly falling under US surveillance laws, such as FISA 702,” the campaign group writes on its website.
“Neither Facebook nor Google seem to have a legal basis for the data transfers. Google still claims to rely on the ‘Privacy Shield’ a month after it was invalidated, while Facebook continues to use the ‘SCCs’ [Standard Contractual Clauses], despite the Court finding that US surveillance laws violate the essence of EU fundamental rights.”
We’ve reached out to Facebook and Google with questions about their legal bases for such transfers — and will update this report with any response.
Privacy watchers will know that noyb’s founder, Max Schrems, was responsible for the original legal challenge that took down an earlier EU-US data arrangement, Safe Harbor, all the way back in 2015. His updated complaint ended up taking down the EU-US Privacy Shield last month — although he’d actually targeted Facebook’s use of a separate data transfer mechanism (SCCs), urging its data supervisor, Ireland’s DPC, to step in and suspend its use of that tool.
The regulator chose to go to court instead, raising wider concerns about the legality of EU-US data transfer arrangements — which resulted in the CJEU concluding that the Commission should not have granted the US a so-called ‘adequacy agreement’, thus pulling the rug out from under Privacy Shield.
The decision means the US is now what’s considered a ‘third country’ in data protection terms, with no special arrangement to enable it to process EU users’ information.
More than that, the court’s ruling also made it clear EU data watchdogs have a responsibility to intervene where they suspect there are risks to EU people’s data if it’s being transferred to a third country via SCCs.
noyb’s contention with this latest clutch of complaints is that none of the aforementioned 101 websites has a valid legal basis to keep transferring visitor data to the US via the embedded Google Analytics and/or Facebook Connect integrations.
“We have done a quick search on major websites in each EU member state for code from Facebook and Google. These code snippets forward data on each visitor to Google or Facebook. Both companies admit that they transfer data of Europeans to the US for processing, where these companies are under a legal obligation to make such data available to US agencies like the NSA. Neither Google Analytics nor Facebook Connect are essential to run these webpages and are services that could have been replaced or at least deactivated by now,” said Schrems, honorary chair of noyb.eu, in a statement.
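The kind of source-code check Schrems describes is simple to reproduce. Here’s a minimal sketch in Python — the marker strings and the `find_us_transfers` helper are illustrative assumptions for this article, not noyb’s actual methodology or its list of detection signatures:

```python
# Minimal sketch: scan a page's HTML source for script references that load
# Google Analytics or Facebook Connect. The marker strings below are
# illustrative, not noyb's actual detection list.
GOOGLE_ANALYTICS_MARKERS = (
    "www.google-analytics.com",
    "www.googletagmanager.com",
)
FACEBOOK_CONNECT_MARKERS = (
    "connect.facebook.net",
)


def find_us_transfers(html: str) -> list:
    """Return which US-bound integrations appear in the page source."""
    found = []
    if any(marker in html for marker in GOOGLE_ANALYTICS_MARKERS):
        found.append("Google Analytics")
    if any(marker in html for marker in FACEBOOK_CONNECT_MARKERS):
        found.append("Facebook Connect")
    return found


page = '<script src="https://connect.facebook.net/en_US/sdk.js"></script>'
print(find_us_transfers(page))  # ['Facebook Connect']
```

A real crawl would fetch each homepage and run a check like this over the response body — which is all the campaign group says it needed to flag the 101 sites.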
Since the CJEU’s Schrems II ruling, and indeed since the Safe Harbor strike-down, the US Department of Commerce and the European Commission have signalled they intend to try cobbling together another data pact to replace the defunct Privacy Shield (which itself replaced the blasted-to-smithereens (un)Safe Harbor. So, er…).
Yet without root-and-branch reform of US surveillance law, any third pop by respective lawmakers at papering over the legal schism of US national security priorities vs EU privacy rights is just as surely doomed to fail.
The more cynical among you might say the high-level administrative manoeuvres around this topic are, in fact, simply intended to buy more time — for the data to keep flowing and ‘business as usual’ to continue.
But there is now substantial legal risk attached to a strategy of trying to pretend US surveillance law doesn’t exist.
Here’s Schrems again, on last month’s CJEU ruling, suggesting that Facebook and Google could be in the frame for legal liability if they don’t proactively warn EU customers of their data responsibilities: “The Court was explicit that you cannot use the SCCs when the recipient in the US falls under these mass surveillance laws. It seems US companies are still trying to convince their EU customers of the opposite. This is more than shady. Under the SCCs the US data importer would instead have to inform the EU data sender of these laws and warn them. If this is not done, then these US companies are actually liable for any financial damage caused.”
And as noyb’s press release notes, GDPR’s penalties regime can scale as high as 4% of the worldwide turnover of the EU sender and the US recipient of personal data. So, again, hi Facebook, hi Google…
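For scale, the GDPR’s top penalty tier (Article 83(5)) is the greater of €20M or 4% of worldwide annual turnover — so for large companies the percentage dominates. A quick illustration (the turnover figures are hypothetical, chosen only to show the two regimes):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """GDPR Art. 83(5) ceiling: the greater of EUR 20M or 4% of
    worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)


# Hypothetical turnover figures, for illustration only.
print(gdpr_max_fine(100_000_000))     # small firm: the EUR 20M floor applies -> 20000000.0
print(gdpr_max_fine(50_000_000_000))  # large firm: 4% of turnover -> 2000000000.0
```

In other words, for a company turning over tens of billions, the theoretical ceiling runs to billions of euros — which is why noyb keeps waving the figure at Facebook and Google.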
The crowdfunded campaign group has pledged to continue dialling up the pressure on EU regulators to act and on EU data processors to review any US data transfer arrangements — and “adapt to the clear ruling by the EU’s supreme court”, as it puts it.
Other types of legal action are also starting to draw on Europe’s General Data Protection Regulation (GDPR) framework — and, importantly, attract funding — such as two class action style suits filed against Oracle and Salesforce’s use of tracking cookies earlier this month. (As we said when GDPR came into force back in 2018, the lawsuits are coming.)
Now, with two clear strikes from the CJEU on the issue of US surveillance law vs EU data protection, it looks like it’ll be diminishing returns for US tech giants hoping to pretend everything’s okay on the data processing front.
noyb is also putting its money where its mouth is — offering free guidelines and model requests for EU entities to use to help them get their data affairs in prompt legal order.
“While we understand that some things may need some time to rearrange, it is unacceptable that some players seem to simply ignore Europe’s top court,” Schrems added, in further comments on the latest flotilla of complaints. “This is also unfair towards competitors that comply with these rules. We will gradually take steps against controllers and processors that violate the GDPR and against authorities that do not enforce the Court’s ruling, like the Irish DPC that stays dormant.”
We’ve reached out to Ireland’s Data Protection Commission to ask what steps it will be taking in light of the latest noyb complaints, a number of which target websites that appear to be operated by an Ireland-based legal entity.
Schrems’ original 2013 complaint against Facebook’s use of SCCs also ended up in Ireland, where the tech giant — and many others — locates its EU HQ. Schrems’ request that the DPC order Facebook to suspend its use of SCCs still hasn’t been fulfilled, some seven years and five complaints later. And the regulator continues to face accusations of inaction, given the growing backlog of cross-border GDPR complaints against tech giants like Facebook and Google.
Ireland’s DPC has yet to issue a single final decision on any of these major GDPR complaints. But the legal pressure on it, and on all EU regulators, to get a move on and enforce the bloc’s law will only increase — even as class-action-style lawsuits are filed to try to do what regulators have failed to do.
Earlier this summer the Commission acknowledged a lack of uniformly “vigorous” enforcement of GDPR in a review of the mechanism’s first two years of operation.
“The European Data Protection Board [EDPB] and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement,” said Věra Jourová, Commission VP for values and transparency then, giving the Commission’s first public assessment of whether GDPR is working.
We’ve also reached out to France’s CNIL to ask what action it will be taking in light of the noyb complaints.
Following the judgement in July the French regulator said it was “conducting a precise analysis”, along with the EDPB, with a view to “drawing conclusions as soon as possible on the consequences of the ruling for data transfers from the European Union to the United States”.
Since then the EDPB’s guidance has come out — inking the obvious: that transfers on the basis of Privacy Shield “are illegal”. And while the CJEU ruling did not invalidate the use of SCCs, it gave only a very qualified green light to their continued use.
As we reported last month, the ability to use SCCs to transfer data to the U.S. hinges on a data controller being able to offer a legal guarantee that “U.S. law does not impinge on the adequate level of protection” for the transferred data.
“Whether or not you can transfer personal data on the basis of SCCs will depend on the result of your assessment, taking into account the circumstances of the transfers, and supplementary measures you could put in place,” the EDPB added.
Mobile device maker HMD Global has announced a $230M Series A2 — its first tranche of external funding since a $100M round back in 2018 when it tipped over into a unicorn valuation. Since late 2016 the startup has exclusively licensed Nokia’s brand for mobile devices, going on to ship some 240M devices to date.
Its latest cash injection is notable both for its size (HMD claims it as the third largest funding round in Europe this year); and the profile of the strategic investors ploughing in capital — namely: Google, Nokia and Qualcomm.
Though whether a tech giant (Google) whose OS dominates the world’s smartphone market (Android) becoming a strategic investor in Europe’s last significant mobile OEM (HMD) catches the attention of regional competition enforcers remains to be seen. Er, vertical integration anyone? (To wit: It’s a little over two years since Google was slapped with a $5BN penalty by EU regulators for antitrust violations related to how it operates Android — and the Commission has said it continues to monitor the market ‘remedies’.)
In a further quirk, when we spoke to HMD Global CEO, Florian Seiche, ahead of today’s announcement, he didn’t expect the names of the investors to be disclosed — but we’d already been sent press release material listing them so he duly confirmed the trio are investors in the round. (But wouldn’t be drawn on how much equity Google is grabbing.)
HMD’s smartphones run on Google’s Android platform, which gives the tech giant a firm business reason for supporting the mobile maker in growing the availability of Google-packed hardware in key growth markets around the world.
And while HMD likens its consistent (and consistently updated) flavor of Android to the premium ‘pure’ Android experience you get from Google’s own-brand Pixel smartphones, the difference is the Finnish company offers devices across the range of price points, and targets hardware at mobile users in developing markets.
The upshot is relatively little overlap with Google’s Pixel hardware, and still plenty of business upside for Google should HMD grow the pipeline of Google services users (as it makes money by targeting ads).
Connoisseurs of mobile history may see more than a little irony in Google investing into Nokia branded smartphones (via HMD), given Android’s role in fatally disrupting Nokia’s lucrative smartphone business — knocking the Finnish giant off its perch as the world’s number one mobile maker and ushering in an era of Android-fuelled Asian mobile giants. But wait long enough in tech and what goes around oftentimes comes back around.
“We’re extremely excited,” said Seiche, when we mention Google’s pivotal role in Nokia’s historical downfall in smartphones. “How we are going to write that next chapter on smartphones is a critical strategic pillar for the company and our opportunity to team up so closely with Google around this has been a very, very great partnership from the beginning. And then this investment definitely confirms that — also for the future.”
“It’s a critical time for the industry therefore having a clear strategy — having a clear differentiation and a different point of view to offer, we believe, is a fantastic asset that we have developed for ourselves. And now is a great moment for us to double down on this,” he added.
We also asked Seiche whether HMD has any interest in taking advantage of the European Commission’s Android antitrust enforcement decision — i.e. to fork Android and remove the usual Google services, perhaps swapping them out for some European alternatives, which is at least a possibility for OEMs selling in the region — but Seiche told us: “We have looked at it but we strongly believe that consumers or enterprise customers actually love [Google] services and therefore they choose those services for themselves.” (Millions of dollars of direct investment from Google also, presumably, helps make the Google services business case stack up.)
Nokia, meanwhile, has always had a close relationship with HMD — which was established by former Nokia execs for the sole purpose of licensing its iconic mobile brand. (The backstory there is a clause in the sale terms of Nokia’s mobile device division to Microsoft expired in 2016, paving the way for Nokia’s brand to be returned to the smartphone market without the prior Windows Mobile baggage.)
Its investment into HMD now looks like a vote of confidence in how the company has been executing in the fiercely competitive mobile space to date (HMD doesn’t break out a lot of detail about device sales but Seiche told us it sold in excess of 70M mobiles last year; that’s a combined figure for smartphones and feature phones) — as well as an upbeat assessment of the scope of the growth opportunity ahead of it.
On the latter front, US-led geopolitical tensions between the West and China look poised to generate a tailwind for HMD’s business.
Mobile chipmaker Qualcomm, for example, is facing a loss of business as US government restrictions threaten its ability to continue selling chips to Huawei, a major Chinese device maker that’s become a key target for US president Trump. Qualcomm’s interest in supporting HMD’s growth therefore looks like a way to hedge against US government disruption aimed at the Chinese firms in its mobile device maker portfolio.
And with Trump’s recent threats against the TikTok app, it seems no tech company with a Chinese owner can count itself safe.
As a European company, HMD is able to position itself as a safe haven — and Seiche’s sales pitch talks up a focus on security detail and overall quality of experience as key differentiating factors vs the Android hordes.
“We have been very clear and very consistent right from the beginning to pick these core principles that are close to our heart and very closely linked with the Nokia brand itself — and definitely security, quality and trust are key elements,” he told TechCrunch. “This is resonating with our carrier and retail customers around the world and it is definitely also a core fundamental differentiator that those partners that are taking a longer term view clearly see that same opportunity that we see for us going forward.”
HMD does use manufacturing facilities in China, as well as in a number of other locations around the world — including Brazil, India, Indonesia and Vietnam.
But asked whether it sees any supply chain risks related to continued use of Chinese manufacturers to build ‘secure’ mobile hardware, Seiche responded by claiming: “The most important [factor] is we do control the software experience fully.” He pointed specifically to HMD’s acquisition of Valona Labs earlier this year. The Finnish security startup now carries out all of HMD’s software audits. “They basically control our software to make sure we can live up to that trusted standard,” Seiche added.
Landing a major tranche of new funding now — with geopolitical tension between the West and the Far East shining a spotlight on its value as an alternative, European mobile maker — HMD is eyeing expansion in growth markets such as Africa, Brazil and India. (Currently, HMD said it’s active in 91 markets across eight regions, with its devices ranged in 250,000 retail outlets around the world.)
It’s also looking to bring 5G to devices at a greater range of price points. HMD’s first 5G device, the flagship Nokia 8.3, is due to land in the US and Europe in a matter of weeks, and Seiche suggested a timeframe of the middle of next year for launching a 5G device at a mid-tier price point. He also said HMD wants to do more on the mobile services side.
“The 5G journey again has started, in terms of market adoption, in China. But now Europe, US are the key next opportunity — not just in the premium tier but also in the mid segment. And to get to that as fast as possible is one of our goals,” he said, noting joint-working with Qualcomm on that.
“We also see great opportunity with Nokia in that 5G transition — because they are also working on a lot of private LTE deployments which is also an interesting area since… we are also very strongly present in that large enterprise segment,” he added.
On mobile services, Seiche highlighted the launch of HMD Connect: A data SIM aimed at travellers — suggesting it could expand into additional connectivity offers in future, forging more partnerships with carriers.
“We have already launched several services that are close to the hardware business — like insurance for your smartphones — but we are also now looking at connectivity as a great area for us,” he said. “The first pilot of that has been our global roaming but we believe there is a play in the future for consumers or enterprise customers to get their connectivity directly with their device. And we’re partnering also with operators to make that happen.”
“You can see us more as a complement [to carriers],” he added, arguing that business “dynamics” for carriers have also changed substantially — and customer acquisition hasn’t been a linear game for some time.
“In a similar way when we talk about Google Pixel vs us — we have a different footprint. And again if you look at carriers where they get their subscribers from today is already today a mix between their own direct channels and their partner channels. And actually why wouldn’t a smartphone player be a natural good partner of choice also for them? So I think you’ll see that as a trend, potentially, evolving in the next couple of years.”