The TikTok-Oracle deal would set two dangerous precedents

(Image credit: Sam Whitney | Getty Images)

In August 2020, President Donald Trump dropped a bombshell executive order banning TikTok in the United States. Since then, as TikTok has competed against other Big Tech companies—growing among teen users while Facebook and others have struggled—its ability to survive in the United States has remained under a cloud of uncertainty. Would regulators step in and kill off a product that had become a staple form of communication for some 100 million Americans?

That cloud seemed to lift last week in the wake of reports that TikTok will enter into a data storage deal with Oracle. In the short term, the agreement would be good for US users, enabling TikTok to invest more of its resources and energy into improving its product, rather than wrestling with the government.

#china, #data-security, #nationalism, #oracle, #policy, #tik-tok, #trump

Ketch raises another $20M as demand grows for its privacy data control platform

Six months after securing a $23 million Series A round, Ketch, a startup providing an online privacy regulation and data compliance platform, brought in an additional $20 million in A1 funding, this time led by Acrew Capital.

Returning with Acrew for the second round are CRV, super{set} (the startup studio founded by Ketch’s co-founders CEO Tom Chavez and CTO Vivek Vaidya), Ridge Ventures and Silicon Valley Bank. The new investment gives Ketch a total of $43 million raised since the company came out of stealth earlier this year.

In 2020, Ketch introduced its data control platform for programmatic privacy, governance and security. The platform automates data control and consent management so that consumers’ privacy preferences are honored and implemented.

Enterprises are looking for a way to meet consumer needs and accommodate their rights and consents. At the same time, companies want data to fuel their growth and gain the trust of consumers, Chavez told TechCrunch.

There is also a matter of security: much industry effort goes into fighting ransomware and malware, but Chavez feels the bigger opportunity is to bring security to the data wherever it lies. Once the infrastructure for data control is in place, it needs to operate at the level of individual cells and rows, he said.

“If someone wants to be deleted, there is a challenge in finding your specific row of data,” he added. “That is an exercise in data control.”

Ketch’s customer base grew by more than 300% since its March Series A announcement, and the new funding will go toward expanding its sales and go-to-market teams, Chavez said.

Ketch app. Image Credits: Ketch

This year, the company launched Ketch OTC, a free-to-use privacy tool that streamlines all aspects of privacy so that enterprise compliance programs build trust and reduce friction. Customer growth through OTC increased five times in six months. More recently, Qonsent, which is developing a consent user experience, began using Ketch’s APIs and infrastructure, Chavez said.

When looking for strategic partners, Chavez and Vaidya wanted to have people around the table who have a deep context on what they were doing and could provide advice as they built out their products. They found that in Acrew founding partner Theresia Gouw, whom Chavez referred to as “the OG of privacy and security.”

Gouw has been investing in security and privacy for over 20 years and says Ketch is flipping the data privacy and security model on its head by putting it in the hands of developers. When she saw more people working from home and more data breaches, she saw an opportunity to increase and double down on Acrew’s initial investment.

She explained that Ketch is differentiating itself from competitors by taking data privacy and security and tying it to the data itself to empower software developers. With the OTC tool, similar to putting locks and cameras on a home, developers can download the API and attach rules to all of a user’s data.

“The magic of Ketch is that you can take the security and governance rules and embed them with the software and the piece of data,” Gouw added.

#acrew-capital, #advertising-tech, #api, #cloud-computing, #crv, #data-protection, #data-security, #enterprise, #funding, #ketch, #privacy, #recent-funding, #ridge-ventures, #silicon-valley-bank, #software-developers, #startups, #superset, #tc, #theresia-gouw, #tom-chavez

Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts where it could not verify the users were not children.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, putting an age cap on the ability of children to consent to their data being used. The age limit varies per EU Member State but cannot be set below 13 years old (the regulation’s default is 16, and some EU countries keep it there).

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data — such as flipping the default settings to make their accounts private by default, and restricting certain features that encourage interaction with other TikTok users to those aged over 16.

On international data transfers, TikTok claims to use “approved methods”. However, the picture is rather more complicated than its statement implies: transfers of Europeans’ data to China are complicated by there being no EU data adequacy agreement in place with China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the CJEU last year which invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely they essentially said all international transfers to third countries must be assessed on a case-by-case basis and, where a DPA has concerns, it must step in and suspend those non-secure data flows.

The CJEU ruling means just the fact of using a mechanism like SCCs doesn’t mean anything on its own re: the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be pro-active about assessing risky data flows.

Final guidance put out by the European Data Protection Board, earlier this year, provides details on the so-called ‘special measures’ that a data controller may be able to apply in order to increase the level of protection around their specific transfer so the information can be legally taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok would be able to apply such a fix, given how its platform and algorithms are continuously mining users’ data to customize the content they see and in order to keep them engaged with TikTok’s ad platform.

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules (and only doing so years after the first complaints had been lodged).

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year GDPR technically began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means that the new TikTok probes join the back of a much criticized bottleneck. And a decision on these probes isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, from this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.

#apps, #articles, #china, #communist-party, #data-controller, #data-protection, #data-protection-commission, #data-protection-law, #data-security, #encryption, #europe, #european-data-protection-board, #european-union, #general-data-protection-regulation, #ireland, #italy, #max-schrems, #noyb, #personal-data, #privacy, #social, #social-media, #spokesperson, #tiktok, #united-kingdom, #united-states

What China’s new data privacy law means for US tech firms

China enacted a sweeping new data privacy law on August 20 that will dramatically impact how tech companies can operate in the country. Officially called the Personal Information Protection Law of the People’s Republic of China (PIPL), the law is the first national data privacy statute passed in China.

Modeled after the European Union’s General Data Protection Regulation, the PIPL imposes protections and restrictions on data collection and transfer that companies both inside and outside of China will need to address. It is particularly focused on apps using personal information to target consumers or offer them different prices on products and services, and preventing the transfer of personal information to other countries with fewer protections for security.

The PIPL, slated to take effect on November 1, 2021, does not give companies a lot of time to prepare. Those that already follow GDPR practices, particularly if they’ve implemented it globally, will have an easier time complying with China’s new requirements. But firms that have not implemented GDPR practices will need to consider adopting a similar approach. In addition, U.S. companies will need to consider the new restrictions on the transfer of personal information from China to the U.S.

Implementation and compliance with the PIPL is a much more significant task for companies that have not implemented GDPR principles.

Here’s a deep dive into the PIPL and what it means for tech firms:

New data handling requirements

The PIPL introduces perhaps the most stringent set of requirements and protections for data privacy in the world (this includes special requirements relating to processing personal information by governmental agencies that will not be addressed here). The law broadly relates to all kinds of information, recorded by electronic or other means, related to identified or identifiable natural persons, but excludes anonymized information.

The following are some of the key new requirements for handling people’s personal information in China that will affect tech businesses:

Extra-territorial application of the China law

Historically, Chinese regulations have only applied to activities inside the country. The PIPL is similar in applying the law to personal information handling activities within Chinese borders. However, similar to GDPR, it also expands its application to the handling of personal information outside China if the following conditions are met:

  • Where the purpose is to provide products or services to people inside China.
  • Where the activities of people inside China are analyzed or assessed.
  • Other circumstances provided in laws or administrative regulations.

For example, if you are a U.S.-based company selling products to consumers in China, you may be subject to the China data privacy law even if you do not have a facility or operations there.

Data handling principles

The PIPL introduces principles of transparency, purpose and data minimization: Companies can only collect personal information for a clear, reasonable and disclosed purpose, and to the smallest scope for realizing the purpose, and retain the data only for the period necessary to fulfill that purpose. Any information handler is also required to ensure the accuracy and completeness of the data it handles to avoid any negative impact on personal rights and interests.

#asia, #china, #column, #computer-security, #data-protection, #data-security, #ec-china, #ec-column, #ec-east-asia, #encryption, #european-union, #general-data-protection-regulation, #government, #internet, #iphone, #privacy, #tc

After years of inaction against adtech, UK’s ICO calls for browser-level controls to fix ‘cookie fatigue’

In the latest quasi-throwback toward ‘do not track‘, the UK’s data protection chief has come out in favor of a browser- and/or device-level setting to allow Internet users to set “lasting” cookie preferences — suggesting this as a fix for the barrage of consent pop-ups that continues to infest websites in the region.

European web users digesting this development in an otherwise monotonously unchanging regulatory saga should be forgiven — not only for any sense of déjà vu they may experience — but also for wondering if they haven’t been mocked/gaslit quite enough already where cookie consent is concerned.

Last month, UK digital minister Oliver Dowden took aim at what he dubbed an “endless” parade of cookie pop-ups — suggesting the government is eyeing watering down consent requirements around web tracking as ministers consider how to diverge from European Union data protection standards, post-Brexit. (He’s slated to present the full sweep of the government’s data ‘reform’ plans later this month so watch this space.)

Today the UK’s outgoing information commissioner, Elizabeth Denham, stepped into the fray to urge her counterparts in G7 countries to knock heads together and coalesce around the idea of letting web users express generic privacy preferences at the browser/app/device level, rather than having to do it through pop-ups every time they visit a website.

In a statement announcing “an idea” she will present this week during a virtual meeting of fellow G7 data protection and privacy authorities — less pithily described in the press release as being “on how to improve the current cookie consent mechanism, making web browsing smoother and more business friendly while better protecting personal data” — Denham said: “I often hear people say they are tired of having to engage with so many cookie pop-ups. That fatigue is leading to people giving more personal data than they would like.

“The cookie mechanism is also far from ideal for businesses and other organisations running websites, as it is costly and it can lead to poor user experience. While I expect businesses to comply with current laws, my office is encouraging international collaboration to bring practical solutions in this area.”

“There are nearly two billion websites out there taking account of the world’s privacy preferences. No single country can tackle this issue alone. That is why I am calling on my G7 colleagues to use our convening power. Together we can engage with technology firms and standards organisations to develop a coordinated approach to this challenge,” she added.

Contacted for more on this “idea”, an ICO spokeswoman reshuffled the words thusly: “Instead of trying to effect change through nearly 2 billion websites, the idea is that legislators and regulators could shift their attention to the browsers, applications and devices through which users access the web.

“In place of click-through consent at a website level, users could express lasting, generic privacy preferences through browsers, software applications and device settings – enabling them to set and update preferences at a frequency of their choosing rather than on each website they visit.”

Of course a browser-baked ‘Do not track’ (DNT) signal is not a new idea. It’s around a decade old at this point. Indeed, it could be called the idea that can’t die because it’s never truly lived — as earlier attempts at embedding user privacy preferences into browser settings were scuppered by lack of industry support.
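For the unfamiliar, the decade-old signal is mechanically simple: browsers with DNT enabled attach a `DNT: 1` header to every request, and it is up to each site whether to honor it — which is precisely where industry support collapsed. A minimal, purely illustrative sketch of a server-side check (the helper name and the plain header map are our own, not from any cited implementation):

```javascript
// Illustrative only: how a server might read the legacy DNT signal.
// Browsers with "Do Not Track" enabled send the request header "DNT: 1".

function dntRequested(headers) {
  // HTTP header names are case-insensitive; normalize before lookup.
  const entry = Object.entries(headers).find(
    ([name]) => name.toLowerCase() === 'dnt'
  );
  return entry !== undefined && entry[1] === '1';
}

// A request with DNT enabled vs. one without:
console.log(dntRequested({ DNT: '1' }));            // true
console.log(dntRequested({ Accept: 'text/html' })); // false
```

The whole scheme stood or fell on servers voluntarily running a check like this — which is why, with no enforcement behind it, most never did.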

However the approach Denham is advocating, vis-a-vis “lasting” preferences, may in fact be rather different to DNT — given her call for fellow regulators to engage with the tech industry, and its “standards organizations”, and come up with “practical” and “business friendly” solutions to the regional Internet’s cookie pop-up problem.

It’s not clear what consensus — practical or, er, simply pro-industry — might result from this call. If anything.

Indeed, today’s press release may be nothing more than Denham trying to raise her own profile since she’s on the cusp of stepping out of the information commissioner’s chair. (Never waste a good international networking opportunity and all that — her counterparts in the US, Canada, Japan, France, Germany and Italy are scheduled for a virtual natter today and tomorrow where she implies she’ll try to engage them with her big idea).

Her UK replacement, meanwhile, is already lined up. So anything Denham personally champions right now, at the end of her ICO chapter, may have a very brief shelf life — unless she’s set to parachute into a comparable role at another G7 caliber data protection authority.

Nor is Denham the first person to make a revived pitch for a rethink on cookie consent mechanisms — even in recent years.

Last October, for example, a US-centric tech-publisher coalition came out with what they called Global Privacy Control (GPC) — aiming to build momentum for a browser-level pro-privacy signal to stop the sale of personal data, geared toward California’s Consumer Privacy Act (CCPA), though pitched as something that could have wider utility for Internet users.

By January this year they announced 40M+ users were making use of a browser or extension that supports GPC — along with a clutch of big name publishers signed up to honor it. But it’s fair to say its global impact so far remains limited. 
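Technically, GPC works much like its predecessor: participating browsers and extensions send a `Sec-GPC: 1` request header (and expose the preference to page scripts as `navigator.globalPrivacyControl`), which honoring sites treat as an opt-out of data sale. A hypothetical server-side check, again assuming a plain header map rather than any particular framework:

```javascript
// Illustrative sketch of detecting the Global Privacy Control signal.
// Per the GPC proposal, opted-out browsers send "Sec-GPC: 1";
// in page scripts the same preference surfaces (where supported) as
// navigator.globalPrivacyControl === true.

function gpcOptOut(headers) {
  for (const [name, value] of Object.entries(headers)) {
    if (name.toLowerCase() === 'sec-gpc' && value === '1') {
      return true;
    }
  }
  return false;
}

console.log(gpcOptOut({ 'Sec-GPC': '1' })); // true
console.log(gpcOptOut({}));                 // false
```

Unlike DNT, the signal is tied to a concrete legal hook — CCPA’s “do not sell” right — which is what gives a publisher a reason to actually branch on it.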

More recently, European privacy group noyb published a technical proposal for a European-centric automated browser-level signal that would let regional users configure advanced consent choices — enabling the more granular controls it said would be needed to fully mesh with the EU’s more comprehensive (vs CCPA) legal framework around data protection.

The proposal, for which noyb worked with the Sustainable Computing Lab at the Vienna University of Economics and Business, is called Advanced Data Protection Control (ADPC). And noyb has called on the EU to legislate for such a mechanism — suggesting there’s a window of opportunity as lawmakers there are also keen to find ways to reduce cookie fatigue (a stated aim for the still-in-train reform of the ePrivacy rules, for example).

So there are some concrete examples of what practical, less fatiguing yet still pro-privacy consent mechanisms might look like to lend a little more color to Denham’s ‘idea’ — although her remarks today don’t reference any such existing mechanisms or proposals.

(When we asked the ICO for more details on what she’s advocating for, its spokeswoman didn’t cite any specific technical proposals or implementations, historical or contemporary, either, saying only: “By working together, the G7 data protection authorities could have an outsized impact in stimulating the development of technological solutions to the cookie consent problem.”)

So Denham’s call to the G7 does seem rather low on substance vs profile-raising noise.

In any case, the really big elephant in the room here is the lack of enforcement around cookie consent breaches — including by the ICO.

Add to that, there’s the now very pressing question of how exactly the UK will ‘reform’ domestic law in this area (post-Brexit) — which makes the timing of Denham’s call look, well, interestingly opportune. (And difficult to interpret as anything other than opportunistically opaque at this point.)

The adtech industry will of course be watching developments in the UK with interest — and would surely be cheering from the rooftops if domestic data protection ‘reform’ results in amendments to UK rules that allow the vast majority of websites to avoid having to ask Brits for permission to process their personal data, say by opting them into tracking by default (under the guise of ‘fixing’ cookie friction and cookie fatigue for them).

That would certainly be mission accomplished after all these years of cookie-fatigue-generating-cookie-consent-non-compliance by surveillance capitalism’s industrial data complex.

It’s not yet clear which way the UK government will jump — but eyebrows should be raised on reading the ICO write today that it expects compliance with (current) UK law, when it has so roundly failed to tackle the adtech industry’s role in cynically sicking up said cookie fatigue by taking no action against such systemic breaches.

The bald fact is that the ICO has — for years — avoided tackling adtech abuse of data protection, despite acknowledging publicly that the sector is wildly out of control.

Instead, it has opted for a cringing ‘process of engagement’ (read: appeasement) that has condemned UK Internet users to cookie pop-up hell.

This is why the regulator is being sued for inaction — after it closed a long-standing complaint against the security abuse of people’s data in real-time bidding ad auctions with nothing to show for it… So, yes, you can be forgiven for feeling gaslit by Denham’s call for action on cookie fatigue following the ICO’s repeat inaction on the causes of cookie fatigue…

Not that the ICO is alone on that front, however.

There has been a fairly widespread failure by EU regulators to tackle systematic abuse of the bloc’s data protection rules by the adtech sector — with a number of complaints (such as this one against the IAB Europe’s self-styled ‘transparency and consent framework’) still working, painstakingly, through the various labyrinthine regulatory processes.

France’s CNIL has probably been the most active in this area — last year slapping Amazon and Google with fines of $42M and $120M for dropping tracking cookies without consent, for example. (And before you accuse CNIL of being ‘anti-American’, it has also gone after domestic adtech.)

But elsewhere — notably Ireland, where many adtech giants are regionally headquartered — the lack of enforcement against the sector has allowed for cynical, manipulative and/or meaningless consent pop-ups to proliferate as the dysfunctional ‘norm’, while investigations have failed to progress and EU citizens have been forced to become accustomed, not to regulatory closure (or indeed rapture), but to an existentially endless consent experience that’s now being (re)branded as ‘cookie fatigue’.

Yes, even with the EU’s General Data Protection Regulation (GDPR) coming into application in 2018 and beefing up (in theory) consent standards.

This is why the privacy campaign group noyb is now lodging scores of complaints against cookie consent breaches — to try to force EU regulators to actually enforce the law in this area, even as it also finds time to put up a practical technical proposal that could help shrink cookie fatigue without undermining data protection standards. 

It’s a shining example of action that has yet to inspire the lion’s share of the EU’s actual regulators to act on cookies. The tl;dr is that EU citizens are still waiting for the cookie consent reckoning — even if there is now a bit of high level talk about the need for ‘something to be done’ about all these tedious pop-ups.

The problem is that while GDPR certainly cranked up the legal risk on paper, without proper enforcement it’s just a paper tiger. And the pushing around of lots of paper is very tedious, clearly. 

Most cookie pop-ups you’ll see in the EU are thus essentially privacy theatre; at the very least they’re unnecessarily irritating because they create ongoing friction for web users who must constantly respond to nags for their data (typically to repeatedly try to deny access if they can actually find a ‘reject all’ setting).

But — even worse — many of these pervasive pop-ups are actively undermining the law (as a number of studies have shown) because the vast majority do not meet the legal standard for consent.

So the cookie consent/fatigue narrative is actually a story of faux compliance enabled by an enforcement vacuum — one that’s now also encouraging the watering down of privacy standards as a result of so much unpunished flouting of the law.

There is a lesson here, surely.

‘Faux consent’ pop-ups that you can easily stumble across when surfing the ‘ad-supported’ Internet in Europe include those failing to provide users with clear information about how their data will be used; or not offering people a free choice to reject tracking without being penalized (such as with no/limited access to the content they’re trying to access), or at least giving the impression that accepting is a requirement to access said content (dark pattern!); and/or otherwise manipulating a person’s choice by making it super simple to accept tracking and far, far, far more tedious to deny.

You can also still sometimes find cookie notices that don’t offer users any choice at all — and just pop up to inform that ‘by continuing to browse you consent to your data being processed’ — which, unless the cookies in question are literally essential for provision of the webpage, is basically illegal. (Europe’s top court made it abundantly clear in 2019 that active consent is a requirement for non-essential cookies.)
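In implementation terms, the court’s bar is simple: no non-essential cookie may be set before an affirmative act of consent. A hypothetical sketch of what a compliant gate looks like (the function and option names here are our own invention, not drawn from any specific consent-management platform):

```javascript
// Illustrative: gate non-essential cookies behind explicit, active consent,
// per the CJEU's 2019 ruling. Names are hypothetical, not a real CMP API.

const cookieJar = [];

function setCookie(name, value, { essential = false, consented = false } = {}) {
  // Strictly necessary cookies (e.g. a session ID) may be set without
  // consent; everything else requires a prior affirmative opt-in —
  // never 'consent by continuing to browse'.
  if (!essential && !consented) {
    return false;
  }
  cookieJar.push(`${name}=${value}`);
  return true;
}

console.log(setCookie('session', 'abc', { essential: true })); // true
console.log(setCookie('ad_id', 'xyz'));                        // false
console.log(setCookie('ad_id', 'xyz', { consented: true }));   // true
```

The ‘by continuing to browse’ notices described above fail precisely because they treat `consented` as true by default — the opposite of what the ruling requires.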

Nonetheless, to the untrained eye — and sadly there are a lot of them where cookie consent notices are concerned — it can look like it’s Europe’s data protection law that’s the ass because it seemingly demands all these meaningless ‘consent’ pop-ups, which just gloss over an ongoing background data grab anyway.

The truth is regulators should have slapped down these manipulative dark patterns years ago.

The problem now is that regulatory failure is encouraging political posturing — and, in a twisting double-back throw by the ICO, regulatory posturing around the idea that some newfangled mechanism is what’s really needed to remove all this universally inconvenient ‘friction’.

An idea like noyb’s ADPC does indeed look very useful in ironing out the widespread operational wrinkles wrapping the EU’s cookie consent rules. But when it’s the ICO suggesting a quick fix after the regulatory authority has failed so spectacularly over the long duration of complaints around this issue you’ll have to forgive us for being sceptical.

In such a context the notion of ‘cookie fatigue’ looks like it’s being suspiciously trumped up; fixed on as a convenient scapegoat to rechannel consumer frustration with hated online tracking toward high privacy standards — and away from the commercial data-pipes that demand all these intrusive, tedious cookie pop-ups in the first place — whilst neatly aligning with the UK government’s post-Brexit political priorities on ‘data’.

Worse still: The whole farcical consent pantomime — which the adtech industry has aggressively engaged in to try to sustain a privacy-hostile business model in spite of beefed up European privacy laws — could be set to end in genuine tragedy for user rights if standards end up being slashed to appease the law mockers.

The target of regulatory ire and political anger should really be the systematic law-breaking that’s held back privacy-respecting innovation and non-tracking business models — by making it harder for businesses that don’t abuse people’s data to compete.

Governments and regulators should not be trying to dismantle the principle of consent itself. Yet — at least in the UK — that does now look horribly possible.

Laws like GDPR set high standards for consent which — if they were but robustly enforced — could lead to reform of highly problematic practices like behavioral advertising and the out-of-control scale of programmatic advertising.

Indeed, we should already be seeing privacy-respecting forms of advertising being the norm, not the alternative — free to scale.

Instead, thanks to widespread inaction against systematic adtech breaches, there has been little incentive for publishers to reform bad practices and end the irritating ‘consent charade’ — which keeps cookie pop-ups mushrooming forth, oftentimes with ridiculously lengthy lists of data-sharing ‘partners’ (i.e. if you do actually click through the dark patterns to try to understand what is this claimed ‘choice’ you’re being offered).

As well as being a criminal waste of web users’ time, we now have the prospect of attention-seeking, politically charged regulators deciding that all this ‘friction’ justifies giving data-mining giants carte blanche to torch user rights — if the intention is to fire up the G7 to extend a collective invite to the tech industry to come up with “practical” alternatives to asking people for their consent to track them — and all because authorities like the ICO have been too risk averse to actually defend users’ rights in the first place.

Dowden’s remarks last month suggest the UK government may be preparing to use cookie consent fatigue as convenient cover for watering down domestic data protection standards — at least if it can get away with the switcheroo.

Nothing in the ICO’s statement today suggests it would stand in the way of such a move.

Now that the UK is outside the EU, the UK government has said it believes it has an opportunity to deregulate domestic data protection — although it may find there are legal consequences for domestic businesses if it diverges too far from EU standards.

Denham’s call to the G7 naturally includes a few EU countries (the biggest economies in the bloc) but by targeting this group she’s also seeking to engage regulators further afield — in jurisdictions that currently lack a comprehensive data protection framework. So if the UK moves, cloaked in rhetoric of ‘Global Britain’, to water down its (EU-based) high domestic data protection standards it will be placing downward pressure on international aspirations in this area — as a counterweight to the EU’s geopolitical ambitions to drive global standards up to its level.

The risk, then, is a race to the bottom on privacy standards among Western democracies — at a time when awareness about the importance of online privacy, data protection and information security has actually never been higher.

Furthermore, any UK move to weaken data protection also risks putting pressure on the EU’s own high standards in this area — as the regional trajectory would be down not up. And that could, ultimately, give succour to forces inside the EU that lobby against its commitment to a charter of fundamental rights — by arguing such standards undermine the global competitiveness of European businesses.

So while cookies themselves — or indeed ‘cookie fatigue’ — may seem an irritatingly small concern, the stakes attached to this tug of war around people’s rights over what can happen to their personal data are very high indeed.

#advertising-tech, #amazon, #california, #canada, #cookie-consent-notices, #cookie-fatigue, #cookies, #data-protection, #data-protection-law, #data-security, #do-not-track, #elizabeth-denham, #europe, #european-union, #france, #g7, #general-data-protection-regulation, #germany, #google, #ireland, #italy, #japan, #noyb, #oliver-dowden, #online-privacy, #online-tracking, #privacy, #tc, #tracking, #uk-government, #united-kingdom, #united-states, #web-tracking

SEC fines brokerage firms over email hacks that exposed client data

The U.S. Securities and Exchange Commission has fined several brokerage firms a total of $750,000 for exposing the sensitive personally identifiable information of thousands of customers and clients after hackers took over employee email accounts.

A total of eight entities belonging to three companies have been sanctioned by the SEC, including Cetera (Advisor Networks, Investment Services, Financial Specialists, Advisors, and Investment Advisers), Cambridge Investment Research (Investment Research and Investment Research Advisors), and KMS Financial Services.

In a press release, the SEC announced that it had sanctioned the firms for failures in their cybersecurity policies and procedures that allowed hackers to gain unauthorized access to cloud-based email accounts, exposing the personal information of thousands of customers and clients at each firm.

In the case of Cetera, the SEC said that cloud-based email accounts of more than 60 employees were infiltrated by unauthorized third parties for more than three years, exposing at least 4,388 clients’ personal information.

The order states that none of the accounts featured the protections required by Cetera’s policies, and the SEC also charged two of the Cetera entities with sending breach notifications to clients containing “misleading language suggesting that the notifications were issued much sooner than they actually were after discovery of the incidents.”

The SEC’s order against Cambridge concludes that the personal information exposure of at least 2,177 Cambridge customers and clients was the result of lax cybersecurity practices at the firm. 

“Although Cambridge discovered the first email account takeover in January 2018, it failed to adopt and implement firm-wide enhanced security measures for cloud-based email accounts of its representatives until 2021, resulting in the exposure and potential exposure of additional customer and client records and information,” the SEC said. 

The order against KMS is similar; the SEC’s order states that the data of almost 5,000 customers and clients were exposed as a result of the company’s failure to adopt written policies and procedures requiring additional firm-wide security measures until May 2020. 

“Investment advisers and broker-dealers must fulfill their obligations concerning the protection of customer information,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “It is not enough to write a policy requiring enhanced security measures if those requirements are not implemented or are only partially implemented, especially in the face of known attacks.”

All of the parties agreed to resolve the charges and to not commit future violations of the charged provisions, without admitting or denying the SEC’s findings. As part of the settlements, Cetera will pay a penalty of $300,000, while Cambridge and KMS will pay fines of $250,000 and $200,000 respectively.  

Cambridge told TechCrunch that it does not comment on regulatory matters, but said it maintains a comprehensive information security group and procedures to ensure clients’ accounts are fully protected. Cetera and KMS have yet to respond.

This latest action by the SEC comes just weeks after the Commission ordered London-based publishing and education giant Pearson to pay a $1 million fine for misleading investors about a 2018 data breach at the company.

#chief, #computer-security, #data-breach, #data-security, #security

UK names John Edwards as its choice for next data protection chief as gov’t eyes watering down privacy standards

The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.

Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.

He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social network — accusing Facebook of not complying with the country’s privacy laws.

An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.

If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.

Dial back the clock five years and a prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge on data protection post-Brexit.

But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.

For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.

Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.

A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.

The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.

Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change he said he wants to make, which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.

Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.

“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.

The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world-class privacy standards and replace them with worse protections for data.

If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.

It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.

We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distraction rhetoric about ‘taking action against Big Tech’. But time will tell.

Data protection experts are already warning of a regulatory stooge.

The Telegraph, meanwhile, suggests Edwards is seen by the government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.

In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.

All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.

In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”

The lurking iceberg for the government is, of course, that if it wades in and rips up a carefully balanced, gold-standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its (or the market’s) choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.

You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…

UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.

Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.

The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.

This is because its current data adequacy deal with the bloc — which allows for EU citizens’ data to continue flowing freely to the UK — was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR. So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy. Meaning the deal could topple.

Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”.

Add to that, the adequacy deal is also the first with a baked-in sunset clause — meaning it will automatically expire in four years. So even if the Commission avoids taking proactive action over slipping privacy standards in the UK, there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.

The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.

Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia.

Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.

“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.

As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).

So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.

Everyone hates cookie banners, sure, but that’s a case for strengthening, not weakening, people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead, the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on citizens’ data.


#artificial-intelligence, #australia, #brazil, #colombia, #data-mining, #data-protection, #data-security, #digital-rights, #elizabeth-denham, #europe, #european-union, #facebook, #gdpr, #general-data-protection-regulation, #human-rights, #india, #indonesia, #john-edwards, #kenya, #korea, #matt-hancock, #new-zealand, #nhs, #oliver-dowden, #privacy, #singapore, #social-issues, #social-media, #uk-government, #united-kingdom, #united-states

Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cybersecurity attacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but are being locked in with vendors or have difficulty accessing the data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.

The company announced Wednesday a $200 million round of Series C funding to value Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV and strategic investment from Citi Ventures and CrowdStrike. The new capital infusion gives Cribl a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. He explains that the lake is where all of the data goes that doesn’t need to go into an existing storage solution. The pipelines send data to specific tools and then collect it, and whatever doesn’t fit goes back into the lake so companies can return to it later. Companies can keep the data for longer and more cost-effectively.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in its existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aim is to have a presence in every North American city and in Europe, to continue launching new products and growing the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big enterprise investor in software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk captures machine data and uses its systems to extract it, Cribl is doing something similar in directing the data where it needs to go, while also enabling companies to use multiple vendors and build apps on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”

#citi-ventures, #clint-sharp, #cloud, #computing, #cribl, #crowdstrike, #crv, #data-security, #datadog, #developer, #enterprise, #exabeam, #funding, #greylock, #information-technology, #ivp, #recent-funding, #redpoint-ventures, #scott-raney, #sequoia, #splunk, #startups, #storage-solution, #tc

Insider hacks to streamline your SOC 3 certification application

If you’re a tech company offering anyone a service, somewhere in your future is a security assessment giving you the seal of approval to manage clients’ data and operate on your devices. No one takes security lightly anymore. The business costs of cyberattacks have now hit an all-time high. Government bodies, companies and consumers need the assurance that the next software they download isn’t going to be an open door for hackers.

For good reason, security certifications like the SOC 3 really put you through the wringer. My company, Waydev, has just attained the SOC 3 certification, becoming one of the first development analytics tools to receive that accreditation. We learned so much from the process, we felt it was right to share our experience with others that might be daunted by the prospect.

As a non-tech founder, it was hard not only to navigate the process, but to appreciate its value. But by putting our business caps on, our team was able to optimize our approach and minimize the time and effort needed to achieve our goal. In doing so, we were granted SOC 3 compliance in two weeks, as opposed to the two months it takes some companies.

We also turned the assessment into an opportunity to better our product, align our internal teams, boost our brand and even launch partnerships.

So here’s our advice on how teams can smoothly reach an SOC 3 while simultaneously balancing workloads and minimizing disruption to users.

First, bring your teams on board

Because you can’t expect employees to stack these extra hours on top of their regular workdays, as a leader you have to accept — and communicate — that the speed of your output will inevitably decrease.

As a founder, you’ll be acting as captain steering a ship into that SOC 3 port, and you’ll need all members of your crew to join forces. This isn’t a job for a specially designated security team alone and will require deep involvement from your development and other teams, too. That might lead to internal resistance, as they still have a full-time job tending to your product and customers.

That’s why it’s so important to start by being crystal clear with your employees about what this process will mean for their work lives. At the same time, help them embrace the true benefits that will arise: SOC 3 will immediately raise your brand’s appeal and likely bring in new customers as a result.

Each employee will also come out the other end with well-honed cybersecurity skills — they’ll have a deep understanding of potential cyber threats to the company, and all security initiatives will carry a far lighter burden. There’s also the sense of pride and fulfillment that comes with having an indisputable edge over your competitors.

#column, #computer-security, #cryptography, #cyberwarfare, #data-security, #ec-column, #ec-cybersecurity, #ec-how-to, #security, #security-tools, #startups

InfoSum raises $65M Series B as organizations embrace secure data sharing

InfoSum, a London-based startup that provides a decentralized platform for secure data sharing between organizations, has secured a $65 million Series B funding round led by Chrysalis Investments.

The investment comes less than a year after InfoSum closed a $15.1 million Series A round co-led by Upfront Ventures and IA Ventures. Since then, the data privacy startup has tripled its revenue, doubled its employee base, and secured more than fifty new customers, including AT&T, Disney, Omnicom and Merkle.

Its growth was boosted by businesses that are increasingly focused on data privacy, largely as a result of the mass shift to remote working and cloud-based collaboration necessitated by the pandemic. InfoSum’s data collaboration platform uses patented technology to connect customer records between and amongst companies, without moving or sharing data. It helps organizations to alleviate security concerns, according to the startup, and is compliant with all current privacy laws, including GDPR.

The platform was bolstered earlier this year with the launch of InfoSum Bridge, a product which it claims significantly expands the customer identity linking capabilities of its platform. It is designed to connect advertising identifiers along with its own “bunkered” data sets to better facilitate ad targeting based on first-party data.

“The technology that enables companies to safely and securely compare customer data is thankfully entering a new phase, driven by privacy-conscious consumers and companies focused on value and control. InfoSum is proud to be leading the way,” said Brian Lesser, chairman and CEO of InfoSum. “Companies are looking for solutions to help resolve the existing friction and inefficiencies around data collaboration, and InfoSum is the company to drive this growth forward.”

The company, which says it is poised for “exponential growth” in 2021 as businesses continue to embrace privacy-focused tools and software, will use the newly raised investment to accelerate hiring across every aspect of its business, expand into new regions, and further the development of its platform.

Nick Halstead, who previously founded and led big data startup DataSift, founded InfoSum (then called CognitiveLogic) in 2015 with a vision to connect the world’s data without ever sharing it. The company currently has 80 employees spread across offices in the U.S., the U.K., and Germany.

#articles, #att, #chrysalis, #cloud-computing, #data-security, #datasift, #disney, #funding, #general-data-protection-regulation, #germany, #human-rights, #ia-ventures, #identity-management, #infosum, #london, #merkle, #nick-halstead, #omnicom, #open-data-institute, #privacy, #security, #social-issues, #united-kingdom, #united-states, #upfront-ventures

Stop using Zoom, Hamburg’s DPA warns state government

Hamburg’s state government has been formally warned against using Zoom over data protection concerns.

The German state’s data protection agency (DPA) took the step of issuing a public warning yesterday, writing in a press release that the Senate Chancellery’s use of the popular videoconferencing tool violates the European Union’s General Data Protection Regulation (GDPR), since user data is transferred to the US for processing.

The DPA’s concern follows a landmark ruling (Schrems II) by Europe’s top court last summer which invalidated a flagship data transfer arrangement between the EU and the US (Privacy Shield), finding US surveillance law to be incompatible with EU privacy rights.

The fallout from Schrems II has been slow to manifest — beyond an instant blanket of legal uncertainty. However, a number of European DPAs are now investigating the use of US-based digital services because of the data transfer issue, and in some instances publicly warning against the use of mainstream US tools like Facebook and Zoom because user data cannot be adequately safeguarded when it’s taken over the pond.

German agencies are among the most proactive in this respect. But the EU’s data protection supervisor is also investigating the bloc’s use of cloud services from US giants Amazon and Microsoft over the same data transfer concern.

At the same time, negotiations between the European Commission and the Biden administration to seek a replacement data transfer deal remain ongoing. However, EU lawmakers have repeatedly warned against any quick fix — saying reform of US surveillance law is likely required before there can be a revived Privacy Shield. And as the legal limbo continues, a growing number of public bodies in Europe are facing pressure to ditch US-based services in favor of compliant local alternatives.

In the Hamburg case, the DPA says it took the step of issuing the Senate Chancellery with a public warning after the body did not provide an adequate response to concerns raised earlier.

The agency asserts that use of Zoom by the public body does not comply with the GDPR’s requirement for a valid legal basis for processing personal data, writing: “The documents submitted by the Senate Chancellery on the use of Zoom show that [GDPR] standards are not being adhered to.”

The DPA initiated a formal procedure via a hearing on June 17, 2021, but says the Senate Chancellery failed to stop using the videoconferencing tool, nor did it provide any additional documents or arguments to demonstrate compliant usage. Hence the DPA took the step of a formal warning, under Article 58(2)(a) of the GDPR.

In a statement, Ulrich Kühn, the acting Hamburg commissioner for data protection and freedom of information, dubbed it “incomprehensible” that the regional body was continuing to flout EU law in order to use Zoom — pointing out that a local alternative, provided by the German company Dataport (which supplies software to a number of state, regional and local government bodies) is readily available.

In the statement [translated with Google Translate], Kühn said: “Public bodies are particularly bound to comply with the law. It is therefore more than regrettable that such a formal step had to be taken. At the [Senate Chancellery of the Free and Hanseatic City of Hamburg], all employees have access to a tried and tested video conference tool that is unproblematic with regard to third-country transmission. As the central service provider, Dataport also provides additional video conference systems in its own data centers. These are used successfully in other regions such as Schleswig-Holstein. It is therefore incomprehensible why the Senate Chancellery insists on an additional and legally highly problematic system.”

We’ve reached out to the Hamburg DPA and Senate Chancellery with questions.

Zoom has also been contacted for comment.

#data-protection, #data-security, #dataport, #digital-rights, #eu-us-privacy-shield, #europe, #european-commission, #european-union, #general-data-protection-regulation, #government, #hamburg, #personal-data, #privacy, #schrems-ii, #surveillance-law, #united-states, #video-conferencing, #zoom

Pearson to pay $1M fine for misleading investors about 2018 data breach

Pearson, a London-based publishing and education giant that provides software to schools and universities has agreed to pay $1 million to settle charges that it misled investors about a 2018 data breach resulting in the theft of millions of student records.

The U.S. Securities and Exchange Commission announced the settlement on Monday after the agency found that Pearson made “misleading statements and omissions” about its 2018 data breach, which saw millions of student usernames and scrambled passwords stolen, along with the administrator login credentials of 13,000 school, district and university customer accounts.

The agency said that in Pearson’s semi-annual review filed in July 2019, the company referred to the incident as a “hypothetical risk,” even after the data breach had happened. Similarly, in a statement that same month, Pearson said the breach may have included dates of birth and email addresses, when it knew that such records had been stolen, according to the SEC.

Pearson also said it had “strict protections” in place, when in fact it took the company six months to patch the vulnerability after being notified.

“As the order finds, Pearson opted not to disclose this breach to investors until it was contacted by the media, and even then Pearson understated the nature and scope of the incident, and overstated the company’s data protections,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “As public companies face the growing threat of cyber intrusions, they must provide accurate information to investors about material cyber incidents.”

While Pearson did not admit wrongdoing as part of the settlement, Pearson agreed to pay a $1 million penalty — a small fraction of the $489 million in pre-tax profits that the company raked in last year.

A Pearson spokesperson told TechCrunch: “We’re pleased to resolve this matter with the SEC. We also appreciate the work of the FBI and the Justice Department to identify and charge those responsible for a global cyberattack that affected Pearson and many other companies and industries, including at least one government agency.”

Pearson said the breach related to its AIMSweb1.0 web-based software for entering and tracking students’ academic performance, which it retired in July 2019. “Pearson continues to enhance its cybersecurity efforts to minimize the risk of cyberattacks in an ever-changing threat landscape,” the spokesperson added.

#articles, #computer-security, #cyberattack, #cybercrime, #data-breach, #data-security, #federal-bureau-of-investigation, #pearson, #security, #u-s-securities-and-exchange-commission

Baffle lands $20M Series B to simplify data-centric encryption

California-based Baffle, a startup that aims to prevent data breaches by keeping data encrypted from production through processing, has raised $20 million in Series B funding.

Baffle was founded in 2015 to help thwart the increasing threats to enterprise assets in public and private clouds. Unlike many solutions that only encrypt data in transit and at rest, Baffle’s solution keeps data encrypted while it is being processed by databases and applications, through a “security mesh” that de-identifies sensitive data and that the company claims has no performance impact for customers.

The startup says its goal is to make data breaches “irrelevant” by efficiently encrypting data wherever it may be, so that even if there is a security breach, the data will be unavailable and unusable by hackers.
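Baffle hasn’t published its internals, but one common building block for querying data that never sits in plaintext is deterministic tokenization: equal plaintexts map to equal tokens, so a database can index and match values it cannot read. A minimal sketch, with HMAC standing in for a real deterministic encryption scheme and all names hypothetical:

```python
import hmac
import hashlib

SECRET_KEY = b"demo-key"  # in practice, held by an encryption proxy, never the database

def equality_token(value: str) -> str:
    """Deterministic token: equal plaintexts yield equal tokens,
    so the database can index and match without seeing plaintext."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# The "database" stores only tokens, never raw SSNs.
encrypted_db = {equality_token("123-45-6789"): "customer-42"}

# An equality query is tokenized the same way before it reaches the database,
# so a breach of the database exposes tokens, not identifiers.
query = equality_token("123-45-6789")
print(encrypted_db.get(query))
```

A real data-centric design would layer randomized encryption on top for fields that never need matching, since deterministic tokens leak equality patterns.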

“Most encryption is misapplied, and quite frankly, doesn’t do anything to protect your data,” the startup claims. “The protection measures that are most commonly used do nothing to protect you against modern hacks and breaches.”

Baffle supports all major cloud platforms, including AWS, Google Cloud and Microsoft Azure, and it’s currently used to protect more than 100 billion records in financial services, healthcare, retail, industrial IoT, and government, according to the startup. The company claims it stores records belonging to the top 5 global financial services companies and five of the top 25 global companies.

“Securing IT infrastructure—networks, devices, databases, lakes and warehouses—is never complete. Constant change makes it impossible to adopt a zero trust security posture without protecting the data itself,” said Ameesh Divatia, co-founder and CEO of Baffle.

The startup’s Series B funding round, which comes more than three years after it closed $6 million in Series A financing, was led by new investor Celesta Capital with contributions from National Grid Partners, Lytical Ventures and Nepenthe Capital, and brings the startup’s total funding to date to $36.5 million.

Baffle, which says it has seen threefold revenue growth over the past year, tells TechCrunch that the funds will be used to help it grow to meet market demand and to invest further in product development. It also plans to double its headcount from 25 to 50 employees over the next 12 months.

“With this investment, we can meet market demand for data-centric cloud data protection that enables responsible digital information sharing and breaks the cycle of continuous data and privacy breaches,” Divatia added.


#cloud, #computer-security, #cryptography, #data-protection, #data-security, #encryption, #security

Employee talent predictor retrain.ai raised another $7M, adds Splunk as strategic investor

Automation will displace 85 million jobs while simultaneously creating 97 million new jobs by 2025, according to the World Economic Forum. Although that sounds like good news, the hard reality is that millions of people will have to retrain in the jobs of the future.

A number of startups are addressing this employee-skills problem, focusing on talent development, neuroscience-based assessments and prediction technologies for staffing. These include Pymetrics (raised $56.6M), Eightfold (raised $396.8M) and EmPath (raised $1M). But this sector is by no means done yet.

retrain.ai bills itself as a ‘Talent Intelligence Platform’ and it’s now closed an additional $7 million from its current investors Square Peg, Hetz Ventures, TechAviv, .406 Ventures and Schusterman Family Investments. It’s also now added Splunk Ventures as a strategic investor. The new round of funding takes its total raised to $20 million.

retrain.ai says it uses AI and machine learning to help governments and organizations retrain and upskill talent for the jobs of the future, enable diversity initiatives, and help employees and jobseekers manage their careers.
 
Dr. Shay David, Co-Founder and CEO of retrain.ai said: “We are thrilled to have Splunk Ventures join us on this exciting journey as we use the power of data to solve the widening skills gap in the global labor markets.”

The company says it helps companies tackle future workforce strategies by “analyzing millions of data sources to understand the demand and supply of skill sets.”
 
retrain.ai’s new funding will be used for U.S. expansion, hiring talent and product development.

#406-ventures, #artificial-intelligence, #computing, #data-security, #europe, #hetz, #information-technology, #machine-learning, #neuroscience, #software, #splunk, #square-peg, #system-administration, #tc, #united-states, #world-economic-forum

Siga secures $8.1M Series B to prevent cyberattacks on critical infrastructure

Siga OT Solutions, an Israeli cybersecurity startup that helps organizations secure their operations by monitoring the raw electric signals of critical industrial assets, has raised $8.1 million in Series B funding.

Siga says its SigaGuard technology, used by Israel’s critical water facilities and the New York Power Authority, is unique in that rather than monitoring the operational network, it uses machine learning and predictive analysis to “listen” to Level 0 signals. These are typically made up of components and sensors that receive electrical signals, rather than protocols or data packets that can be manipulated by hackers.

By monitoring Level 0, which Siga describes as the “richest and most reliable level of process data within any operational environment,” the company can detect cyberattacks on the most critical and vulnerable physical assets of national infrastructures. This, it claims, ensures operational resiliency even when hackers are successful in manipulating the logic of industrial control system (ICS) controllers.
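Siga’s models are proprietary, but the underlying idea of flagging when a raw Level 0 signal drifts from its learned baseline can be sketched with a rolling z-score over recent samples (window, threshold and signal values are all illustrative):

```python
from collections import deque
from statistics import mean, stdev

def level0_monitor(samples, window=20, threshold=4.0):
    """Flag indices where a raw signal deviates sharply from its
    recent baseline, a stand-in for learned Level 0 behavior."""
    history = deque(maxlen=window)
    alerts = []
    for i, v in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                alerts.append(i)
        history.append(v)
    return alerts

# A steady 4-20 mA style sensor reading with one injected spike.
signal = [12.0 + 0.01 * (i % 3) for i in range(50)]
signal[40] = 18.5  # e.g. a manipulated actuator signal
print(level0_monitor(signal))  # -> [40]
```

Because the check runs on the electrical signal itself rather than on network packets, an attacker who compromises the control logic upstream still cannot suppress the alert.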

Amir Samoiloff, co-founder and CEO of Siga, says: “Level 0 is becoming the major axis in the resilience and integrity of critical national infrastructures worldwide and securing this level will become a major element in control systems in the coming years.”

The company’s latest round of funding — led by PureTerra Ventures, with investment from Israeli venture fund SIBF, Moore Capital, and Phoenix Contact — comes amid an escalation in attacks against operational infrastructure. Israel’s water infrastructure was hit by three known cyberattacks in 2020 and these were followed by an attack on the water system of a city in Florida that saw hackers briefly increase the amount of sodium hydroxide in Oldsmar’s water treatment system. 

The $8.1 million investment lands three years after the startup secured $3.5 million in Series A funding. The company said it will use the funding to accelerate its sales and strategic collaborations internationally, with a focus on North America, Europe, Asia, and the United Arab Emirates. 


#articles, #asia, #computer-security, #cryptography, #cyberattack, #cybercrime, #cybersecurity-startup, #cyberwarfare, #data-security, #energy, #europe, #florida, #israel, #machine-learning, #north-america, #nozomi-networks, #phoenix, #ransomware, #security, #united-arab-emirates

EU hits Amazon with record-breaking $887M GDPR fine over data misuse

Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887m) GDPR fine over the way it uses customer data for targeted advertising purposes.

Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”

“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.

“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”

The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, which claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.

La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”

“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.

The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.

The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.

#amazon, #apple, #big-tech, #companies, #computing, #data-protection, #data-security, #europe, #european-commission, #facebook, #general-data-protection-regulation, #google, #policy, #privacy, #spokesperson, #tc, #u-s-securities-and-exchange-commission

Financial firms should leverage machine learning to make anomaly detection easier

Anomaly detection is one of the more difficult and underserved operational areas in the asset-servicing sector of financial institutions. Broadly speaking, a true anomaly is one that deviates from the norm of the expected or the familiar. Anomalies can be the result of incompetence, maliciousness, system errors, accidents or the product of shifts in the underlying structure of day-to-day processes.

For the financial services industry, detecting anomalies is critical, as they may be indicative of illegal activities such as fraud, identity theft, network intrusion, account takeover or money laundering, which may result in undesired outcomes for both the institution and the individual.


Detecting outlier data, or anomalies, against historic data patterns and trends can enrich a financial institution’s operational team by increasing its understanding and preparedness.

The challenge of detecting anomalies

Anomaly detection presents a unique challenge for a variety of reasons. First and foremost, the financial services industry has seen an increase in the volume and complexity of data in recent years. In addition, a large emphasis has been placed on the quality of data, turning it into a way to measure the health of an institution.

To make matters more complicated, anomaly detection requires the prediction of something that has not been seen before or prepared for. The increase in data and the fact that it is constantly changing exacerbates the challenge further.

Leveraging machine learning

There are different ways to address the challenge of anomaly detection, including supervised and unsupervised learning.
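As a concrete example of the unsupervised side, one robust technique is the modified z-score based on median absolute deviation (MAD), which scores each observation against a baseline that outliers barely distort. A minimal sketch (the 3.5 threshold is a common rule of thumb, not a universal constant):

```python
from statistics import median

def mad_outliers(values, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold.
    MAD is robust: outliers barely shift the baseline they are scored against."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 scales MAD to be roughly comparable to a standard deviation
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# Daily settlement amounts with one suspicious transfer.
amounts = [1020, 980, 1005, 995, 1010, 990, 250_000, 1000]
print(mad_outliers(amounts))  # -> [250000]
```

A mean-and-standard-deviation version of the same check would be dragged toward the $250,000 transfer it is trying to flag, which is why robust baselines matter when the data is known to contain anomalies.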

#anomaly-detection, #artificial-intelligence, #artificial-neural-networks, #column, #data-security, #ec-column, #ec-enterprise-applications, #ec-fintech, #finance, #machine-learning, #startups, #unsupervised-learning

Noetic Cyber emerges from stealth with $15M led by Energy Impact Partners

Noetic Cyber, a cloud-based continuous cyber asset management and controls platform, has launched from stealth with a Series A funding round of $15 million led by Energy Impact Partners.

The round was also backed by Noetic’s existing investors, TenEleven Ventures and GlassWing Ventures, and brings the total amount of funds raised by the startup to $20 million following a $5 million seed round. Shawn Cherian, a partner at Energy Impact Partners, will join the Noetic board, while Niloofar Razi Howe, a senior operating partner at the investment firm, will join Noetic’s advisory board.

“Noetic is a true market disruptor, offering an innovative way to fix the cyber asset visibility problem — a growing and persistent challenge in today’s threat landscape,” said Howe.

The Massachusetts-based startup claims to be taking a new approach to the cyber asset management problem. Unlike traditional solutions, Noetic is not agent-based, instead using API aggregation and correlation to draw insights from multiple security and IT management tools.
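Noetic’s connectors aren’t public, but agentless API aggregation generally amounts to pulling asset inventories from each tool and correlating records that describe the same machine on a shared identifier. A toy correlation on MAC address (tool names and fields are hypothetical):

```python
def correlate_assets(*inventories):
    """Merge per-tool asset records that share a MAC address,
    so gaps in one tool's view are filled by another's."""
    merged = {}
    for source, records in inventories:
        for rec in records:
            asset = merged.setdefault(rec["mac"], {"sources": []})
            asset["sources"].append(source)
            for k, v in rec.items():
                asset.setdefault(k, v)  # first tool to report a field wins
    return merged

edr = ("edr", [{"mac": "aa:bb", "hostname": "web-01", "agent": True}])
cmdb = ("cmdb", [{"mac": "aa:bb", "owner": "platform-team"},
                 {"mac": "cc:dd", "hostname": "db-02"}])

assets = correlate_assets(edr, cmdb)
# "aa:bb" now combines hostname, agent status and owner;
# "cc:dd" appears in only one source, i.e. a coverage gap worth fixing.
print(assets["aa:bb"]["owner"], assets["cc:dd"]["sources"])
```

The per-asset `sources` list is what makes the orchestration step possible: an asset seen by the CMDB but missing from the EDR tool is exactly the kind of gap a remediation workflow can then close automatically.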

“What makes us different is that we’re putting orchestration and automation at the heart of the solution, so we’re not just showing security leaders that they have problems, but we’re helping them to fix them,” Paul Ayers, CEO and co-founder of Noetic Cyber tells TechCrunch.

Ayers was previously a top executive at PGP Corporation (acquired by Symantec for $370 million) and Vormetric (acquired by Thales for $400 million), and founded Noetic Cyber with Allen Roger and Allen Hadden, who have previously worked at cybersecurity vendors including Authentica, Raptor and Axent. All three were also integral to the development of Resilient Systems, which was acquired by IBM.

“The founding team’s experience in the security, orchestration, automation and response market gives us unique experience and insights to make automation a key pillar of the solution,” Ayers said. “Our model gives you the certainty to make automation possible, the goal is to find and fix problems continuously, getting assets back to a secure state.”

“The development of the technology has been impacted by the current cyber landscape, and the pandemic, as some of the market drivers we’ve seen around the adoption of cloud services, and the increased use of unmanaged devices by remote workers, are driving a great need for accurate cyber asset discovery and management.”

The company, which currently has 20 employees, says it plans to use the newly raised funds to double its headcount by the end of the year, as well as increase its go-to-market capability in the U.S. and the U.K. to grow its customer base and revenue.

“In terms of technology development, this investment allows us to continue to add development and product management talent to the team to build on our cyber asset management platform,” Ayers said. 

“The beauty of our approach is that it allows us to easily add more applications and use cases on top of our core asset visibility and management model. We will continue to add more connectors to support customer use cases and will be bringing a comprehensive controls package to market later in 2021, as well as a community edition in 2022.”

#api, #cloud-services, #computer-security, #computing, #cryptography, #cybercrime, #cyberwarfare, #data-security, #energy-impact-partners, #funding, #glasswing-ventures, #ibm, #information-technology, #malware, #massachusetts, #partner, #raptor, #resilient-systems, #security, #shawn-cherian, #symantec, #technology-development, #teneleven-ventures, #thales, #united-kingdom, #united-states, #vormetric

Moving fast and breaking things cost us our privacy and security

Over the years, I’ve had a front-row seat to the future of technology.

In my role at Y Combinator as director of admissions, I saw hundreds of startup pitches. Many shared a particular attribute: They followed the path of quickly growing users and monetizing the data extracted from the user.

As time went on, I began to see the full picture of what our technologies were creating: A “Minority Report” world where our every move is tracked and monetized. Some companies, like Facebook, lived by the mantra “move fast, break things.” Not only did they break things, they failed us by propagating disinformation and propaganda that, ultimately, cost some people their lives.

And that happened because of a growth-at-all-costs mindset. Some of the biggest consumer-facing Silicon Valley companies in the 21st century flourished by using data to sell ads with little or no consideration for user privacy or security. We have some of the brightest minds in technology; if we really wanted to, we could change things so that, at the very least, people wouldn’t have to worry about privacy and the security of their information.

We could move toward a model where people have more control over their own data and where Silicon Valley explores innovations in privacy and data security. While there are multiple long-term approaches and potential new business models to explore, there are ways to approach a privacy-first mindset in the near term. Here are a couple of ways to start moving toward a future in which people can have control over their data.

Workplace applications should lead the charge in enabling more secure identity technologies

We need to approach technology by consciously designing a future where technology works for humans, businesses and society in a secure and ethical way.

Approaching technological growth without understanding or considering the consequences has eroded trust in Silicon Valley. We must do better — and we can start in the workplace by better protecting personal data through self-sovereign identity, an approach that gives people control and ownership over their digital identity.

Using the workplace as a starting point for better privacy and security of people’s digital identities makes sense because many technologies that have been widely adopted — think personal computers, the internet, mobile phones and email — started out in the workplace before they became household technologies, thereby inheriting the foundational principles. With a return to office life on the horizon, there’s no better time than now to reexamine how we might adopt new practices in our workplaces.


So how would employers do this? For starters, they can use the return to office as an impetus for contactless access and digital IDs, which protect against physical and digital data breaches, the latter of which are becoming more common.

Employees could enter offices through their digital IDs, or tokenized IDs, which are stored securely on their phones. They will no longer need to use plastic cards with their personal information and photo imprinted on them, which are easy to fake or duplicate, improving security for both the employer and employee.

Contactless access isn’t a big leap nowadays, either. The pandemic primed us for digital identification — because the use of contactless payment accelerated due to COVID, the change to contactless ID will be seamless for many.

Invest in critical privacy-centric infrastructure

Tokenized identification puts the power in the user’s hands. This is crucial not just for workplace access and identity, but for a host of other, even more important reasons. Tokenized digital IDs are encrypted and can only be used once, making it nearly impossible for anyone to view the data included in the digital ID should the system be breached. It’s like Signal, but for your digital IDs.
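The “used once” property of such tokens maps onto a familiar pattern: a token signed over a random nonce that the verifier remembers and refuses to accept twice. A stdlib-only sketch, with a symmetric HMAC standing in for the asymmetric signatures and expiry a real system would use:

```python
import hmac
import hashlib
import secrets

SERVER_KEY = b"door-controller-key"  # held by the access system, not the phone
seen_nonces = set()

def issue_badge(employee_id: str) -> str:
    """Mint a single-use badge token bound to a fresh random nonce."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SERVER_KEY, f"{employee_id}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{employee_id}:{nonce}:{sig}"

def admit(token: str) -> bool:
    """Verify the signature, then burn the nonce so a replayed copy fails."""
    employee_id, nonce, sig = token.split(":")
    expected = hmac.new(SERVER_KEY, f"{employee_id}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True

badge = issue_badge("emp-1042")
print(admit(badge), admit(badge))  # -> True False
```

Unlike a printed plastic card, a duplicated or intercepted token is worthless after its first use, which is the property that makes a breach of the door logs far less damaging.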

As even more sophisticated technologies roll out, more personal data will be produced (and that means more data is vulnerable). It’s not just our driver’s licenses, credit cards or Social Security numbers we must worry about. Our biometrics and personal health-related data, like our medical records, are increasingly online and accessed for verification purposes. Encrypted digital IDs are incredibly important because of the prevalence of hacking and identity theft. Without tokenized digital IDs, we are all vulnerable.

We saw what happened with the Colonial Pipeline ransomware attack recently. It crippled a large portion of the U.S. pipeline system for weeks, showing that critical parts of our infrastructure are extremely vulnerable to breaches.

Ultimately, we need to think about making technology that serves humanity, not vice versa. We also need to ask ourselves if the technology we create is beneficial not just to the user, but to society in general. One way to build technology that better serves humanity is to ensure that it protects users and their values. Self-sovereign identity will be key in our future as other technologies arise. Among other things, we will see our digital wallets house far more than just credit cards, making the need for secure digital IDs more critical. Most importantly, people and companies just need control over their own data, period.

Given the broader general awareness of privacy and security in recent years, employers must take the threat of personal-data vulnerability seriously and lead the way in self-sovereign identity. Through the initial step of contactless access and digital IDs in the workplace, we can begin to inch closer toward a more secure future, at least in terms of our own data and identity.

#column, #computer-security, #data-protection, #data-security, #digital-identity, #opinion, #privacy, #security, #tc

Opaque raises $9.5M seed to secure sensitive data in the cloud

Opaque, a new startup born out of Berkeley’s RISELabs, announced a $9.5 million seed round today to build a solution for accessing and working with sensitive data in the cloud in a secure way, even with multiple organizations involved. Intel Capital led today’s investment with participation by Race Capital, The House Fund and FactoryHQ.

The company helps customers work with secure data in the cloud while making sure the data they are working on is not being exposed to cloud providers, other research participants or anyone else, says company president Raluca Ada Popa.

“What we do is we use this very exciting hardware mechanism called Enclave, which [operates] deep down in the processor — it’s a physical black box — and only gets decrypted there. […] So even if somebody has administrative privileges in the cloud, they can only see encrypted data,” she explained.

Company co-founder Ion Stoica, who was a co-founder at Databricks, says the startup’s solution helps resolve two conflicting trends. On one hand, businesses increasingly want to make use of data, but at the same time are seeing a growing trend toward privacy. Opaque is designed to resolve this by giving customers access to their data in a safe and fully encrypted way.

The company describes the solution as “a novel combination of two key technologies layered on top of state-of-the-art cloud security—secure hardware enclaves and cryptographic fortification.” This enables customers to work with data — for example to build machine learning models — without exposing the data to others, yet while generating meaningful results.
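Enclave hardware can’t be reproduced in a few lines, but the trust boundary Opaque describes, where the cloud operator only ever handles ciphertext and plaintext exists solely inside the enclave, can be sketched with a toy cipher (the SHA-256 XOR keystream below is illustration only, not production cryptography):

```python
import hashlib
from itertools import count

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only,
    # NOT a substitute for a vetted cipher.
    out = b""
    for ctr in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

ENCLAVE_KEY = b"provisioned-into-the-enclave"  # never visible to the cloud admin

# The cloud stores, and can only ever see, ciphertext blobs.
cloud_storage = [xor_cipher(ENCLAVE_KEY, str(x).encode()) for x in (70, 82, 75)]

def enclave_average(blobs):
    """Stand-in for computation inside a hardware enclave:
    plaintext exists only within this trust boundary."""
    values = [int(xor_cipher(ENCLAVE_KEY, b)) for b in blobs]
    return sum(values) / len(values)

print(enclave_average(cloud_storage))
```

This is the shape of the hospital and bank examples that follow: each party contributes ciphertext, only an aggregate leaves the trusted boundary, and an administrator inspecting `cloud_storage` learns nothing about individual records.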

Popa says this could be helpful for hospitals working together on cancer research, who want to find better treatment options without exposing a given hospital’s patient data to other hospitals, or banks looking for money laundering without exposing customer data to other banks, as a couple of examples.

Investors were likely attracted to the pedigree of Popa, a computer security and applied cryptography professor at UC Berkeley, and Stoica, who is also a Berkeley professor and co-founded Databricks. Both helped found RISELabs at Berkeley, where they developed the solution and spun it out as a company.

Mark Rostick, vice president and senior managing director at lead investor Intel Capital says his firm has been working with the founders since the startup’s earliest days, recognizing the potential of this solution to help companies find complex solutions even when there are multiple organizations involved sharing sensitive data.

“Enterprises struggle to find value in data across silos due to confidentiality and other concerns. Confidential computing unlocks the full potential of data by allowing organizations to extract insights from sensitive data while also seamlessly moving data to the cloud without compromising security or privacy,” Rostick said in a statement.

He added, “Opaque bridges the gap between data security and cloud scale and economics, thus enabling inter-organizational and intra-organizational collaboration.”

#cloud, #cloverly, #data, #data-security, #encryption, #enterprise, #funding, #machine-learning, #recent-funding, #security, #startups, #tc

Gettr, the latest pro-Trump social network, is already a mess

Well, that was fast. Just days after a Twitter clone from former Trump spokesperson Jason Miller launched, the new social network is already beset by problems.

For one, hackers quickly leveraged Gettr’s API to scrape the email addresses of more than 85,000 of its users. Usernames, names and birthdays were also part of the scraped data set, which was surfaced by Alon Gal, co-founder of cybersecurity firm Hudson Rock.

“When threat actors are able to extract sensitive information due to neglectful API implementations, the consequence is equivalent to a data breach and should be handled accordingly by the firm [and] examined by regulators,” Gal told TechCrunch.

Last week, TechCrunch’s own Zack Whittaker predicted that Gettr would soon see its data scraped through its API.

The scraped data is just one of Gettr’s headaches. The app actually went live in the App Store and Google Play last month but left beta on July 4 following a launch post in Politico. While the app is meant to appeal to the famously anti-China Trump sphere, Gettr apparently received early funding from Chinese billionaire Guo Wengui, an ally of former Trump advisor Steve Bannon. Earlier this year, The Washington Post reported that Guo is at the center of a massive online disinformation network that spreads anti-vaccine claims and QAnon conspiracies.

On July 2, the app’s team apologized for signup delays, citing a spike in downloads, but a bit of launch downtime is probably the least of its problems. Over the weekend, a number of official Gettr accounts, including those of Marjorie Taylor Greene, Steve Bannon and Miller himself, were compromised, raising more questions about the app’s shoddy security practices.

That incident aside, fake accounts overwhelm any attempt to find verified users on Gettr. That goes for the app’s own recommendations too: a fake brand account for Steam was among the app’s own recommendations during TechCrunch’s testing.

Another red flag: The app’s design is conspicuously identical to Twitter and appears to have used the company’s API to copy some users’ follower counts and profiles. Gettr encourages new users to use their Twitter handle in the sign up process, saying that it will allow tweets to be copied over in some cases (we signed up, but this didn’t work for us). TechCrunch reached out to Twitter about Gettr’s striking similarities and the use of its API but the company declined to comment.

On mobile, Gettr is basically an exact clone of Twitter — albeit one that’s very rough around the edges. Some of Gettr’s copy is stilted and strange, including the boast that it’s a “non-bias” social network that “tried the best to provide best software quality to the users, allow anyone to express their opinion freely.”

The company is positioning itself as an alternative for anyone who believes that mainstream social networks are hostile to far right ideas. Gettr’s website beckons new users with familiar Trumpian messaging: “Don’t be Cancelled. Flex Your 1st Amendment. Celebrate Freedom.”

“Hydroxychloroquine works!” Miller shared (Gettr’d?) over the weekend, quoting the former president. “And nobody is going to take down this post or suspend this account! #GETTR.” So far on Gettr, content moderation is either lax or nonexistent. But as we’ve seen with Parler and other havens for sometimes violent conspiracies, that approach can only last so long.

In spite of being widely associated with Trump through Miller and former Trump campaign staffer Tim Murtaugh, the former president doesn’t yet have a presence on the app. Some figures from Trump’s orbit have established profiles on Gettr, including Steve Bannon (84.7K followers) and Mike Pompeo (1.3M followers), but a search for Trump only brings up unofficial accounts. Bloomberg reported that Trump has no plans to join the app. (Given Gettr’s preponderance of Sonic the Hedgehog porn, we can’t exactly blame him.)

The online pro-Trump ecosystem remains scattered in mid-2021. With Trump banned and the roiling conspiracy network around QAnon no longer welcome on Facebook and Twitter, Gettr positioned itself as a refuge for mainstream social media’s many outcasts. But given Gettr’s mounting early woes, the sketchy Twitter clone’s moment in the sun might already be coming to an end.

#api, #app-store, #computer-security, #data-security, #donald-trump, #google, #mike-pompeo, #security, #social, #social-network, #tc

Kill the standard privacy notice

Privacy is a word on everyone’s mind nowadays — even Big Tech is getting in on it. Most recently, Apple joined the user privacy movement with its App Tracking Transparency feature, a cornerstone of the iOS 14.5 software update. Earlier this year, Tim Cook even mentioned privacy in the same breath as the climate crisis and labeled it one of the top issues of the 21st century.

Apple’s solution is a strong move in the right direction and sends a powerful message, but is it enough? Ostensibly, it relies on users to get informed about how apps track them and, if they wish to, regulate or turn off the tracking. In the words of Soviet satirists Ilf and Petrov, “The cause of helping the drowning is in the drowning’s own hands.” It’s a system that, historically speaking, has not produced great results.

Today’s online consumer is drowning indeed — in the deluge of privacy policies, cookie pop-ups, and various web and app tracking permissions. New regulations just pile more privacy disclosures on, and businesses are mostly happy to oblige. They pass the information burden to the end user, whose only rational move is to accept blindly because reading through the heaps of information does not make sense rationally, economically or subjectively. To save that overburdened consumer, we have only one option: We have to kill the standard privacy notice.

A notice that goes unnoticed

Studies show that online consumers often struggle with standard-form notices. A majority of online users expect that if a company has published a document with the title “privacy notice” or “privacy policy” on its website, then it will not collect, analyze or share their personal information with third parties. At the same time, a similar majority of consumers have serious concerns about being tracked and targeted for intrusive advertising.

Online businesses and major platforms gear their privacy notices and other relevant data disclosures toward obtaining consent, not toward educating and explaining.

It’s a privacy double whammy. To get on the platform, users have to accept the privacy notice. By accepting it, they allow tracking and intrusive ads. If they actually read the privacy notice before accepting, that costs them valuable time and can be challenging and frustrating. If Facebook’s privacy policy is as hard to comprehend as German philosopher Immanuel Kant’s “Critique of Pure Reason,” we have a problem. In the end, the option to decline is merely a formality; not accepting the privacy policy means not getting access to the platform.

So, what use is the privacy notice in its current form? For companies, on the one hand, it legitimizes their data-processing practices. It’s usually a document created by lawyers, for lawyers, without a second thought for the interests of real users. Safe in the knowledge that nobody reads such disclosures, some businesses not only deliberately fail to make the text understandable, they pack it with all kinds of silly or refreshingly honest content.

One company even claimed its users’ immortal souls and their right to eternal life. For consumers, on the other hand, the obligatory checkmark next to the privacy notice can be a nuisance — or it can lull them into a false sense of data security.

On the unlikely occasion that a privacy notice is so blatantly disagreeable that it pushes users away from one platform and toward an alternative, this is often not a real solution, either. Monetizing data has become the dominant business model online, and personal data ultimately flows toward the same Big Tech giants. Even if you’re not directly on their platforms, many of the platforms you are on work with Big Tech through plugins, buttons, cookies and the like. Resistance seems futile.

A regulatory framework from another time

If companies are deliberately producing opaque privacy notices that nobody reads, maybe lawmakers and regulators could intervene and help improve users’ data privacy? Historically, this has not been the case. In pre-digital times, lawmakers were responsible for a multitude of pre-contractual disclosure mandates that resulted in the heaps of paperwork that accompany leasing an apartment, buying a car, opening a bank account or taking out a mortgage.

When it comes to the digital realm, legislation has been reactive, not proactive, and it lags behind technological development considerably. It took the EU about two decades of Google and one decade of Facebook to come up with the General Data Protection Regulation, a comprehensive piece of legislation that still does not rein in rampant data collection practices. This is just a symptom of a larger problem: Today’s politicians and legislators do not understand the internet. How do you regulate something if you don’t know how it works?

Many lawmakers on both sides of the Atlantic often do not understand how tech companies operate and how they make their money with user data — or pretend not to understand for various reasons. Instead of tackling the issue themselves, legislators ask companies to inform the users directly, in whatever “clear and comprehensible” language they see fit. It’s part laissez-faire, part “I don’t care.”

Thanks to this attitude, we are fighting 21st-century challenges — such as online data privacy, profiling and digital identity theft — with the legal logic of Ancient Rome: consent. Not to knock Roman law, but Marcus Aurelius never had to read the iTunes Privacy Policy in full.

Online businesses and major platforms, therefore, gear their privacy notices and other relevant data disclosures toward obtaining consent, not toward educating and explaining. It keeps the data flowing and it makes for great PR when the opportunity for a token privacy gesture appears. Still, a growing number of users are waking up to the setup. It is time for a change.

A call to companies to do the right thing

We have seen that it’s difficult for users to understand all the “legalese,” and they have nowhere to go even if they did. We have also noted lawmakers’ inadequate knowledge and motivation to regulate tech properly. It is up to digital businesses themselves to act, now that growing numbers of online users are stating their discontent and frustration. If data privacy is one of our time’s greatest challenges, it requires concerted action. Just like countries around the world pledged to lower their carbon emissions, enterprises must also band together and commit to protecting their users’ privacy.

So, here’s a plea to tech companies large and small: Kill your standard privacy notices! Don’t write texts that almost no user understands to protect yourselves against potential legal claims so that you can continue collecting private user data. Instead, use privacy notices that are addressed to your users and that everybody can understand.

And don’t stop there — don’t only talk the talk but walk the walk: Develop products that do not rely on the collection and processing of personal data. Return to the internet’s open-source, protocol roots, and deliver value to your community, not to Big Tech and their advertisers. It is possible, it is profitable and it is rewarding.

#apple, #column, #data-protection, #data-security, #digital-rights, #european-union, #facebook, #general-data-protection-regulation, #google, #human-rights, #opinion, #privacy, #privacy-policy, #tc, #terms-of-service

German government bodies urged to remove their Facebook Pages before next year

Germany’s federal information commissioner has run out of patience with Facebook.

Last month, Ulrich Kelber wrote to government agencies “strongly recommend[ing]” they close down their official Facebook Pages because of ongoing data protection compliance problems and the tech giant’s failure to fix the issue.

In the letter, Kelber warns the government bodies that he intends to start taking enforcement action from January 2022 — essentially giving them a deadline of next year to pull their pages from Facebook.

So expect not to see official Facebook Pages of German government bodies in the coming months.

While Kelber’s own agency, the BfDi, does not appear to have a Facebook Page (although Facebook’s algorithms appear to generate this artificial stub if you try searching for one) plenty of other German federal bodies do — such as the Ministry of Health, whose public page has more than 760,000 followers.

The only alternative to such pages vanishing from Facebook’s platform by Christmas — or else being ordered to be taken down early next year by Kelber — seems to be for the tech giant to make more substantial changes to how its platform operates than it has offered so far, allowing the Pages to be run in Germany in a way that complies with EU law.

However Facebook has a long history of ignoring privacy expectations and data protection laws.

It has also, very recently, shown itself more than willing to reduce the quality of information available to users if doing so furthers its business interests (such as to lobby against a media code law, as users in Australia can attest).

So it looks rather more likely that German government agencies will be the ones having to quietly bow out of the platform soon…

Kelber says he’s avoided taking action over the ministries’ Facebook Pages until now on account of the public bodies arguing that their Facebook Pages are an important way for them to reach citizens.

However his letter points out that government bodies must be “role models” in matters of legal compliance — and therefore have “a particular duty” to comply with data protection law. (The EDPS is taking a similar tack by reviewing EU institutions’ use of US cloud services giants.)

Per his assessment, an “addendum” provided by Facebook in 2019 does not rectify the compliance problem and he concludes that Facebook has made no changes to its data processing operations to enable Page operators to comply with requirements set out in the EU’s General Data Protection Regulation.

A ruling by Europe’s top court, back in June 2018, is especially relevant here — as it held that the administrator of a fan page on Facebook is jointly responsible with Facebook for the processing of the data of visitors to the page.

That means that the operators of such pages also face data protection compliance obligations, and cannot simply assume that Facebook’s T&Cs provide them with legal cover for the data processing the tech giant undertakes.

The problem, in a nutshell, is that Facebook does not provide Page operators with enough information or assurances about how it processes users’ data — meaning they’re unable to comply with GDPR principles of accountability and transparency because, for example, they’re unable to adequately inform followers of their Facebook Page what is being done with their data.

There is also no way for Facebook Page operators to switch off (or otherwise block) Facebook’s wider processing of their Page followers’ data — even if they don’t make use of any of the analytics features Facebook provides to Page operators.

The processing still happens.

This is because Facebook operates a take-it-or-leave it ‘data maximizing’ model — to feed its ad-targeting engines.

But it’s an approach that could backfire if it ends up permanently reducing the quality of the information available on its network because of a mass migration of key services off its platform — such as if, for example, every government agency in the EU deleted its Facebook Page.

A related blog post on the BfDi’s website also holds out the hope that “data protection-compliant social networks” might develop in the Facebook compliance vacuum.

Certainly there could be a competitive opportunity for alternative platforms that seek to sell services based on respecting users’ rights.

The German Federal Ministry of Health’s verified Facebook Page (Screengrab: TechCrunch/Natasha Lomas)

Discussing the BfDi’s intervention, Luca Tosoni, a research fellow at the University of Oslo’s Norwegian Research Center for Computers and Law, told TechCrunch: “This development is strictly connected to recent CJEU case law on joint controllership. In particular, it takes into account the Wirtschaftsakademie ruling, which found that the administrator of a Facebook page should be considered a joint controller with Facebook in respect of processing the personal data of the visitors of the page.

“This does not mean that the page administrator and Facebook share equal responsibility for all stages of the data processing activities linked to the use of the Facebook page. However, they must have an agreement in place with a clear allocation of roles and responsibilities. According to the German Federal Commissioner for Data Protection and Freedom of Information, Facebook’s current data protection ‘Addendum’ would not seem to be sufficient to meet the latter requirement.”

“It is worth noting that, in its Fashion ID ruling, the CJEU has taken the view that the GDPR’s obligations for joint controllers are commensurate with those data processing stages in which they actually exercise control,” Tosoni added. “This means that the data protection obligations of a Facebook page administrator would normally tend to be quite limited.”

Warnings for other social media services

This particular compliance issue affects Facebook in Germany — and potentially any other EU market. But other social media services may face similar problems too.

For example, Kelber’s letter flags an ongoing audit of Instagram, TikTok and Clubhouse — warning of “deficits” in the level of data protection they offer too.

He goes on to recommend that agencies avoid using the three apps on business devices.  

In an earlier, 2019 assessment of government bodies’ use of social media services, the BfDi suggested usage of Twitter could — by contrast — be compliant with data protection rules. At least if privacy settings were fully enabled and analytics disabled, for example.

At the time the BfDi also warned that Facebook-owned Instagram faced similar compliance problems to Facebook, being subject to the same “abusive” approach to consent he said was taken by the whole group.

Reached for comment on Kelber’s latest recommendations to government agencies, Facebook did not engage with our specific questions — sending us this generic statement instead:

“At the end of 2019, we updated the Page Insights addendum and clarified the responsibilities of Facebook and Page administrators, for which we took questions regarding transparency of data processing into account. It is important to us that also federal agencies can use Facebook Pages to communicate with people on our platform in a privacy-compliant manner.”

An additional complication for Facebook has arisen in the wake of the legal uncertainty following last summer’s Schrems II ruling by the CJEU.

Europe’s top court invalidated the EU-US Privacy Shield arrangement, which had allowed companies to self-certify an adequate level of data protection, removing the easiest route for transferring EU users’ personal data over to the US. And while the court did not outlaw international transfers of EU users’ personal data altogether, it made it clear that data protection agencies must intervene and suspend data flows if they suspect information is being moved to a place, and in such a way, that it’s put at risk.

Following Schrems II, transfers to the US are clearly problematic where the data is being processed by a US company that’s subject to FISA 702, as is the case with Facebook.

Indeed, Facebook’s EU-to-US data transfers were the original target of the complaint in the Schrems II case, brought by the eponymous Max Schrems. And a decision — due in the coming months — remains pending on whether the tech giant’s lead EU data supervisor will follow through on last year’s preliminary order that it suspend its EU data flows.

Even ahead of that long-anticipated reckoning in Ireland, other EU DPAs are now stepping in to take action — and Kelber’s letter references the Schrems II ruling as another issue of concern.

Tosoni agrees that GDPR enforcement is finally stepping up a gear. But he also suggested that compliance with the Schrems II ruling comes with plenty of nuance, given that each data flow must be assessed on a case by case basis — with a range of supplementary measures that controllers may be able to apply.

“This development also shows that European data protection authorities are getting serious about enforcing the GDPR data transfer requirements as interpreted by the CJEU in Schrems II, as the German Federal Commissioner for Data Protection and Freedom of Information flagged this as another pain point,” he said.

“However, the German Federal Commissioner sent out his letter on the use of Facebook pages a few days before the EDPB adopted the final version of its recommendations on supplementary measures for international data transfers following the CJEU Schrems II ruling. Therefore, it remains to be seen how German data protection authorities will take these new recommendations into account in the context of their future assessment of the GDPR compliance of the use of Facebook pages by German public authorities.

“Such recommendations do not establish a blanket ban on data transfers to the US but impose the adoption of stringent safeguards, which will need to be followed to keep on transferring the data of German visitors of Facebook pages to the US.”

Another recent judgment by the CJEU reaffirmed that EU data protection agencies can, in certain circumstances, take action when they are not the lead data supervisor for a specific company under the GDPR’s one-stop-shop mechanism — expanding the possibility for litigation by watchdogs in Member States if a local agency believes there’s an urgent need to act.

Although, in the case of the German government bodies’ use of Facebook Pages, the earlier CJEU finding on joint controllership means the BfDi already has clear jurisdiction to target these agencies’ Facebook Pages itself.


#advertising-tech, #australia, #cjeu, #data-processing, #data-protection, #data-security, #digital-rights, #eu-us-privacy-shield, #europe, #european-union, #facebook, #facebook-pages, #general-data-protection-regulation, #germany, #instagram, #ireland, #law, #max-schrems, #policy, #privacy, #twitter, #united-states