Global air transport data giant SITA has confirmed a data breach involving passenger data.
The company said in a brief statement on Thursday that it had been the “victim of a cyberattack,” and that certain passenger data stored on its U.S. servers had been breached. The cyberattack was confirmed on February 24, after which the company contacted affected airlines.
SITA is one of the largest aviation IT companies in the world, said to be serving around 90% of the world’s airlines, which rely on the company’s passenger service system Horizon to manage reservations, ticketing, and aircraft departures.
But it remains unclear exactly what data was accessed or stolen.
When reached, SITA spokesperson Edna Ayme-Yahil declined to say what specific data had been taken, citing an ongoing investigation. The company said that the incident “affects various airlines around the world, not just in the United States.”
SITA confirmed it had notified several airlines — Malaysia Airlines, Finnair, Singapore Airlines, and South Korea's Jeju Air — which have already made statements about the breach, but declined to name other affected airlines.
In an email to affected customers seen by TechCrunch, Singapore Airlines said it was not a customer of SITA’s Horizon passenger service system but that about half a million frequent flyer members had their membership number and tier status compromised. The airline said that the transfer of this kind of data is “necessary to enable verification of the membership tier status, and to accord to member airlines’ customers the relevant benefits while traveling.”
The airline said passenger itineraries, reservations, ticketing, and passport data were not affected.
SITA is one of a handful of companies in the aviation market providing passenger ticketing and reservation systems to airlines, alongside Sabre and Amadeus.
Sabre reported a major data breach in mid-2017 affecting its hotel reservation system, after hackers scraped over a million customer credit cards. The U.S.-based company agreed in December to a $2.4 million settlement and to make changes to its cybersecurity policies following the breach.
In 2019, a security researcher found a vulnerability in Amadeus’ passenger booking system, used by Air France, British Airways, and Qantas among others, which made it easy to alter or access traveler records.
A security flaw in a website run by the government of West Bengal in India exposed the lab results of at least hundreds of thousands of residents, though likely millions, who took a COVID-19 test.
The website is part of the West Bengal government’s mass coronavirus testing program. Once a COVID-19 test result is ready, the government sends a text message to the patient with a link to its website containing their test results.
But security researcher Sourajeet Majumder found that the link containing the patient’s unique test identification number was scrambled with base64 encoding, which can be easily converted using online tools. Because the identification numbers were incrementally sequenced, the website bug meant that anyone could change that number in their browser’s address bar and view other patients’ test results.
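The flaw is a textbook insecure-direct-object-reference bug: base64 is an encoding, not encryption, so a sequential identifier offers no protection once decoded. A minimal sketch of why this fails (using a made-up ID format, since the site's exact scheme wasn't published):

```python
import base64

def encode_id(test_id: int) -> str:
    # The site embedded the patient's report ID in the URL, base64-encoded.
    return base64.b64encode(str(test_id).encode()).decode()

def decode_id(token: str) -> int:
    # Any online base64 tool, or two lines of code, reverses the "scrambling".
    return int(base64.b64decode(token))

token = encode_id(700001)                    # hypothetical report ID from a patient's link
neighbor = encode_id(decode_id(token) + 1)   # trivially yields the next patient's token
```

Because the IDs were incrementally sequenced, pasting the re-encoded token back into the URL was all it took to reach another patient's results.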
The test results contain the patient’s name, sex, age, postal address, and whether the patient’s lab test came back positive, negative, or inconclusive for COVID-19.
Majumder told TechCrunch that he was concerned a malicious attacker could scrape the site and sell the data. “This is a privacy violation if somebody else gets access to my private information,” he said.
Majumder reported the vulnerability to India’s CERT, the country’s dedicated cybersecurity response unit, which acknowledged the issue in an email. He also contacted the West Bengal government’s website manager, who did not respond. TechCrunch independently confirmed the vulnerability and also reached out to the West Bengal government, which pulled the website offline, but did not return our requests for comment.
TechCrunch held our report until the vulnerability was fixed or no longer presented a risk. At the time of publication, the affected website remains offline.
It’s not known exactly how many COVID-19 lab results were exposed because of this security lapse, or if anyone other than Majumder discovered the vulnerability. At the time the website was pulled offline at the end of February, the state government had tested more than 8.5 million residents for COVID-19.
West Bengal is one of the most populated states of India, with about 90 million residents. Since the start of the pandemic, the state government has recorded more than 10,000 coronavirus deaths.
It’s the latest of several security incidents in the past few months to hit India and its response to the coronavirus pandemic.
Last May, India’s largest cell network Jio admitted a security lapse after a security researcher found a database containing the company’s coronavirus symptom checker, which Jio had launched months earlier.
In October, a security researcher found Dr Lal PathLabs left hundreds of spreadsheets containing millions of patient booking records — including for COVID-19 tests — on a public storage server that was not protected with a password, allowing anyone to access sensitive patient data.
Brave, the privacy-focused browser co-founded by ex-Mozilla CEO Brendan Eich, is getting ready to launch an own-brand search engine for desktop and mobile.
Today it’s announced the acquisition of an open source search engine developed by the team behind the (now defunct) Cliqz anti-tracking search-browser combo. The tech will underpin the forthcoming Brave Search engine — meaning it will soon be pitching its millions of users on an entirely ‘big tech’-free search and browsing experience.
“Under the hood, nearly all of today’s search engines are either built by, or rely on, results from Big Tech companies. In contrast, the Tailcat search engine is built on top of a completely independent index, capable of delivering the quality people expect but without compromising their privacy,” Brave writes in a press release announcing the acquisition.
“Tailcat does not collect IP addresses or use personally identifiable information to improve search results.”
Cliqz, which was a privacy-focused European fork of Mozilla’s Firefox browser, got shuttered last May after its majority investor, Hubert Burda Media, called time on the multi-year effort to build momentum for an alternative to Google — blaming tougher trading conditions during the pandemic for forcing it to pull the plug sooner than it would have liked.
The former Cliqz dev team, who had subsequently been working on Tailcat, are moving to Brave as part of the acquisition. The engineering team is led by Dr Josep M Pujol, who is quoted in Brave’s PR saying the team is “excited to be working on the only real private search/browser alternative to Big Tech”.
“Tailcat is a fully independent search engine with its own search index built from scratch,” Eich told TechCrunch. “Tailcat as Brave Search will offer the same privacy guarantees that Brave has in its browser.
“Brave will provide the first private browser+search alternative to the Big Tech platforms, and will make it seamless for users to browse and search with guaranteed privacy. Also, owing to its transparent nature, Brave Search will address algorithmic biases and prevent outright censorship.”
Brave getting into the search business is a reflection of its confidence that privacy is becoming mainstream, per Eich. He points to “unprecedented” growth in usage of its browser over the past year — up from 11M monthly active users to 26M+ — which he says has mirrored the surge in usage earlier this year seen by the (not-for-profit) e2e encrypted messaging app Signal (after Facebook-owned WhatsApp announced a change to its privacy policies to allow for increased data-sharing with Facebook through WhatsApp business accounts).
“We expect to see even greater demand for Brave in 2021 as more and more users demand real privacy solutions to escape Big Tech’s invasive practices,” he added in a statement. “Brave’s mission is to put the user first, and integrating privacy-preserving search into our platform is a necessary step to ensure that user privacy is not plundered to fuel the surveillance economy.”
Brave Search will be offered as a choice to users alongside a roster of more established third parties (Google, Bing, Qwant, Ecosia etc) which they can select as their browser default.
It will also potentially become the default (i.e. if users don’t pick their own) in future, per Eich.
“We will continue to support ‘open search’ with multiple alternative engines,” he confirmed. “User choice is a permanent principle at Brave. Brave will continue to offer multiple alternative choices for the user’s default search engine, and we think our users will seek unmatched privacy with Brave Search. When ready, we hope to make Brave Search the default engine in Brave.”
Asked how the quality of Tailcat-powered results compares with Google’s, Eich described it as “quite good”, adding that it “will only get better with adoption”.
“Google’s ‘long tail’ is hard for any engine to beat but we have a plan to compete on that front too, once integrated into the Brave browser,” he told us in an email interview, arguing that Google’s massive size does offer some competitive opportunities for a search rival. “There are aspects where Google is falling behind. It is difficult for them to innovate in search when that’s the main source of their revenue.
“They are risk-averse against experimenting with new techniques and transparency, while under pressure from shareholders to tie their own businesses into scarce search engine results page (SERP) area, and pressure from search engine optimization (SEO).”
“On questions such as censorship, community feedback, and algorithmic transparency, we think we can do better from the get-go. Unlike other search engines, we believe that the only way to make big improvements is to build afresh, with the know-how that comes from building,” he added. “The option of using Bing (as other search offerings do) instead of building the index exists but it will get you only as far as Bing in terms of quality (and as with such offerings, you’ll be wholly dependent on Bing).”
Brave is aiming for general availability of Brave Search by the summer — if not late spring, per Eich. Users interested in testing an early iteration can sign up for a waitlist here. (A test version is slated as coming in “the next few weeks”.)
The name Tailcat is unlikely to be widely familiar as it was an internal project that Cliqz had not implemented into its browser before it was shut down.
Eich says development had been continuing at Burda — “in order to develop a full-fledged search engine”. (When the holding company announced the shuttering of Cliqz, last April, it stated that Cliqz’s browser and search technologies would be shut down but also said it would draw out a team of experts — to work on technical issues in areas like AI and search.)
“Cliqz offered the SERP-based search engine but had not implemented Tailcat in its browser yet,” said Eich. “After Cliqz shut down last April, a development team at Burda continued to work on the search technology under the new project name Tailcat in order to develop a full-fledged search engine. The team hoped to find a long-term home for their work to continue their mission, and are thrilled to be part of Brave.”
The financial terms of the acquisition are not being disclosed — but we’ve confirmed that Burda is becoming a Brave shareholder as part of the deal.
“We are very happy that our technology is being used at Brave and that, as a result, a genuine, privacy-friendly alternative to Google is being created in the core web functions of browsing and searching,” said Paul-Bernhard Kallen, CEO of Hubert Burda Media, in a supporting statement. “As a Brave stakeholder we will continue to be involved in this exciting project.”
While Brave started out focused on building an alternative browser — with the idea of rethinking the predominant ad-funded Internet business model by baking in a cryptocurrency rewards system to generate payments for content creators (and pay users for their attention) — it now talks about itself as a pro-privacy “super app”.
Currently, the Brave browser bundles a privacy-preserving ad platform (Brave Ads), a news reader (Brave Today), and a Firewall+VPN service — which it will further add to with the forthcoming search engine (Brave Search) and a privacy-preserving video-conferencing service (Brave Together) that’s also in the pipeline.
The unifying brand proposition for its ‘super app’ is a pledge to provide users with genuine control over their online experience — in contrast to mainstream alternatives.
Israeli fraud prevention startup Identiq has raised $47 million at Series A as the company eyes international growth, driven in large part by the spike in online spending during the pandemic.
The round was led by Insight Partners and Entrée Capital, with participation from Amdocs, Sony Innovation Fund by IGV, as well as existing investors Vertex Ventures Israel, Oryzn Capital, and Slow Ventures.
Fraud prevention is big business, slated to be worth $145 billion by 2026, ballooning roughly eightfold from its 2018 size. But it is a data-hungry industry, fraught with security and privacy risks, because companies have had to share enormous sets of consumer data to learn who their legitimate customers are and weed out the fraudsters.
Identiq takes a different, more privacy-friendly approach to fraud prevention, without having to share a customer’s data with a third-party.
“Before now, the only way companies could solve this problem was by exposing the data they were given by the user to a third party data provider for validation, creating huge privacy problems,” Identiq’s chief executive Itay Levy told TechCrunch. “We solved this by allowing these companies to validate that the data they’ve been given matches the data of other companies that already know and trust the user, without sharing any sensitive information at all.”
When an Identiq customer — such as an online store — sees a new customer for the first time, the store can ask other stores in Identiq’s network if they know or trust that new customer. This peer-to-peer network uses cryptography to help online stores anonymously vet new customers to help weed out bad actors, like fraudsters and scammers, without needing to collect private user data.
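Identiq has not published its protocol, and real peer-to-peer identity validation typically relies on heavier cryptography such as private set intersection. Purely as a toy illustration of the general idea — parties comparing keyed digests of an attribute rather than exchanging the raw value — a sketch might look like this (the shared network key and normalization scheme are assumptions for the example):

```python
import hashlib
import hmac

NETWORK_KEY = b"hypothetical-shared-network-key"

def blind(value: str) -> str:
    # Normalize, then compute a keyed digest; the raw value is never transmitted.
    normalized = value.strip().lower().encode()
    return hmac.new(NETWORK_KEY, normalized, hashlib.sha256).hexdigest()

# Store A asks the network about a new customer's email without revealing it.
query = blind("jane@example.com")
# Store B, which already knows and trusts this customer, compares digests only.
known = blind("Jane@Example.com")
match = query == known  # the digests agree, so the attribute is corroborated
```

In practice a scheme this simple would still allow dictionary attacks by anyone holding the network key, which is why production designs use interactive protocols; the sketch only conveys the "validate without sharing" shape of the idea.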
So far, the company says it already counts Fortune 500 companies as customers.
Identiq said it plans to use the $47 million raise to hire and grow the company’s workforce, and aims to scale up its support for its international customers.
Many schools that use fever scanners and symptom checkers have not rigorously studied if the technology has slowed the spread of Covid-19 on campuses.
Facebook was ordered to pay $650 million Friday for running afoul of an Illinois law designed to protect the state’s residents from invasive privacy practices.
That law, the Biometric Information Privacy Act (BIPA), is a powerful state measure that’s tripped up tech companies in recent years. The suit against Facebook was first filed in 2015, alleging that Facebook’s practice of tagging people in photos using facial recognition without their consent violated state law.
Roughly 1.6 million Illinois residents will receive at least $345 under the final settlement ruling in California federal court. The final figure is $100 million higher than the $550 million Facebook proposed in 2020, which a judge deemed inadequate. Facebook disabled the automatic facial recognition tagging features in 2019, making them opt-in instead and addressing some of the privacy criticisms echoed by the Illinois class action suit.
A cluster of lawsuits accused Microsoft, Google and Amazon of breaking the same law last year after Illinois residents’ faces were used to train their facial recognition systems without explicit consent.
The Illinois privacy law has tangled up some of tech’s giants, but BIPA has even more potential to impact smaller companies with questionable privacy practices. The controversial facial recognition software company Clearview AI now faces its own BIPA-based class action lawsuit in the state after the company failed to dodge the suit by pushing it out of state courts.
A $650 million settlement would be enough to crush any normal company, though Facebook can brush it off much like it did with the FTC’s record-setting $5 billion penalty in 2019. But the Illinois law isn’t without teeth. For Clearview, it was enough to make the company pull out of business in the state altogether.
The law can’t punish a behemoth like Facebook in the same way, but it is one piece in a regulatory puzzle that poses an increasing threat to the way tech’s data brokers have done business for years. With regulators and lawmakers at the federal and state levels proposing aggressive measures to rein in tech, the landmark Illinois law provides a compelling framework that other states could copy and paste. And if big tech thinks navigating federal oversight will be a nightmare, a patchwork of aggressive state laws governing how tech companies do business on a state-by-state basis is an alternate regulatory future that could prove even less palatable.
The New York Police Department has been testing Digidog, which it says can be deployed in dangerous situations and keep officers safer, but some fear it could become an aggressive surveillance tool.
Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.
TikTok parent company ByteDance has agreed to a $92 million deal to settle class-action lawsuits alleging that the company illegally collected and used underage TikTok users’ personal data.
The proposed settlement (PDF) would require TikTok to pay out up to $92 million to members of the class and to change some of its data-collection processes and disclosures going forward.
The suit, which rolled up more than 20 related lawsuits, mostly filed on behalf of minors, alleged that TikTok violated both state and federal privacy laws, including the Computer Fraud and Abuse Act and the Video Privacy Protection Act, through its use of data.
Jamaica’s JamCOVID app and website were taken offline late on Thursday following a third security lapse, which exposed quarantine orders for more than half a million travelers to the island.
JamCOVID was set up last year to help the government process travelers arriving on the island. Quarantine orders are issued by the Jamaican Ministry of Health and instruct travelers to stay in their accommodation for two weeks to prevent the spread of COVID-19.
These orders contain the traveler’s name and the address of where they are ordered to stay.
But a security researcher told TechCrunch that the quarantine orders were publicly accessible from the JamCOVID website but were not protected with a password. Although the files were accessible from anyone’s web browser, the researcher asked not to be named for fear of legal repercussions from the Jamaican government.
More than 500,000 quarantine orders were exposed, some dating back to March 2020.
TechCrunch shared these details with the Jamaica Gleaner, which was first to report on the security lapse after the news outlet verified the data spillage with local cybersecurity experts.
Amber Group, which was contracted to build and maintain the JamCOVID coronavirus dashboard and immigration service, pulled the service offline a short time after TechCrunch and the Jamaica Gleaner contacted the company on Thursday evening. JamCOVID’s website was replaced with a holding page that said the site was “under maintenance.” At the time of publication, the site had returned.
Amber Group’s chief executive Dushyant Savadia did not return a request for comment.
Matthew Samuda, a minister in Jamaica’s Ministry of National Security, also did not respond to a request for comment or our questions — including if the Jamaican government plans to continue its contract or relationship with Amber Group.
This is the third security lapse involving JamCOVID in the past two weeks.
Last week, Amber Group secured an exposed cloud storage server hosted on Amazon Web Services that was left open and public, despite containing more than 70,000 negative COVID-19 lab results and over 425,000 immigration documents authorizing travel to the island. Savadia said in response that there were “no further vulnerabilities” with the app. Days later, the company fixed a second security lapse after leaving a file containing private keys and passwords for the service on the JamCOVID server.
The Jamaican government has repeatedly defended Amber Group, which says it provided the JamCOVID technology to the government “for free.” Amber Group’s Savadia has previously been quoted as saying that the company built the service in “three days.”
In a statement on Thursday, Jamaica’s prime minister Andrew Holness said JamCOVID “continues to be a critical element” of the country’s immigration process and that the government was “accelerating” efforts to migrate the JamCOVID database — though specifics were not given.
An earlier version of this report misspelled the Jamaican Gleaner newspaper. We regret the error.
The 11-month-old audio social network is compelling. It also has some very grown-up problems.
I installed Firefox 86 on my Ubuntu workstation using Snap to be certain I wouldn’t accidentally mess with my working system configuration. [credit: Jim Salter]
Mozilla released Firefox 86 yesterday, and the browser is now available for download and installation for all major operating systems, including Android. Along with the usual round of bug fixes and under-the-hood updates, the new build offers a couple of high-profile features—multiple Picture-in-Picture video-watching support, and (optional) stricter cookie separation, which Mozilla is branding Total Cookie Protection.
Taking Firefox 86 for a spin
Firefox 86 became the default download at mozilla.org on Tuesday—but as an Ubuntu 20.04 user, I didn’t want to leave the Canonical-managed repositories just to test the new version. This is one scenario in which snaps truly excel—providing you with a containerized version of an application, easily installed but guaranteed not to mess with your “real” operating system.
As it turns out, Firefox’s snap channel didn’t get the message about build 86 being the new default — the latest/default snap is still on build 85. In order to get the new version, I needed to run snap refresh firefox --channel=latest/candidate.
Mozilla has further beefed up anti-tracking measures in its Firefox browser. In a blog post yesterday it announced that Firefox 86 has an extra layer of anti-cookie tracking built into the enhanced tracking protection (ETP) strict mode — which it’s calling ‘Total Cookie Protection’.
This “major privacy advance”, as it bills it, prevents cross-site tracking by siloing third party cookies per website.
Mozilla likens this to having a separate cookie jar for each site — so, for example, Facebook cookies aren’t stored in the same tub as cookies for that sneaker website where you bought your latest kicks, and so on.
The new layer of privacy wrapping “provides comprehensive partitioning of cookies and other site data between websites in Firefox”, explains Mozilla.
Combined with another anti-tracking feature it announced last month — targeting so-called ‘supercookies’, sneaky trackers that store user IDs in “increasingly obscure” parts of the browser (like Flash storage, ETags, and HSTS flags), where it’s difficult for users to delete or block them — the new protections “prevent websites from being able to ‘tag’ your browser, thereby eliminating the most pervasive cross-site tracking technique”, per Mozilla.
There’s a “limited exception” for cross-site cookies when they are needed for non-tracking purposes — Mozilla gives the example of popular third-party login providers.
“Only when Total Cookie Protection detects that you intend to use a provider, will it give that provider permission to use a cross-site cookie specifically for the site you’re currently visiting. Such momentary exceptions allow for strong privacy protection without affecting your browsing experience,” it adds.
Tracker blocking has long been an arms race against the adtech industry’s determination to keep surveilling web users — and thumbing its nose at the notion of consent to spy on people’s online business — pouring resource into devising fiendish new techniques to try to keep watching what Internet users are doing. But this battle has stepped up in recent years as browser makers have been taking a tougher pro-privacy/anti-tracker stance.
Mozilla, for example, started making tracker blocking the default back in 2018 — going on to make ETP the default in Firefox in 2019, blocking cookies from companies identified as trackers by its partner, Disconnect.
Apple’s Safari browser, meanwhile, added an ‘Intelligent Tracking Prevention’ (ITP) feature in 2017 — applying machine learning to identify trackers and segregate cross-site tracking data to protect users’ browsing history from third-party eyes.
Google has also put the cat among the adtech pigeons by announcing a planned phasing out of support for third party cookies in Chrome — which it said would be coming within two years back in January 2020 — although it’s still working on this ‘privacy sandbox’ project, as it calls it (now under the watchful eye of UK antitrust regulators).
Google has been making privacy-strengthening noises since 2019, as the rest of the browser market has responded to concern about online privacy.
In April last year it rolled back a change that had made it harder for sites to access third-party cookies, citing concerns about sites being able to perform essential functions during the pandemic — though the rollout resumed in July. But it’s fair to say that the adtech giant remains the laggard when it comes to executing on its claimed plan to beef up privacy.
Given Chrome’s marketshare, that leaves most of the world’s web users exposed to more tracking than they otherwise would be if they used a different, more privacy-proactive browser.
And as Mozilla’s latest anti-cookie tracking feature shows the race to outwit adtech’s allergy to privacy (and consent) also isn’t the sort that has a finish line. So being slow to do privacy protection arguably isn’t very different to not offering much privacy protection at all.
To wit: One worrying development — on the non-cookie based tracking front — is detailed in this new paper by a group of privacy researchers who conducted an analysis of CNAME tracking (aka a DNS-based anti-tracking evasion technique) and found that use of the sneaky anti-tracking evasion method had grown by around a fifth in just under two years.
The technique has been raising mainstream concerns about ‘unblockable’ web tracking since around 2019 — when developers spotted the technique being used in the wild by a French newspaper website. Since then use has been rising, per the research.
In a nutshell the CNAME tracking technique cloaks the tracker by injecting it into the first-party context of the visited website — via the content being embedded through a subdomain of the site which is actually an alias for the tracker domain.
“This scheme works thanks to a DNS delegation. Most often it is a DNS CNAME record,” writes one of the paper authors, privacy and security researcher Lukasz Olejnik, in a blog post about the research. “The tracker technically is hosted in a subdomain of the visited website.
“Employment of such a scheme has certain consequences. It kind of fools the fundamental web security and privacy protections — to think that the user is wilfully browsing the tracker website. When a web browser sees such a scheme, some security and privacy protections are relaxed.”
Don’t be fooled by the use of the word ‘relaxed’ — as Olejnik goes on to emphasize that the CNAME tracking technique has “substantial implications for web security and privacy”. Such as browsers being tricked into treating a tracker as legitimate first-party content of the visited website (which, in turn, unlocks “many benefits”, such as access to first-party cookies — which can then be sent on to remote, third-party servers controlled by the trackers so the surveilling entity can have its wicked way with the personal data).
So the risk is that a chunk of the clever engineering work being done to protect privacy by blocking trackers can be sidelined by getting under the anti-trackers’ radar.
The researchers found one (infamous) tracker provider, Criteo, reverting its tracking scripts to the custom CNAME cloak scheme when it detected the Safari web browser in use — as, presumably, a way to circumvent Apple’s ITP.
There are further concerns over CNAME tracking too: The paper details how, as a consequence of current web architecture, the scheme “unlocks a way for broad cookie leaks”, as Olejnik puts it — explaining how the upshot of the technique being deployed can be “many unrelated, legitimate cookies” being sent to the tracker subdomain.
Olejnik documented this concern in a study back in 2014 — but he writes that the problem has now exploded: “As the tip of the iceberg, we found broad data leaks on 7,377 websites. Some data leaks happen on almost every website using the CNAME scheme (analytics cookies commonly leak). This suggests that this scheme is actively dangerous. It is harmful to web security and privacy.”
The researchers found cookies leaking on 95% of the studied websites.
They also report finding leaks of cookies set by other third-party scripts, suggesting leaked cookies would in those instances allow the CNAME tracker to track users across websites.
In some instances they found that leaked information contained private or sensitive information — such as a user’s full name, location, email address and (in an additional security concern) authentication cookie.
The paper goes on to raise a number of web security concerns, such as when CNAME trackers are served over HTTP not HTTPS, which they found happened often, and could facilitate man-in-the-middle attacks.
Defending against the CNAME cloaking scheme will require some major browsers to adopt new tricks, per the researchers — who note that while Firefox (global marketshare circa 4%) does offer a defence against the technique, Chrome does not.
Engineers on the WebKit engine that underpins Apple’s Safari browser have also been working on making enhancements to ITP aimed at counteracting CNAME tracking.
The Brave browser also announced changes last fall aimed at combating CNAME cloaking.
“In version 1.25.0, uBlock Origin gained the ability to detect and block CNAME-cloaked requests using Mozilla’s terrific browser.dns API. However, this solution only works in Firefox, as Chromium does not provide the browser.dns API. To some extent, these requests can be blocked using custom DNS servers. However, no browsers have shipped with CNAME-based adblocking protection capabilities available and on by default,” it wrote.
“In Brave 1.17, Brave Shields will now recursively check the canonical name records for any network request that isn’t otherwise blocked using an embedded DNS resolver. If the request has a CNAME record, and the same request under the canonical domain would be blocked, then the request is blocked. This solution is on by default, bringing enhanced privacy protections to millions of users.”
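Brave hasn’t published the resolver code behind Shields, but the recursive check it describes can be sketched as follows — using a hard-coded records table as a stand-in for a real embedded DNS resolver, and made-up domain names throughout:

```python
# Hypothetical CNAME records, standing in for live DNS resolution.
# "metrics.example-news.com" looks first-party but is an alias for a tracker.
CNAME_RECORDS = {
    "metrics.example-news.com": "example-news.tracker-cdn.net",
    "example-news.tracker-cdn.net": "edge.tracker-cdn.net",
}
BLOCKLIST = {"tracker-cdn.net"}

def canonical_chain(host: str, records: dict, max_depth: int = 10) -> list:
    # Follow CNAME aliases recursively, with a depth cap to avoid loops.
    chain = [host]
    while host in records and len(chain) <= max_depth:
        host = records[host]
        chain.append(host)
    return chain

def is_blocked(host: str) -> bool:
    # Block the request if any name in the canonical chain matches a
    # blocklisted domain or one of its subdomains.
    return any(
        name == blocked or name.endswith("." + blocked)
        for name in canonical_chain(host, CNAME_RECORDS)
        for blocked in BLOCKLIST
    )

verdict = is_blocked("metrics.example-news.com")  # the alias chain ends in a blocked domain
```

The key design point, as Brave's note explains, is that the decision is made on the canonical domain rather than the first-party-looking subdomain, which is exactly what defeats the cloak.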
But the browser with the largest marketshare, Chrome, has work to do, per the researchers, who write:
Because Chrome does not support a DNS resolution API for extensions, the [uBlock version 1.25 under Firefox] defense could not be applied to this browser. Consequently, we find that four of the CNAME-based trackers (Oracle Eloqua, Eulerian, Criteo, and Keyade) are blocked by uBlock Origin on Firefox but not on the Chrome version.
As live audio chat app Clubhouse ascends in popularity around the world, concerns about its data practices also grow.
The app is currently only available on iOS, so developers have raced to create Android, Windows and Mac versions of the service. While these endeavors may not be ill-intentioned, the fact that it takes programmers little effort to reverse engineer and fork Clubhouse — that is, to create new software based on its original code — is sounding an alarm about the app’s security.
The common goal of these unofficial apps, as of now, is to broadcast Clubhouse audio feeds in real-time to users who cannot access the app otherwise because they don’t have an iPhone. One such effort is called Open Clubhouse, which describes itself as a “third-party web application based on flask to play Clubhouse audio.” The developer confirmed to TechCrunch that Clubhouse blocked its service five days after its launch without providing an explanation.
“[Clubhouse] asks a lot of information from users, analyzes those data and even abuses them. Meanwhile, it restricts how people use the app and fails to give them the rights they deserve. To me, this constitutes monopoly or exploitation,” said Open Clubhouse’s developer nicknamed AiX.
Clubhouse could not be immediately reached for comment on this story.
AiX wrote the program “for fun” and wanted it to broaden Clubhouse’s access to more people. Another similar effort came from a developer named Zhuowei Zhang, who created Hipster House to let those without an invite browse rooms and users, and those with an invite to join rooms as a listener though they can’t speak — Clubhouse is invite-only at the moment. Zhang stopped developing the project, however, after noticing a better alternative.
These third-party services, despite their innocuous intentions, can be exploited for surveillance purposes, as Jane Manchun Wong, a researcher known for uncovering upcoming features in popular apps through reverse engineering, noted in a tweet.
“Even if the intent of that webpage is to bring Clubhouse to non-iOS users, without a safeguard, it could be abused,” said Wong, referring to a website rerouting audio data from Clubhouse’s public rooms.
Clubhouse lets people create public chat rooms, which are available to any user who joins before a room reaches its maximum capacity, and private rooms, which are only accessible to room hosts and users authorized by the hosts.
But not all users are aware of the open nature of Clubhouse’s public rooms. During its brief window of availability in China, the app was flooded with mainland Chinese users debating politically sensitive issues, from Taiwan to Xinjiang, that are heavily censored in Chinese cyberspace. Some vigilant users speculated about the possibility of being questioned by the police for delivering sensitive remarks. While no such event has been publicly reported, the Chinese authorities have banned the app since February 8.
Clubhouse’s design is inherently at odds with the sense of privacy it aims to foster. The app encourages people to use their real identity: registration requires a phone number and an existing user’s invite. Inside a room, everyone can see who else is there. This setup instills trust and comfort in users, who speak as if at a networking event.
But the third-party apps that are able to extract Clubhouse’s audio feeds show that the app isn’t even semi-public: It’s public.
More troublesome is that users can “ghost listen,” as developer Zerforschung found. That is, users can hear a room’s conversation without having their profile displayed to the room’s participants. Eavesdropping is made possible by communicating directly with Agora, a service provider employed by Clubhouse. As multiple security researchers found, Clubhouse relies on Agora’s real-time audio communication technology; sources have also confirmed the partnership to TechCrunch.
Some technical explanation is needed here. When a user joins a chatroom, the Clubhouse app makes a request to Agora’s infrastructure, as the Stanford Internet Observatory discovered. To make the request, the user’s phone contacts Clubhouse’s application programming interface (API), which then creates “tokens” (credentials that authorize an action) to establish a communication pathway for the app’s audio traffic.
Now, the problem is there can be a disconnect between Clubhouse and Agora, allowing the Clubhouse end, which manages user profiles, to be inactive while the Agora end, which transmits audio data, remains active, as technology analyst Daniel Sinclair noted. That’s why users can continue to eavesdrop on a room without having their profile displayed to the room’s participants.
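That split can be illustrated with a toy simulation. Everything below is hypothetical (these are not Clubhouse’s or Agora’s actual APIs); the point is only that a media token’s lifetime is independent of room membership.

```python
import time

# Toy model of the two-system split described above: the app backend tracks
# room membership while a separate media service only checks token validity.
# All classes and fields are illustrative, not Clubhouse's or Agora's APIs.

class AppBackend:
    """Manages the profiles shown in a room (the app end)."""
    def __init__(self):
        self.visible_members = set()

    def join_room(self, user):
        self.visible_members.add(user)
        # Mint a media token whose expiry is independent of room membership.
        return {"user": user, "expires_at": time.time() + 3600}

    def leave_room(self, user):
        self.visible_members.discard(user)

class MediaService:
    """Streams audio to anyone holding an unexpired token (the media end)."""
    def can_receive_audio(self, token):
        return time.time() < token["expires_at"]

backend = AppBackend()
media = MediaService()

token = backend.join_room("eavesdropper")
backend.leave_room("eavesdropper")  # the profile vanishes from the roster...

print("eavesdropper" in backend.visible_members)  # False: not shown in room
print(media.can_receive_audio(token))             # True: audio still flows
```

Closing the gap requires the media end to revoke or re-validate tokens when the app end’s membership state changes, rather than trusting expiry alone.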
The Agora partnership has sparked other forms of worries. The company, which operates mainly from the U.S. and China, noted in its IPO prospectus that its data may be subject to China’s cybersecurity law, which requires network operators in China to assist police investigations. That possibility, as the Stanford Internet Observatory points out, is contingent on whether Clubhouse stores its data in China.
While the Clubhouse API is banned in China, the Agora API appears unblocked. Tests by TechCrunch found that users currently need a VPN to join a room, an action managed by Clubhouse, but can listen to the room’s conversation, which is facilitated by Agora, with the VPN off. It’s also worth noting that the app was not available on the Chinese App Store even before its ban; Chinese users had downloaded the app through workarounds.
The Clubhouse team may have been overwhelmed by data questions in the past few days, but these early observations from researchers and hackers may push it to fix its vulnerabilities sooner, paving the way to grow beyond its several million loyal users and $1 billion valuation.
The prospect of Web users being tracked by the sites they visit has prompted several countermeasures over the years, including using Privacy Badger or an alternate anti-tracking extension, enabling private or incognito browsing sessions, or clearing cookies. Now, websites have a new way to defeat all three.
The technique leverages the use of favicons, the tiny icons that websites display in users’ browser tabs and bookmark lists. Researchers from the University of Chicago said in a new paper that most browsers cache the images in a location that’s separate from the ones used to store site data, browsing history, and cookies. Websites can abuse this arrangement by loading a series of favicons on visitors’ browsers that uniquely identify them over an extended period of time.
Powerful tracking vector
“Overall, while favicons have long been considered a simple decorative resource supported by browsers to facilitate websites’ branding, our research demonstrates that they introduce a powerful tracking vector that poses a significant privacy threat to users,” the researchers wrote.
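The mechanism can be sketched as a simulation. Assuming an N-bit identifier and one favicon path per bit (both illustrative choices, not the paper’s exact parameters), a site can write an ID into the favicon cache on a first visit and read it back later from cache hits alone:

```python
# Simulation of the favicon-based identifier described above: each subpath
# serves its own favicon, and whether a returning browser re-requests it
# (cache miss) or not (cache hit) leaks one bit of a per-user ID. The
# 4-bit width and path names are illustrative; the real scheme scales the
# same idea to enough bits to be unique.

N_BITS = 4
PATHS = [f"/p{i}" for i in range(N_BITS)]

class Browser:
    def __init__(self):
        self.favicon_cache = set()  # stored separately from cookies/history

    def visit(self, path, server_serves_favicon):
        """Return True if the browser had to request the favicon."""
        if path in self.favicon_cache:
            return False                      # cache hit: no request sent
        if server_serves_favicon:
            self.favicon_cache.add(path)      # store the served favicon
        return True                           # cache miss: request observed

def write_id(browser, user_id):
    """First visit: serve favicons only on paths matching the ID's 1-bits."""
    for i, path in enumerate(PATHS):
        browser.visit(path, server_serves_favicon=bool(user_id >> i & 1))

def read_id(browser):
    """Return visit: cache hits (no request) reveal the stored 1-bits."""
    return sum((not browser.visit(path, server_serves_favicon=False)) << i
               for i, path in enumerate(PATHS))

b = Browser()
write_id(b, 0b1011)
print(read_id(b))  # 11 == 0b1011: the ID is recovered from the cache alone
```

Because the simulated cache is never touched by cookie clearing or incognito checks, the identifier persists across them, which is the crux of the researchers’ finding.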
Last month, Facebook-owned WhatsApp announced it would delay enforcement of its new privacy terms, following a backlash from confused users which later led to a legal challenge in India and various regulatory investigations. WhatsApp users had misinterpreted the privacy updates as an indication that the app would begin sharing more data — including their private messages — with Facebook. Today, the company is sharing the next steps it’s taking to try to rectify the issue and clarify that’s not the case.
The mishandling of the privacy update on WhatsApp’s part led to widespread confusion and misinformation. In reality, WhatsApp had been sharing some information about its users with Facebook since 2016, following its acquisition by Facebook.
But the backlash is a solid indication of how much user trust Facebook has squandered. People immediately suspected the worst, and millions fled to alternative messaging apps, like Signal and Telegram, as a result.
Following the outcry, WhatsApp attempted to explain that the privacy update was actually focused on optional business features on the app, which allow a business to see the content of messages between it and the end user, and give the business permission to use that information for its own marketing purposes, including advertising on Facebook. WhatsApp also said it labels conversations with businesses that use Facebook’s hosting services to manage their chats with customers, so users are aware.
In the weeks since the debacle, WhatsApp says it spent time gathering user feedback and listening to concerns from people in various countries. The company found that users wanted assurance that WhatsApp was not reading their private messages or listening to their conversations, and that their communications were end-to-end encrypted. Users also said they wanted to know that WhatsApp wasn’t keeping logs of who they were messaging or sharing contact lists with Facebook.
These latter concerns seem valid, given that Facebook recently made its messaging systems across Facebook, Messenger and Instagram interoperable. One has to wonder when similar integrations will make their way to WhatsApp.
Today, WhatsApp says it will roll out new communications to users about the privacy update, which follows the Status update it offered back in January aimed at clarifying points of confusion. (See below).
In a few weeks, WhatsApp will begin to roll out a small, in-app banner that will ask users to re-review the privacy policies — a change the company said users have shown to prefer over the pop-up, full-screen alert it displayed before.
When users click on “to review,” they’ll be shown a deeper summary of the changes, including added details about how WhatsApp works with Facebook. The changes stress that WhatsApp’s update doesn’t impact the privacy of users’ conversations, and reiterate the information about the optional business features.
Eventually, WhatsApp will begin to remind users to review and accept its updates to keep using WhatsApp. According to its prior announcement, it won’t be enforcing the new policy until May 15.
Users will still need to be aware that their communications with businesses are not as secure as their private messages. This impacts a growing number of WhatsApp users, 175 million of whom now communicate with businesses on the app, WhatsApp said in October.
In today’s blog post about the changes, WhatsApp also took a big swipe at rival messaging apps that used the confusion over the privacy update to draw in WhatsApp’s fleeing users by touting their own app’s privacy.
“We’ve seen some of our competitors try to get away with claiming they can’t see people’s messages – if an app doesn’t offer end-to-end encryption by default that means they can read your messages,” WhatsApp’s blog post read.
This seems to be a comment directed specifically towards Telegram, which often touts its “heavily encrypted” messaging app as more private alternative. But Telegram doesn’t offer end-to-end encryption by default, as apps like WhatsApp and Signal do. It uses “transport layer” encryption that protects the connection from the user to the server, a Wired article citing cybersecurity professionals explained in January. When users want an end-to-end encrypted experience for their one-on-one chats, they can enable the “secret chats” feature instead. (And this feature isn’t even available for group chats.)
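The difference comes down to who holds the key, which a toy model can illustrate. The XOR “cipher” below is a stand-in for real cryptography and is not how either app encrypts anything; it only shows where the plaintext is recoverable in each model.

```python
from itertools import cycle

# Toy contrast between transport-layer and end-to-end encryption. The XOR
# "cipher" is a deliberately weak stand-in; the point is only who holds
# the key, not the cipher itself.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

msg = b"meet at noon"

# Transport encryption (Telegram's default "cloud chats" model): the
# client<->server link is encrypted, but the server holds the key, so it
# can read the plaintext before relaying it.
transport_key = b"server-held-key"
on_the_wire = xor(msg, transport_key)
print(xor(on_the_wire, transport_key))   # server recovers b'meet at noon'

# End-to-end encryption (the WhatsApp/Signal default): only the two
# endpoints share the key; the server relays ciphertext it cannot read.
e2e_key = b"shared-only-by-endpoints"
relayed = xor(msg, e2e_key)
print(xor(relayed, transport_key) == msg)  # False: the server's key is useless
```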
In addition, WhatsApp fought back against the characterization that it’s somehow less safe because it has some limited data on users.
“Other apps say they’re better because they know even less information than WhatsApp. We believe people are looking for apps to be both reliable and safe, even if that requires WhatsApp having some limited data,” the post read. “We strive to be thoughtful on the decisions we make and we’ll continue to develop new ways of meeting these responsibilities with less information, not more,” it noted.
A security lapse by a Jamaican government contractor has exposed immigration records and COVID-19 test results for hundreds of thousands of travelers who visited the island over the past year.
The Jamaican government contracted Amber Group to build the JamCOVID19 website and app, which the government uses to publish daily coronavirus figures and allows residents to self-report their symptoms. The contractor also built the website to pre-approve travel applications to visit the island during the pandemic, a process that requires travelers to upload a negative COVID-19 test result before they board their flight if they come from high-risk countries, including the United States.
But a cloud storage server storing those uploaded documents was left unprotected and without a password, and was publicly spilling out files onto the open web.
Many of the victims whose information was found on the exposed server are Americans.
The data is now secure after TechCrunch contacted Amber Group’s chief executive Dushyant Savadia, who did not comment when reached prior to publication.
The storage server, hosted on Amazon Web Services, was set to public. It’s not known how long the data was unprotected, but the server contained more than 70,000 negative COVID-19 lab results; over 425,000 immigration documents authorizing travel to the island, which included the traveler’s name, date of birth and passport number; and over 250,000 quarantine orders dating back to June 2020, when Jamaica reopened its borders to visitors after the pandemic’s first wave. The server also contained more than 440,000 images of travelers’ signatures.
Two U.S. travelers whose lab results were among the exposed data told TechCrunch that they uploaded their COVID-19 results through the Visit Jamaica website before their travel. Once lab results are processed, travelers receive a travel authorization that they must present before boarding their flight.
Both of these documents, as well as quarantine orders that require visitors to shelter in place and several passports, were on the exposed storage server.
Travelers staying outside Jamaica’s so-called “resilient corridor,” a zone that covers a large portion of the island’s population, are told to install the Amber Group-built app, which tracks their location and reports it to the Ministry of Health to ensure visitors stay within the corridor. The app also requires travelers to record short “check-in” videos with a daily code sent by the government, along with their name and any symptoms.
The server exposed more than 1.1 million of those daily updating check-in videos.
The server also contained dozens of daily timestamped spreadsheets named “PICA,” likely for Jamaica’s Passport, Immigration and Citizenship Agency; these files were restricted by access permissions. But the permissions on the storage server itself were set so that anyone had full control of the files inside, allowing them to be downloaded or deleted altogether. (TechCrunch did neither, as doing so would be unlawful.)
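A misconfiguration like this is visible in the bucket’s access control list. The sketch below checks an ACL shaped like the response of AWS’s GetBucketAcl API for grants to the public AllUsers group; the bucket and grants shown are hypothetical, not JamCOVID19’s actual configuration.

```python
# Sketch of detecting a publicly writable bucket from its ACL. The dict
# mirrors the shape of AWS's GetBucketAcl response; the owner and grants
# here are hypothetical.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_permissions(acl: dict) -> set:
    """Return the set of permissions granted to everyone on the internet."""
    found = set()
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            found.add(grant["Permission"])
    return found

# A bucket granting FULL_CONTROL to AllUsers, as reported here, lets any
# anonymous visitor list, download, overwrite or delete the files.
exposed_acl = {
    "Owner": {"DisplayName": "example-owner"},
    "Grants": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "FULL_CONTROL"},
    ],
}

print(public_permissions(exposed_acl))  # {'FULL_CONTROL'}
```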
Stephen Davidson, a spokesperson for the Jamaican Ministry of Health, did not comment when reached, or say if the government planned to inform travelers of the security lapse.
Savadia founded Amber Group in 2015 and soon launched its vehicle-tracking system, Amber Connect.
According to one report, Amber’s Savadia said the company developed JamCOVID19 “within three days” and made it available to the Jamaican government in large part for free. The contractor is billing other countries, including Grenada and the British Virgin Islands, for similar implementations, and is said to be looking for other government customers outside the Caribbean.
Savadia would not say what measures his company put in place to protect the data of paying governments.
Jamaica has recorded at least 19,300 coronavirus cases on the island to date, and more than 370 deaths.
Facebook has been fined again by Italy’s competition authority — this time the penalty is €7 million (~$8.4M) — for failing to comply with an earlier order related to how it informs users about the commercial uses it makes of their data.
The AGCM began investigating certain commercial practices by Facebook back in 2018, including the information it provided to users at sign up and the lack of an opt out for advertising. Later the same year it went on to fine Facebook €10M for two violations of the country’s Consumer Code.
But the watchdog’s action did not stop there. It went on to launch further proceedings against Facebook in 2020 — saying the tech giant was still failing to inform users “with clarity and immediacy” about how it monetizes their data.
“Facebook Ireland Ltd. and Facebook Inc. have not complied with the warning to remove the incorrect practice on the use of user data and have not published the corrective declaration requested by the Authority,” the AGCM writes in a press release today (issued in Italian; which we’ve translated with Google Translate).
The authority said Facebook is still misleading users who register on its platform by not informing them — “immediately and adequately” — at the point of sign up that it will collect and monetize their personal data. Instead it found Facebook emphasizes its service’s ‘gratuitousness’.
“The information provided by Facebook was generic and incomplete and did not provide an adequate distinction between the use of data necessary for the personalization of the service (with the aim of facilitating socialization with other users) and the use of data to carry out targeted advertising campaigns,” the AGCM goes on.
It had already fined Facebook €5M over the same issue of failing to provide adequate information about its use of people’s data. But it also ordered it to correct the practice — and publish an “amendment” notice on its website and apps for users in Italy. Neither of which Facebook has done, per the regulator.
Facebook, meanwhile, has been fighting the AGCM’s order via the Italian legal system — making a petition to the Council of State.
A hearing of Facebook’s appeal against the non-compliance proceedings took place in September last year and a decision is still pending.
Reached for comment on AGCM’s action, a Facebook spokesperson told us: “We note the Italian Competition Authority’s announcement today, but we await the Council of State decision on our appeal against the Authority’s initial findings.”
“Facebook takes privacy extremely seriously and we have already made changes, including to our Terms of Service, to further clarify how Facebook uses data to provide its service and to provide tailored advertising,” it added.
Last year, at the time the AGCM instigated further proceedings against it, Facebook told us it had amended the language of its terms of service back in 2019 — to “further clarify” how it makes money, as it put it.
However while the tech giant appears to have removed a direct “claim of gratuity” it had previously been presenting users at the point of registration, the Italian watchdog is still not happy with how far it’s gone in its presentation to new users — saying it’s still not being “immediate and clear” enough in how it provides information on the collection and use of their data for commercial purposes.
The authority points out that this is key information for people to weigh up in deciding whether or not to join Facebook — given the economic value Facebook gains via the transfer of their personal data.
For its part, Facebook argues that it’s fair to describe a service as ‘free’ if there’s no monetary charge for use. Although it has also made changes to how it describes this value exchange to users — including dropping its former slogan that “Facebook is free and always will be” in favor of some fuzzier phrasing.
On the arguably more salient legal point that Facebook is also appealing — related to the lack of a direct opt out for Facebook users to prevent their data being used for targeted ads — Facebook denies there’s any lack of consent to see here, claiming it does not give any user information to third parties unless the person has chosen to share their information and give consent.
Rather it says this consent process happens off its own site, on a case-by-case basis — i.e. when people decide whether or not to install third-party apps or use Facebook Login to log into third-party websites — and where, it argues, they will be asked by those third parties whether they want Facebook to share their data.
(Facebook’s lead data supervisor in Europe, Ireland’s DPC, has an open investigation into Facebook on exactly this issue of so-called ‘forced consent’ — with complaints filed the moment Europe’s General Data Protection Regulation began being applied in May 2018.)
The tech giant also flags on-site tools and settings it does offer its own users — such as ‘Why Am I Seeing This Ad’, ‘Ads Preferences’ and ‘Manage Activity’ — which it claims increase transparency and control for Facebook users.
It also points to the ‘Off-Facebook Activity’ setting it launched last year — which shows users some information about which third party services are sending their data to Facebook and lets them disconnect that information from their account. Though there’s no way for users to request the third party delete their data via Facebook. (That requires going to each third party service individually to make a request.)
Last year a German court ruled against a consumer rights challenge to Facebook’s use of the self-promotional slogan that its service is “free and always will be” — on the grounds that the company does not require users to literally hand over monetary payments in exchange for using the service. Although the court found against Facebook on a number of other issues bundled into the challenge related to how it handles user data.
In another interesting development last year, Germany’s federal court also unblocked a separate legal challenge to Facebook’s use of user data which has been brought by the country’s competition watchdog. If that landmark challenge prevails Facebook could be forced to stop combining user data across different services and from the social plug-ins and tracking pixels it embeds in third parties’ digital services.
The company is also now facing rising challenges to its unfettered use of people’s data via the private sector, with Apple set to switch on an opt-in consent mechanism for app tracking on iOS this spring. Browser makers have also been long stepping up action against consentless tracking — including Google, which is working on phasing out support for third party cookies on Chrome.
TikTok is facing a fresh round of regulatory complaints in Europe where consumer protection groups have filed a series of coordinated complaints alleging multiple breaches of EU law.
The European Consumer Organisation (BEUC) has lodged a complaint against the video sharing site with the European Commission and the bloc’s network of consumer protection authorities, while consumer organisations in 15 countries have alerted their national authorities and urged them to investigate the social media giant’s conduct, BEUC said today.
The complaints include claims of unfair terms, including in relation to copyright and TikTok’s virtual currency; concerns around the type of content children are being exposed to on the platform; and accusations of misleading data processing and privacy practices.
Details of the alleged breaches are set out in two reports associated with the complaints: One covering issues with TikTok’s approach to consumer protection, and another focused on data protection and privacy.
On child safety, the report accuses TikTok of failing to protect children and teenagers from hidden advertising and “potentially harmful” content on its platform.
“TikTok’s marketing offers to companies who want to advertise on the app contributes to the proliferation of hidden marketing. Users are for instance triggered to participate in branded hashtag challenges where they are encouraged to create content of specific products. As popular influencers are often the starting point of such challenges the commercial intent is usually masked for users. TikTok is also potentially failing to conduct due diligence when it comes to protecting children from inappropriate content such as videos showing suggestive content which are just a few scrolls away,” the BEUC writes in a press release.
TikTok has already faced a regulatory intervention in Italy this year in response to child safety concerns — in that instance after the death of a ten year old girl in the country. Local media had reported that the child died of asphyxiation after participating in a ‘black out’ challenge on TikTok — triggering the emergency intervention by the DPA.
Soon afterwards TikTok agreed to reissue an age gate to verify the age of every user in Italy, although the check merely asks the user to input a date of birth to confirm their age, so it seems trivially easy to circumvent.
In the BEUC’s report, the consumer rights group draws attention to TikTok’s flimsy age gate, writing that: “In practice, it is very easy for underage users to register on the platform as the age verification process is very loose and only self-declaratory.”
From the report:
In France, 45% of children below 13 have indicated using the app. In the United Kingdom, a 2020 study from the Office for Telecommunications (OFCOM) revealed that 50% of children between eight and 15 upload videos on TikTok at least weekly. In Czech Republic, a 2019 study found out that TikTok is very popular among children aged 11-12. In Norway, a news article reported that 32% of children aged 10-11 used TikTok in 2019. In the United States, The New York Times revealed that more than one-third of daily TikTok users are 14 or younger, and many videos seem to come from children who are below 13. The fact that many underage users are active on the platform does not come as a surprise as recent studies have shown that, on average, a majority of children owns mobile phones earlier and earlier (for example, by the age of seven in the UK).
A recent EU-backed study also found that age checks on popular social media platforms are “basically ineffective” as they can be circumvented by children of all ages simply by lying about their age.
A virtual currency feature it offers is also highlighted as problematic in consumer rights terms.
TikTok lets users purchase digital coins which they can use to buy virtual gifts for other users (which can in turn be converted by the user back to fiat). But BEUC says its ‘Virtual Item Policy’ contains “unfair terms and misleading practices” — pointing to how it claims an “absolute right” to modify the exchange rate between the coins and the gifts, thereby “potentially skewing the financial transaction in its own favour”.
While TikTok displays the price to buy packs of its virtual coins there is no clarity over the process it applies for the conversion of these gifts into in-app diamonds (which the gift-receiving user can choose to redeem for actual money, remitted to them via PayPal or another third party payment processing tool).
“The amount of the final monetary compensation that is ultimately earned by the content provider remains obscure,” BEUC writes in the report, adding: “According to TikTok, the compensation is calculated ‘based on various factors including the number of diamonds that the user has accrued’… TikTok does not indicate how much the app retains when content providers decide to convert their diamonds into cash.”
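The opacity BEUC describes can be made concrete with a toy model of the pipeline. Every rate and percentage below is hypothetical, precisely because TikTok does not publish them; the point is that the platform controls each conversion step and the retained cut.

```python
# Toy model, with entirely hypothetical numbers, of the coins -> gifts ->
# diamonds -> cash pipeline the report describes. The platform sets every
# rate below and, per BEUC, can modify them at will.

COIN_PRICE_USD = 0.01      # what a viewer pays per coin (hypothetical)
DIAMONDS_PER_COIN = 0.5    # platform-set gift-to-diamond rate (hypothetical)
USD_PER_DIAMOND = 0.005    # platform-set cash-out rate (hypothetical)
PLATFORM_CUT = 0.50        # undisclosed share retained on cash-out (hypothetical)

def creator_payout(coins_gifted: int) -> float:
    """Cash a creator nets after the platform's conversions and cut."""
    diamonds = coins_gifted * DIAMONDS_PER_COIN
    return round(diamonds * USD_PER_DIAMOND * (1 - PLATFORM_CUT), 2)

viewer_spend = 1000 * COIN_PRICE_USD   # a fan buys 1,000 coins: $10.00
payout = creator_payout(1000)          # the creator nets $1.25
print(viewer_spend, payout)            # 10.0 1.25
```

Under these made-up rates the creator receives 12.5% of what the fan spent, and since each constant is platform-controlled, that share can change without notice, which is BEUC’s core objection.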
“Playful at a first glance, TikTok’s Virtual Item Policy is highly problematic from the point of view of consumer rights,” it adds.
On data protection and privacy, the social media platform is also accused of a whole litany of “misleading” practices — including (again) in relation to children. Here the complaint accuses TikTok of failing to clearly inform users about what personal data is collected, for what purpose, and for what legal reason — as is required under Europe’s General Data Protection Regulation (GDPR).
Other issues flagged in the report include the lack of any opt-out from personal data being processed for advertising (aka ‘forced consent’ — something tech giants like Facebook and Google have also been accused of); the lack of explicit consent for processing sensitive personal data (which has special protections under GDPR); and an absence of security and data protection by design, among other issues.
We’ve reached out to the Irish Data Protection Commission (DPC), which is TikTok’s lead supervisor for data protection issues in the EU, about the complaint and will update this report with any response.
France’s data watchdog, the CNIL, already opened an investigation into TikTok last year — prior to the company shifting its regional legal base to Ireland (meaning data protection complaints must now be funnelled through the Irish DPC via the GDPR’s one-stop-shop mechanism, adding to the regulatory backlog).
Ausloos suggests such sudden massive shifts are a deliberate tactic to evade regulatory scrutiny of data-exploiting practices — as “constant flux” can have the effect of derailing and/or resetting research work being undertaken to build a case for enforcement — also pointing out that resource-strapped regulators may be reluctant to bring cases against companies ‘after the fact’ (i.e. if they’ve since changed a practice).
The upshot of breaches that iterate is that repeat violations of the law may never be enforced.
It’s also true that a frequent refrain of platforms at the point of being called out (or called up) on specific business practices is to claim they’ve since changed how they operate — seeking to use that as a defence to limit the impact of regulatory enforcement or indeed a legal ruling. (Aka: ‘Move fast and break regulatory accountability’.)
Nonetheless, Ausloos says the complainants’ hope now is that the two years of documentation undertaken on the TikTok case will help DPAs build cases.
Commenting on the complaints in a statement, Monique Goyens, DG of BEUC, said: “In just a few years, TikTok has become one of the most popular social media apps with millions of users across Europe. But TikTok is letting its users down by breaching their rights on a massive scale. We have discovered a whole series of consumer rights infringements and therefore filed a complaint against TikTok.
“Children love TikTok but the company fails to keep them protected. We do not want our youngest ones to be exposed to pervasive hidden advertising and unknowingly turned into billboards when they are just trying to have fun.
“Together with our members — consumer groups from across Europe — we urge authorities to take swift action. They must act now to make sure TikTok is a place where consumers, especially children, can enjoy themselves without being deprived of their rights.”
Reached for comment on the complaints, a TikTok spokesperson provided a statement.
Facebook CEO Mark Zuckerberg told employees close to him, “we need to inflict pain” on Apple for comments by Apple CEO Tim Cook that Zuckerberg described as “extremely glib.”
This and other insights into an ongoing rift between the two companies appeared in a report in The Wall Street Journal this weekend. The article indicates that based on first-hand reports, Zuckerberg has taken Cook and Apple’s public criticisms of Facebook’s privacy policies, whether direct or indirect, as personal affronts.
For example, Cook publicly responded to Facebook’s 2018 Cambridge Analytica scandal by saying such a scandal would never happen to Apple because Apple does not treat its customers like products. When asked what he would do in Zuckerberg’s position, he said, “I wouldn’t be in this situation,” calling Facebook’s approach “an invasion of privacy.” This was one of the comments that have led Zuckerberg to see Apple as an opponent.
Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.
As part of the enforcement, the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.
The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.
Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.
Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.
“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.
The IMY’s full decision can be found here (in Swedish).
“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.
The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place are not a reason to reduce a penalty fee, so it’s not entirely clear why the police avoided a bigger fine.)
The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.
The IMY said it investigated the police’s use of the controversial technology following reports in local media.
Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.
European Union data protection law puts a high bar on the processing of special category data, such as biometrics.
Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.
Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.
The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.
However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).
It did not order deletion of the photos themselves. It also did not issue a pan-EU order banning the collection of any European resident’s photos as it could have done and as European privacy campaign group, noyb, had been pushing for.
noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and ask it to delete any data it has on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.
European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence, with draft legislation expected to be put forward this year. The Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).
Earlier this month the controversial facial recognition company was ruled illegal by Canadian privacy authorities — who warned they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.
Clearview said it had stopped providing its tech to Canadian customers last summer.
It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.
Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.
Virginia is poised to follow in California’s footsteps any minute now and become the second state in the country to adopt a comprehensive online data protection law for consumers.
If adopted, the Consumer Data Protection Act would apply to entities of a certain size that do business in Virginia or have users based in Virginia. The bill enjoys broad popular support among state lawmakers; it passed 89-9 in the Virginia House and unanimously (39-0) in the state Senate, and Democratic Gov. Ralph Northam is widely expected to sign it into law without issue in the coming days.
In the absence of a general-purpose federal privacy framework, states all over the nation are very slowly stepping in with their own solutions. The Virginia law is somewhat modeled on California’s landmark Consumer Privacy Act, which was signed into law in 2018 and took effect on January 1, 2020. Legislatures in several other states—including Minnesota, New York, North Dakota, Oklahoma, and Washington—have some kind of data privacy bills currently under consideration.
The Duchess of Sussex sued after The Mail on Sunday published extracts of a letter she had written to her estranged father in 2018.
The weird deal Oracle arranged at the behest of the Trump administration to buy TikTok without actually acquiring it has been permanently back-burnered, according to a new report.
The transaction, which has gone effectively nowhere since it was first announced, is now “shelved,” the ever-popular “people familiar with the situation” told The Wall Street Journal. This move effectively puts an end to a saga that played out over many months and many tweets.
Back in August 2020 (roughly a hundred years ago, it now feels like), former President Donald Trump issued an executive order declaring TikTok and another China-based app, WeChat, to be a “national emergency.” A week later, a second order (PDF) gave TikTok’s parent company, Beijing-based ByteDance, 90 days to divest the app to a US owner or cease operations in the States.
The European Union’s lead data protection supervisor has recommended that a ban on targeted advertising based on tracking Internet users’ digital activity be included in a major reform of digital services rules which aims to increase operators’ accountability, among other key goals.
The European Data Protection Supervisor (EDPS), Wojciech Wiewiorówski, made the call for a ban on surveillance-based targeted ads in reference to the Commission’s Digital Services Act (DSA) — following a request for consultation from EU lawmakers.
The DSA legislative proposal was introduced in December, alongside the Digital Markets Act (DMA) — kicking off the EU’s (often lengthy) co-legislative process which involves debate and negotiations in the European Parliament and Council on amendments before any final text can be agreed for approval. This means battle lines are being drawn to try to influence the final shape of the biggest overhaul to pan-EU digital rules for decades — with everything to play for.
The intervention by Europe’s lead data protection supervisor calling for a ban on targeted ads is a powerful pre-emptive push against attempts to water down legislative protections for consumer interests.
The Commission had not gone so far in its proposal — but big tech lobbyists are certainly pushing in the opposite direction so the EDPS taking a strong line here looks important.
In his opinion on the DSA the EDPS writes that “additional safeguards” are needed to supplement risk mitigation measures proposed by the Commission — arguing that “certain activities in the context of online platforms present increasing risks not only for the rights of individuals, but for society as a whole”.
Online advertising, recommender systems and content moderation are the areas the EDPS is particularly concerned about.
“Given the multitude of risks associated with online targeted advertising, the EDPS urges the co-legislators to consider additional rules going beyond transparency,” he goes on. “Such measures should include a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking, as well as restrictions in relation to the categories of data that can be processed for targeting purposes and the categories of data that may be disclosed to advertisers or third parties to enable or facilitate targeted advertising.”
It’s the latest regional salvo aimed at mass-surveillance-based targeted ads after the European Parliament called for tighter rules back in October — when it suggested EU lawmakers should consider a phased in ban.
Again, though, the EDPS is going a bit further here in actually calling for one. (Facebook’s Nick Clegg will be clutching his pearls.)
More recently, the CEO of European publishing giant Axel Springer, a long time co-conspirator of adtech interests, went public with a (rather protectionist-flavored) rant about US-based data-mining tech platforms turning citizens into “the marionettes of capitalist monopolies” — calling for EU lawmakers to extend regional privacy rules by prohibiting platforms from storing personal data and using it for commercial gain at all.
Apple CEO, Tim Cook, also took to the virtual stage of a (usually) Brussels based conference last month to urge Europe to double down on enforcement of its flagship General Data Protection Regulation (GDPR).
In the speech Cook warned that the adtech ‘data complex’ is fuelling a social catastrophe by driving the spread of disinformation as it works to profit off of mass manipulation. He went on to urge lawmakers on both sides of the pond to “send a universal, humanistic response to those who claim a right to users’ private information about what should not and will not be tolerated”. So it’s not just European companies (and institutions) calling for pro-privacy reform of adtech.
The iPhone maker is preparing to introduce stricter limits on tracking on its smartphones by making apps ask users for permission to track, instead of just grabbing their data — a move that’s naturally raised the hackles of the adtech sector, which relies on mass surveillance to power ‘relevant’ ads.
Hence the adtech industry has resorted to crying ‘antitrust‘ as a tactic to push competition regulators to block platform-level moves against its consentless surveillance. And on that front it’s notable that the EDPS’ opinion on the DMA, which proposes extra rules for intermediating platforms with the most market power, reiterates the vital links between competition, consumer protection and data protection law — saying these three are “inextricably linked policy areas in the context of the online platform economy”; and that there “should be a relationship of complementarity, not a relationship where one area replaces or enters into friction with another”.
Wiewiorówski also takes aim at recommender systems in his DSA opinion — saying these should not be based on profiling by default to ensure compliance with regional data protection rules (where privacy by design and default is supposed to be the legal default).
Here too he calls for additional measures to beef up the Commission’s legislative proposal — with the aim of “further promot[ing] transparency and user control”.
This is necessary because such systems have “significant impact”, the EDPS argues.
The role of content recommendation engines in driving Internet users towards hateful and extremist points of view has long been a subject of public scrutiny. Back in 2017, for example, UK parliamentarians grilled a number of tech companies on the topic — raising concerns that AI-driven tools, engineered to maximize platform profit by increasing user engagement, risked automating radicalization, damaging not just the individuals who become hooked on the hateful views the algorithms feed them but also causing cascading knock-on harms for all of us as societal cohesion is eaten away in the name of keeping the eyeballs busy.
Yet years on little information is available on how such algorithmic recommender systems work because the private companies that operate and profit off these AIs shield the workings as proprietary business secrets.
The Commission’s DSA proposal takes aim at this sort of secrecy as a bar to accountability — with its push for transparency obligations. The proposed obligations (in the initial draft) include requirements for platforms to provide “meaningful” criteria used to target ads; and explain the “main parameters” of their recommender algorithms; as well as requirements to foreground user controls (including at least one “nonprofiling” option).
However the EDPS wants regional lawmakers to go further in the service of protecting individuals from exploitation (and society as a whole from the toxic byproducts that flow from an industry based on harvesting personal data to manipulate people).
On content moderation, Wiewiorówski’s opinion stresses that this should “take place in accordance with the rule of law”. The Commission draft, by contrast, favors leaving it to platforms to interpret the law.
“Given the already endemic monitoring of individuals’ behaviour, particularly in the context of online platforms, the DSA should delineate when efforts to combat ‘illegal content’ legitimise the use of automated means to detect, identify and address illegal content,” he writes, in what looks like a tacit recognition of recent CJEU jurisprudence in this area.
“Profiling for purposes of content moderation should be prohibited unless the provider can demonstrate that such measures are strictly necessary to address the systemic risks explicitly identified by the DSA,” he adds.
The EDPS has also suggested minimum interoperability requirements for very large platforms, and for those designated as ‘gatekeepers’ (under the DMA), and urges lawmakers to work to promote the development of technical standards to help with this at the European level.
On the DMA, he also urges amendments to ensure the proposal “complements the GDPR effectively”, as he puts it, calling for “increasing protection for the fundamental rights and freedoms of the persons concerned, and avoiding frictions with current data protection rules”.
Among the EDPS’ specific recommendations are: That the DMA makes it clear that gatekeeper platforms must provide users with easier and more accessible consent management; clarification to the scope of data portability envisaged in the draft; and rewording of a provision that requires gatekeepers to provide other businesses with access to aggregated user data — again with an eye on ensuring “full consistency with the GDPR”.
The opinion also raises the issue of the need for “effective anonymisation” — with the EDPS calling for “re-identification tests when sharing query, click and view data in relation to free and paid search generated by end users on online search engines of the gatekeeper”.
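The EDPS does not spell out what a “re-identification test” should look like in practice. One common baseline check (an illustrative choice on my part, not something specified in the opinion) is k-anonymity: bucket the records being shared by their quasi-identifying fields and measure the smallest bucket. A minimal sketch, using invented click records that reflect no real gatekeeper’s data:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are bucketed by quasi-identifier values.

    A dataset is k-anonymous if every combination of quasi-identifier values
    is shared by at least k records; a bucket of size 1 is a unique,
    potentially re-identifiable individual.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical search/click records a gatekeeper might share with third parties.
records = [
    {"query": "shoes", "region": "SE", "device": "ios"},
    {"query": "shoes", "region": "SE", "device": "ios"},
    {"query": "rare disease", "region": "SE", "device": "android"},
]

k = k_anonymity(records, ["query", "region", "device"])
print(k)  # 1 -> the unique record fails even this weakest anonymity test
```

Real re-identification testing goes well beyond this (linkage attacks, differential privacy audits), but even this crude check shows why “anonymized” query and click data can remain personal data in the GDPR’s sense.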
ePrivacy reform emerges from stasis
Wiewiorówski’s contributions to shaping incoming platform regulations come on the same day that the European Council has finally reached agreement on its negotiating position for a long-delayed EU reform effort around existing ePrivacy rules.
In a press release announcing the development, the Commission writes that Member States agreed on a negotiating mandate for revised rules on the protection of privacy and confidentiality in the use of electronic communications services.
“These updated ‘ePrivacy’ rules will define cases in which service providers are allowed to process electronic communications data or have access to data stored on end-users’ devices,” it writes, adding: “Today’s agreement allows the Portuguese presidency to start talks with the European Parliament on the final text.”
Reform of the ePrivacy directive has been stalled for years as conflicting interests locked horns — putting paid to the (prior) Commission’s hopes that the whole effort could be done and dusted in 2018. (The original ePrivacy reform proposal came out in January 2017; four years later the Council has finally settled on its negotiating mandate.)
The fact that the GDPR was passed first appears to have upped the stakes for data-hungry ePrivacy lobbyists — in both the adtech and telco space (the latter having a keen interest in removing existing regulatory barriers on comms data so that it can exploit the vast troves of user data that Internet giants running rival messaging and VoIP services have long been able to mine).
There’s a concerted effort to try to use ePrivacy to undo consumer protections baked into GDPR — including attempts to water down protections provided for sensitive personal data. So the stage is set for an ugly rights battle as negotiations kick off with the European Parliament.
Metadata and cookie consent rules are also bound up with ePrivacy so there’s all sorts of messy and contested issues on the table here.
Digital rights advocacy group Access Now summed up the ePrivacy development by slamming the Council for “hugely” missing the mark.
“The reform is supposed to strengthen privacy rights in the EU [but] States poked so many holes into the proposal that it now looks like French Gruyère,” said Estelle Massé, senior policy analyst at Access Now, in a statement. “The text adopted today is below par when compared to the Parliament’s text and previous versions of government positions. We lost forward-looking provisions for the protection of privacy while several surveillance measures have been added.”
The group said it will be pushing to restore requirements for service providers to protect online users’ privacy by default and for the establishment of clear rules against online tracking beyond cookies, among other policy preferences.
The Council, meanwhile, appears to be advocating for a highly dilute (and so probably useless) flavor of ‘do not track’ — by suggesting users should be able to give consent to the use of “certain types of cookies by whitelisting one or several providers in their browser settings”, per the Commission.
“Software providers will be encouraged to make it easy for users to set up and amend whitelists on their browsers and withdraw consent at any moment,” it adds in its press release.
Clearly the devil will be in the detail of the Council’s position there. (The European Parliament has, by contrast, previously clearly endorsed a “legally binding and enforceable” Do Not Track mechanism for ePrivacy so, again, the stage is set for clashes.)
Encryption is another likely bone of ePrivacy contention.
As security and privacy researcher, Dr Lukasz Olejnik, noted back in mid 2017, the parliament strongly backed end-to-end encryption as a means of protecting the confidentiality of comms data — saying then that Member States should not impose any obligations on service providers to weaken strong encryption.
So it’s notable that the Council does not have much to say about e2e encryption — at least in the PR version of its public position. (A line in this that runs: “As a main rule, electronic communications data will be confidential. Any interference, including listening to, monitoring and processing of data by anyone other than the end-user will be prohibited, except when permitted by the ePrivacy regulation” is hardly reassuring, either.)
It certainly looks like a worrying omission given recent efforts at the Council level to advocate for ‘lawful’ access to encrypted data. Digital and human rights groups will be buckling up for a fight.
Police in Minneapolis obtained a search warrant ordering Google to turn over sets of account data on vandals accused of sparking violence in the wake of the police killing of George Floyd last year, TechCrunch has learned.
The death of Floyd, a Black man killed by a white police officer in May 2020, prompted thousands to peacefully protest across the city. But violence soon erupted, which police say began with a masked man seen in a viral video using an umbrella to smash windows of an auto-parts store in south Minneapolis. The AutoZone store was the first among dozens of buildings across the city set on fire in the days following.
The search warrant compelled Google to provide police with the account data on anyone who was “within the geographical region” of the AutoZone store when the violence began on May 27, two days after Floyd’s death.
These so-called geofence warrants — or reverse-location warrants — are frequently directed at Google in large part because the search and advertising giant collects and stores vast databases of geolocation data on billions of account holders who have “location history” turned on. Geofence warrants allow police to cast a digital dragnet over a crime scene and ask tech companies for records on anyone who entered a geographic area at a particular time. But critics say these warrants are unconstitutional as they also gather the account information on innocent passers-by.
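In code terms, a geofence request boils down to a bounding-box-and-time-window filter over stored location records. The sketch below is purely illustrative (the record schema, coordinates, and account IDs are invented, not Google’s actual data model), but it shows why bystanders get swept up alongside suspects:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationRecord:
    account_id: str
    lat: float
    lon: float
    timestamp: datetime

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, start, end):
    """Return every record inside the bounding box during the time window."""
    return [
        r for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and start <= r.timestamp <= end
    ]

# Hypothetical records: a suspect, a bystander filming nearby, and an
# unrelated account across town at the same time.
records = [
    LocationRecord("suspect",   44.934, -93.262, datetime(2020, 5, 27, 17, 25)),
    LocationRecord("bystander", 44.934, -93.263, datetime(2020, 5, 27, 17, 30)),
    LocationRecord("elsewhere", 44.980, -93.270, datetime(2020, 5, 27, 17, 30)),
]

hits = geofence_query(
    records, 44.93, 44.94, -93.27, -93.26,
    datetime(2020, 5, 27, 17, 20), datetime(2020, 5, 27, 17, 40),
)
print([r.account_id for r in hits])  # the bystander matches alongside the suspect
```

The filter has no notion of intent or relevance: anyone whose device reported a position inside the box during the window is returned, which is precisely the critics’ constitutional objection.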
TechCrunch learned of the search warrant from Minneapolis resident Said Abdullahi, who received an email from Google stating that his account information was subject to the warrant, and would be given to the police.
But Abdullahi said he had no part in the violence and was only in the area to video the protests when the violence began at the AutoZone store.
The warrant said police sought “anonymized” account data from Google on any phone or device that was close to the AutoZone store and the parking lot between 5:20pm and 5:40pm (CST) on May 27, when dozens of people had gathered in the area.
When reached, Minneapolis police spokesperson John Elder, citing an ongoing investigation, would not answer specific questions about the warrant, including for what reason the warrant was issued.
According to a police affidavit, police said the protests had been relatively peaceful until the afternoon of May 27, when a masked umbrella-wielding man began smashing the windows of the AutoZone store, located across the street from a Minneapolis police precinct where hundreds of protesters had gathered. Several videos show protesters confronting the masked man.
Police said they spent significant resources on trying to identify the so-called “Umbrella Man,” who they say was the catalyst for widespread violence across the city.
“This was the first fire that set off a string of fires and looting throughout the precinct and the rest of the city,” the affidavit read. At least two people were killed in the unrest. (Erika Christensen, a Minneapolis police investigator who filed the affidavit, was not made available for an interview.)
Police accuse the Umbrella Man of creating an “atmosphere of hostility and tension” whose sole aim was to “incite violence.” (TechCrunch is not linking to the affidavit as the police would not say if the suspect had been charged with a crime.) The affidavit also links the suspect to a white supremacist group called the Aryan Cowboys, and to an incident weeks later where a Muslim woman was harassed.
Multiple videos of the protests around the time listed on the warrant appear to line up with the window-smashing incident. Other videos of the scene at the time of the warrant show hundreds of other people in the vicinity. Police were positioned on rooftops and used tear gas and rubber bullets to control the crowds.
Law enforcement across the U.S. are increasingly relying on geofence warrants to solve crimes where a suspect is not known. Police have defended the use of these warrants because they can help identify potential suspects who entered a certain geographic region where a crime was committed. The warrants typically ask for “anonymized information,” but allow police to go back and narrow their requests on potential suspects of interest.
When allowed by law, Google notifies account holders when law enforcement demands access to the user’s data. According to a court filing in 2019, Google said the number of geofence warrants it received went up by 1,500% between 2017 and 2018, and more than 500% between 2018 and 2019, but it has yet to provide a specific number of warrants.
Google reportedly received over 180 geofence warrants in a single week in 2019. When asked about more recent figures, a Google spokesperson declined to comment on the record.
Civil liberties groups have criticized the use of dragnet search warrants. The American Civil Liberties Union said that geofence warrants “circumvent constitutional checks on police surveillance.” One district court in Virginia said geofence warrants violated the constitution because the majority of individuals whose data is collected will have “nothing whatsoever” to do with the crimes under investigation.
Reports in the past year have implicated people whose only connection to a crime is simply being nearby.
NBC News reported the case of one Gainesville, Fla. resident, who was told by Google that his account information would be given to police investigating a burglary. But the resident was able to prove that he had no connection to the burglary, thanks to an app on his phone that tracked his activity.
In 2019, Google gave federal agents investigating several arson attacks in Milwaukee, Wis. close to 1,500 user records in response to a geofence warrant, thought to be one of the largest grabs of account data to date.
But lawmakers are beginning to push back. New York state lawmakers introduced a bill last year that would, if passed, ban geofence warrants across the state, citing the risk of police targeting protesters. Rep. Kelly Armstrong (R-ND) grilled Google chief executive Sundar Pichai at a House Judiciary subcommittee hearing last year. “People would be terrified to know that law enforcement could grab general warrants and get everyone’s information everywhere,” said Armstrong.
Abdullahi told TechCrunch that he had several videos documenting the protests on the day and that he has retained a lawyer to try to prevent Google from giving his account information to Minneapolis police.
“Police assumed everybody in that area that day is guilty,” he said. “If one person did something criminal, [the police] should not go after the whole block of people.”
Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.
The app industry is as hot as ever, with a record 218 billion downloads and $143 billion in global consumer spend in 2020.
Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV: the average American watches 3.7 hours of live TV per day but now spends four hours per day on their mobile devices.
Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.
This week, we’re taking a look at Clubhouse’s breakout moment — or moments, to be fair. Also, the App Store’s rules were updated, Parler’s CEO was fired and other companies began raising their own red flags about Apple’s privacy changes.
Clubhouse goes mainstream
The invite-only audio platform has been on a roll, and has already hosted big names in tech, media and entertainment, including Drake, Estelle, Tiffany Haddish, Kevin Hart, Jared Leto, Ashton Kutcher, and others in the Silicon Valley tech scene. But this week was a breakout if there ever was one, when on Monday, Tesla and SpaceX founder Elon Musk showed up on Clubhouse, topping the app’s limit of 5,000 people in a single room. With others unable to get in, fans livestreamed the event to other platforms like YouTube, live-tweeted, and set up breakout rooms for the overflow. Musk was later joined by “Vlad The Stock Impaler,” aka Robinhood CEO Vlad Tenev, who of course talked about the GameStop saga — and was then interviewed by Musk himself.
Then on Thursday, Clubhouse saw yet another famous guest: Facebook CEO Mark Zuckerberg, who casually went by “Zuck23” when he joined “The Good Time Show” talk show on the app, as Musk had done before him.
The format of the social media network allowed the execs to informally address a wide audience of listeners with whatever they want to talk about — in Musk’s case, that was space travel, crypto, AI and vaccines, among other things. Zuckerberg, meanwhile, used the time to talk about AR/VR and its future in business and remote work. (If you thought Zoom meetings were bad…).
(And who knows, maybe he wanted to give the app a try for other reasons, too.)
There is something unsettling about this whole arrangement, of course. Soft-balled questions lobbed at billionaires, journalists blocked from rooms, and so on — all on an app financed by a VC firm, Andreessen Horowitz (a16z), that’s said to be interested in cutting out the media middleman, to “go direct” instead. (Not coincidentally, the room inviting the big name guests was co-hosted by a16z’s Andreessen and its new GP, Sriram Krishnan, who is described as having an “optimistic” outlook — perhaps a valuable commodity when much of the media does not.)
Regardless of the machinations behind the scenes that made it happen, it’s hard to ignore an app where the biggest names in tech show up to just chat — or even interview one another.
“Where is all this going?” is a valid question to raise. Some have described Clubhouse as the late-night talk show equivalent. A place where interviews aren’t about asking the hard questions, but rather about whatever the guest came there to say or promote. And that’s fine, of course — as long as everyone understands that when big names arrive, they may do so with an agenda, even when it seems they’re just there for fun.
In any event, Clubhouse proved this week it’s no longer a buzzy newcomer. For now, at least, it’s decidedly in the game.
Companies (besides Facebook) warn investors about Apple’s privacy changes
So far, it may have seemed as if the only two businesses taking real issue with Apple’s privacy changes, including the coming changes to IDFA, were Facebook and Google. Facebook took out full-page ads and weighed lawsuits. Google delayed iOS app updates while it figured out privacy labels. But as other companies reported their fourth-quarter earnings, IDFA impacts were also topping their list of concerns.
In his prepared remarks, Snap CEO Evan Spiegel alerted investors to the potential disruption to Snap's ad business, saying that the privacy changes "will present another risk of interruption" to advertising demand. He also noted it remains unclear what the long-term consequences of those changes may be. Unity, meanwhile, attached a number to it: IDFA changes would reduce its revenue by about 3%, or $30 million, in 2021.
It may be that no one really knows how damaging the IDFA update will be until it rolls out. These are only estimates based on tests and assumptions about user behavior. Plus, there are reports poking holes in Facebook’s claims, which had said that small businesses would suffer a 60% cut in revenues. Those are surely overstated, Harvard Business Review wrote, saying Facebook had cherry-picked and amplified its numbers.
Nevertheless, Facebook is already testing ways to encourage users to accept its tracking. The company on Monday began showing some users prompts that explained why it wants to track and asked users to opt in so Facebook can “provide a better ads experience.” Users could tap “allow” or “don’t allow” in response to the prompt.
Apple updates its App Store Rules
Apple said these were moderate changes — just clarifications and tweaks that had been under way for some time. For example, the new App Store Guidelines now include instructions about how developers should implement the new App Tracking Transparency rules. Another section details how developers can now file an appeal upon an app review rejection.
Other changes are more semantic in nature — changing person-to-person experiences to “services” to broaden the scope, for example, or to clarify how gaming companies can offer a single subscription that works across a variety of standalone apps.
To see what actually changed, go here.
Parler CEO fired
Parler — the app banned from the App Store, Google Play, Amazon AWS, Okta and others — fired its CEO, John Matze, this week after struggling to bring the app back online. According to reports from NPR and others, the firing stemmed from his disagreement with conservative donor Rebekah Mercer, who controls Parler's board. Matze says he argued the app would need to crack down on domestic terrorism and groups that incite violence in order to succeed, but claims he was met with silence. Parler, meanwhile, said those statements were misleading.
After Parler’s rapid deplatforming following the events at the Capitol, other alternative social networks climbed up the charts to take its place. But these apps have not proven themselves to have much staying power. Instead, the top charts are once again filled with the usual: Facebook, Instagram, YouTube, TikTok, Snapchat, etc.
Maybe it’s actually no fun yelling about the world when no one is around to challenge you or fight back?
Apps with earnings news
- Snap beat expectations with revenue of $911 million in Q4, up 62% YoY, versus the $857.4 million expected. Snap's DAUs climbed 22% YoY to 265 million. But the stock dropped on a weak Q1 forecast.
- PayPal reported stronger-than-expected, pandemic-fueled earnings with EPS up 25.58% YoY to $1.08, beating the estimate of $1.00. Revenue was $6.12 billion, up 23.28% YoY, which beat the estimate of $6.09 billion. The company added 16 million net new accounts, bringing the total to 277 million.
- Relatedly, Venmo's total payment volume (TPV) grew 60% year over year to $47 billion, and its customer base grew 32%, ending just shy of 70 million accounts. The company expects its revenues will approach $900 million in 2021.
- Spotify reported revenue growth of 17% YoY to €2.17 billion; 345 million MAUs, up 27% YoY; and paid subscribers of 155 million, up 24%.
- The iOS 14.5 beta arrives with a number of notable new features, including, most notably, ATT (App Tracking Transparency) and the ability to unlock your iPhone while wearing a mask, as long as you're also wearing an Apple Watch. Other changes include worldwide dual-SIM 5G support, AirPlay 2 support for Apple Fitness+, support for PlayStation 5 DualSense and Xbox Series X controllers, support for T-Mobile's standalone 5G network, a new Siri feature for calling emergency services, a toggle to disable emergency alert sounds, emoji search for iPad, small changes in the Reminders, News and Podcasts apps, and more.
- Code in the iOS 14.5 beta also suggests new financial features like Apple Card Family for multiuser accounts and a new framework FinHealth that gives automated suggestions to improve your finances.
- Apple rolls out new and updated design resources for building apps across its platforms, including iOS 14, iPadOS 14, tvOS 14 and macOS Big Sur. On mobile, the new design resources for Sketch have been rebuilt to support color variables, and include numerous minor improvements and bug fixes.
- Apple’s services saw a significant outage this week that impacted, among other things, the App Store, leading to blank pages, broken search results and more.
- Certain U.S. states will allow casino, sports and lottery games from March 1, 2021. Google has already announced a Play Store policy change to allow them. In Apple's updated App Store Guidelines, out this week, it also added "gambling" as one of the app categories that has to be submitted by a legal entity — an indication that it was opening its doors, too.
- App Store growth hit a six-month high in January 2021, Morgan Stanley said, citing Sensor Tower data that indicated App Store net revenue grew 35% YoY in the month. In Japan and Germany, growth reached 60% and in the U.S. it was 42% YoY, due to pandemic impacts.
- Some users are saying third-party apps have been crashing after syncing an iPad or iPhone with an M1 Mac.
- Huawei’s HarmonyOS is being pitched as an original in-house creation, but Ars Technica took a deep dive and found it was really just an Android fork.
- “Millions” of Ford vehicles will use Google’s Android OS to power their infotainment systems, starting in 2023.
- Google is said to be exploring its own alternative to Apple's new anti-tracking feature, which may seem counterintuitive, as Google is in the ads business. But according to a report from Bloomberg, the company is looking into a solution that's "less stringent" than Apple's. That could offer some pushback against Apple's approach becoming the industry standard.
- YouTube launches Clips, a short-form video feature that lets users clip 5 to 60 seconds of a video and share with others, similar to Twitch’s clips feature. The feature is in limited alpha testing.
- Epic Games is urging Australia's market regulator to take action against Apple for using its market power to force developers to pay a 30% commission on paid apps and IAP. Epic is suing Apple in the country, but wants the regulator to step in now.
- In the U.S., a judge orders a 7-hour deposition from Tim Cook in the Epic vs. Apple lawsuit.
- Google hasn't killed its game streaming service Stadia yet, but it did announce this week that it's stepping away from first-party games. The company also announced that Stadia Games and Entertainment head Jade Raymond was leaving, while the existing staff would be moved to other projects.
- Amazon Luna’s game streaming service expands to more Android devices, including Pixel 3, 3XL, 3a, 3a XL; Samsung S9, S9+, Note 9. The service was already available on new Pixel, Samsung and OnePlus devices, among others.
- Color of Change launches The Pedestal Project, an AR experience on Instagram that allows users to place statues of racial justice leaders on the empty pedestals where confederate leaders once stood (or anywhere else). At launch, there are three featured leaders included: Rep. John Lewis, Alicia Garza and Chelsea Miller.
- TikTok partners with WPP to give WPP agencies access to ad products and APIs that are still in development, including new AR formats.
Security & Privacy
- YouTube adds its App Store privacy label, detailing the data it uses to track users. This includes your physical address, email address, phone number, user and device ID, as well as data linked to you for third-party advertising and for app functionality, product personalization and more.
Fintech
- Venmo is turning into a financial super app with additions that include crypto, budgeting, saving and shopping with Honey — all of which are planned for this year.
- Robinhood CEO Vlad Tenev has been asked to testify before the House Financial Services Committee on February 18, over the GameStop debacle. The app still hasn’t recovered its reputation — Play Store reviews have gone back down to 1.0 stars, even after a purge.
- Reddit has its best-ever month in terms of installs, thanks to the “meme stocks” frenzy driven by users of the r/wallstreetbets forum. The app gained 6.6 million downloads in January 2021, up 43% month-over-month, growing its total installs to date to 122.5 million across iOS and Android.
- Cash App also this week had to halt buying meme stocks like GameStop, AMC, and Nokia after being notified by its clearing broker of increased capital requirements.
- Robinhood raises another $2.4 billion from shareholders after its $1 billion raise from investors to help it ride out the meme stock trading frenzy.
- Joompay, a European rival to Venmo and TransferWise, has now launched in the market after obtaining a Luxembourg Electronic Money Institution (EMI) license.
Social & Photos
- Snapchat’s TikTok rival “Spotlight” now has 100 million MAUs, the company said during earnings, and is receiving an average of 175,000 video submissions per day. But Snap is heavily fueling this growth by paying out over $1 million per day to the top-performing videos — everyone wants to be TikTok, it seems.
- TikTok says it will now downrank "unsubstantiated" claims that fact checkers can't verify. The app will also place a warning banner atop these videos and discourage users from sharing them with pop-up messages.
- TikTok owner ByteDance sues Tencent over alleged monopoly practices. The suit claims that Tencent’s WeChat and QQ messaging services won’t allow links to Douyin, the Chinese version of TikTok.
- Instagram confirms it’s developing a “Vertical Stories” feed that will allow users to flip through users’ stories vertically, similar to TikTok.
- IRL, an events website and mobile app, has topped 10 million monthly users as it revamps itself into a social network for events, now including user profiles, group events, and chat.
- Instagram bans around 400 accounts linked to hacker forum OGUsers, where members buy and sell stolen social media accounts. The hackers used SIM-swapping attacks, harassment and extortion to take over the accounts of “OG” Instagram users who have coveted short usernames or those with unique words. Twitter and TikTok also took action to target OGUsers members, the companies confirmed.
- Instagram adds “Recently Deleted,” a new feature that lets you review and recover deleted content. The company says it added protections to stop hackers from accessing your account to reach these items. Deleted stories that are not in your archive will stay in the folder for up to 24 hours. Everything else will be automatically deleted 30 days later.
- Triller ditches its plans to do a Super Bowl ad and will now host a fan contest instead. The app has struggled to present a challenge to TikTok in the U.S. market.
- Daily Twitter usage remained consistent despite Trump ban, according to data from Apptopia.
Communication and Messaging
- Element, a client for the federated chat protocol Matrix, was removed from the Play Store this week over abusive content. But Google made a mistake: Element is a third-party client, not the content's host. And it had already removed the content, based on its own rules. For those unfamiliar, Matrix is an open network that offers both unencrypted public chatrooms and end-to-end encrypted (E2EE) content. Eventually, the developer got a call from a Google VP who helped the app get reinstated. But the situation, which resulted in 24 hours of downtime, raised a question of how well app stores are prepared to moderate issues that crop up in decentralized platforms and services.
- Clubhouse CEO Paul Davison confirmed the company will introduce a subscription tool that will allow creators to make money from their rooms.
- Telegram, benefiting from the shift to private messaging and the WhatsApp backlash, became the most-downloaded app overall in January 2021 across both app stores, and topped Google Play. On the App Store, it was No. 4, while TikTok was No. 1.
Streaming Services and Media
- Apple-owned Shazam adds iOS 14 widgets for the first time, allowing you to quickly ID any song that’s playing and see your history.
- Spotify adds new playlists, podcasts and takeovers for Black History Month, and creates a new “Black History Is Now” hub in the app.
- The U.S. version of the Discovery+ mobile app gets more first-month downloads (3.3 million) than HBO Max did (3.1 million), Apptopia found. But it’s not an apples-to-apples comparison, as existing HBO NOW users were upgraded to Max.
Health & Fitness
- The Google Fit app on Pixel devices is getting an update that will allow your phone’s camera to measure pulse and breathing rates.
- Microsoft rebrands its document scanner app Office Lens to Microsoft Lens and adds new features, including Image to Text, an Immersive Reader, a QR Code Scanner and the ability to scan up to 100 pages. Lens also now integrates with Teams, so users can record short videos to be sent through Teams chats. TikTok, but for documents, I guess?
Government & Policy
- Myanmar's military government orders telecoms to block Facebook until February 7, following this week's coup. The government, which seized power following an election, said the social network is contributing to instability in the country.
- TikTok will recheck the age of every user in Italy, following an emergency order from the GPDP, Italy's data protection authority, issued after the January 22 death of a 10-year-old girl who tried the "blackout challenge" she saw on the app. On February 9, every user will have to go through the TikTok age-gate again.
Funding and M&A
- Uber buys alcohol delivery service Drizly for $1.1 billion. Drizly's website and app let users order alcohol in markets across the U.S., though the service is often hampered by local liquor laws. Its gross bookings were up 300% YoY ahead of the deal.
- Vivino, a wine recommendation and marketplace app, raises a $155 million Series D led by Sweden's Kinnevik. The app now has 50 million users and a data set of 1.5 billion photos of wine labels.
- Mobile ad platform and games publisher AppLovin acquires Berlin-based mobile ad attribution company Adjust in a deal reported at $1 billion, though the actual figure is said to be less. The deal comes at a time when the ad attribution market is being dramatically altered by Apple's ATT. Mobile Dev Memo explains the deal will give AppLovin visibility into which games are driving conversions for Adjust customers, to the benefit of its own ad campaigns.
- Latitude, a startup that uses AI to build storylines for games, raises $3.3 million in seed funding. Its first title is AI Dungeon, an open-ended text adventure game.
- Chinese social gaming startup Guangzhou Quwan Network Technology raises $100 million Series B from Matrix Partners China and Orchid Asia Group Management. The company provides instant voice messaging, social gaming, esports and game distribution and operates voice chat app TT Voice, which has over 100 million users.
- Consumer trading app Flink, a sort of Robinhood for the Mexican market, raises $12 million Series A led by Accel.
- Commuting platform Hip, which offers both an online dashboard and mobile app, raises $12 million led by NFX and Magenta Venture Partners. The app works with bus and shuttle providers to plan routes for commuters and offers COVID-19 tracing services.
- Bot MD, a Singapore-based app that offers doctors an AI chatbot for looking up important information, raises a $5 million Series A led by Monk's Hill Ventures. The funds will help the app expand elsewhere in the Asia-Pacific region, including Indonesia, the Philippines, Malaysia and India.
- Meditation and sleep app Expectful raises $3 million in seed funding for its app aimed at new mothers. The company plans to expand the app to become a broader wellness resource for hopeful, expecting and new parents.
- Brightwheel, an app that allows preschools, daycare providers and camps to communicate with parents, raises $55 million in a round led by Addition, valuing the business at $600+ million. Laurene Powell Jobs's Emerson Collective and Jeff Weiner's Next Play Ventures also participated.
- ELSA, a Google-backed language learning app co-founded in 2015 by Vietnamese entrepreneur Vu Van and engineer Xavier Anguera, raises a $15 million round co-led by Vietnam Investments Group and SIG.
- Financial super app Djamo gets Y Combinator backing for its solution for consumers in Francophone Africa.
- Bumble's IPO filing sets a price range that could raise up to $1B. The dating app maker aims to sell 34.5 million shares at $28 to $30 apiece, potentially valuing the business at $6.46B.
Reese’s Book Club
Actress and producer Reese Witherspoon's media company Hello Sunshine has launched an app for Reese's Book Club — the book club that focuses on diverse voices and stories in which women are at the center. The book club today has nearly 2 million Instagram followers and 38 book picks that made The New York Times bestseller list. Its books have also been adapted into film and TV projects, including Hulu's "Little Fires Everywhere," the upcoming Amazon series "Daisy Jones and the Six," Netflix's "From Scratch," and the forthcoming film "Where the Crawdads Sing."
The new app lets users keep track of the new monthly picks, browse past selections, join community discussions with fellow readers, hear from authors, compete for prizes and, soon, buy exclusive items that will help fund The Readership, a pay-it-forward platform aimed at amplifying diverse voices and promoting literacy, which may include efforts like installing book nooks in local communities and supporting indie booksellers.
Carrot Weather
Everyone's favorite snarky weather app received a major overhaul toward the end of January, which includes a redesigned interface, new icons, tools to design the UI how you want it (an "interface maker"), new "secret locations" (a fun Easter egg) and more. The app has also switched to a vertical layout that fills the screen with information, including smart cards that bubble up with weather info when it's needed. Carrot Weather is also now a free download with subscriptions, instead of a paid app.