Google Fi is getting end-to-end encrypted phone calls

Google’s MVNO cell phone service, Google Fi, is getting a surprise new feature: encrypted phone calls. Encrypted voice chats via messaging apps have been available for a while, but this is the first time we’ve seen a company hijack the regular phone system for end-to-end encrypted calls. Open the phone app, dial a number, and your call can be encrypted.

End-to-end encryption is not a normal phone standard, so both parties on the call will need to be firmly in the Google Fi ecosystem for the feature to work. Google’s description says that “calls between two Android phones on Fi will be secured with end-to-end encryption by default.” Google Fi works on the iPhone, too, but given that Google would have to use Apple’s default phone app, it can’t add encryption.

For encrypted Fi-to-Fi calls, Google will show a new “Encrypted by Google Fi” message in both users’ phone apps, along with the ubiquitous lock icon. The company says there will be “unique audio cues” as well.

#encryption, #google-fi, #security, #tech

Ransomware victims panicked while FBI secretly held REvil decryption key

The seal of the Federal Bureau of Investigation (FBI) is seen at the J. Edgar Hoover building in Washington, D.C. (credit: Andrew Harrer/Bloomberg)

For three weeks during the REvil ransomware attack this summer, the FBI secretly withheld the key that would have decrypted data and computers on up to 1,500 networks, including those run by hospitals, schools, and businesses.

The FBI had penetrated the REvil gang’s servers to obtain the key, but after discussing it with other agencies, the bureau decided to wait before sending it to victims for fear of tipping off the criminals, The Washington Post reports. The FBI hadn’t wanted to tip off the REvil gang and had hoped to take down its operations, sources told the Post.

Instead, REvil went dark on July 13 before the FBI could step in. For reasons that haven’t been explained, the FBI didn’t cough up the key until July 21.

#biz-it, #encryption, #fbi, #ransomware, #revil, #russian-hacking

A new app helps Iranians hide messages in plain sight

Anti-government graffiti reading “Death to the dictator” in Farsi is sprayed on a wall north of Tehran on September 30, 2009. (credit: Getty Images)

Amid ever-increasing government Internet control, surveillance, and censorship in Iran, a new Android app aims to give Iranians a way to speak freely.

Nahoft, which means “hidden” in Farsi, is an encryption tool that turns up to 1,000 characters of Farsi text into a jumble of random words. You can send this mélange to a friend over any communication platform—Telegram, WhatsApp, Google Chat, etc.—and then they run it through Nahoft on their device to decipher what you’ve said.
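The article doesn’t detail Nahoft’s exact algorithm, but the general pattern it describes (encrypt first, then encode the ciphertext as ordinary-looking words that can travel over any chat channel) can be sketched roughly as follows. This is a simplified illustration, not Nahoft’s implementation: it assumes the third-party Python cryptography package, a key the two parties have already shared out of band, and a tiny stand-in wordlist where a real tool would use a large list of common Farsi words.

```python
# Conceptual sketch of "ciphertext as innocuous words" -- a simplified illustration
# of the general technique, NOT Nahoft's actual scheme.
# Assumes the third-party `cryptography` package (pip install cryptography) and a
# key the two parties have already shared out of band.
from cryptography.fernet import Fernet

# Tiny stand-in wordlist; a real tool would draw on a large list of common Farsi words.
WORDS = ["apple", "river", "stone", "cloud", "amber", "falcon", "meadow", "copper",
         "willow", "harbor", "violet", "ember", "canyon", "timber", "saffron", "pearl"]
INDEX = {w: i for i, w in enumerate(WORDS)}

def encode_as_words(ciphertext: bytes) -> str:
    """Map each ciphertext byte to two words (its high and low 4-bit nibbles)."""
    return " ".join(f"{WORDS[b >> 4]} {WORDS[b & 0x0F]}" for b in ciphertext)

def decode_from_words(text: str) -> bytes:
    """Invert encode_as_words(), recovering the raw ciphertext bytes."""
    tokens = text.split()
    return bytes((INDEX[hi] << 4) | INDEX[lo] for hi, lo in zip(tokens[0::2], tokens[1::2]))

key = Fernet.generate_key()   # shared secret between sender and recipient
f = Fernet(key)

# Sample Farsi phrase meaning "confidential message".
wordy = encode_as_words(f.encrypt("پیام محرمانه".encode("utf-8")))
# `wordy` now reads as a jumble of ordinary words that can be pasted into any chat app.
recovered = f.decrypt(decode_from_words(wordy)).decode("utf-8")
assert recovered == "پیام محرمانه"
```

Encoding each byte as two words roughly doubles the word count relative to the ciphertext, which may be one reason tools in this vein cap messages at a modest length.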

Released last week on Google Play by United for Iran, a San Francisco–based human rights and civil liberties group, Nahoft is designed to address multiple aspects of Iran’s Internet crackdown. In addition to generating coded messages, the app can also encrypt communications and embed them imperceptibly in image files, a technique known as steganography. Recipients then use Nahoft to inspect the image file on their end and extract the hidden message.
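Steganographic schemes differ, but the classic least-significant-bit (LSB) approach gives a feel for how a message can ride invisibly inside an image: tweak the lowest bit of each pixel channel, which is imperceptible to the eye, and read those bits back out on the other end. The sketch below is a generic illustration of that idea, not Nahoft’s code; it assumes the Pillow imaging library and prefixes the payload with its length so the extractor knows how many bytes to recover. In practice the hidden bytes would be ciphertext rather than plaintext.

```python
# Minimal least-significant-bit (LSB) steganography sketch -- an illustration of
# the general technique, not Nahoft's actual algorithm. Assumes Pillow is installed.
from PIL import Image

def embed_message(cover_path: str, out_path: str, message: str) -> None:
    """Hide a UTF-8 message in the low bits of an RGB image's pixel values."""
    img = Image.open(cover_path).convert("RGB")
    data = message.encode("utf-8")
    # Prefix the payload with a 4-byte big-endian length so we know how much to read back.
    payload = len(data).to_bytes(4, "big") + data
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]

    flat = [channel for px in img.getdata() for channel in px]
    if len(bits) > len(flat):
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the least significant bit

    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, format="PNG")  # lossless format so the low bits survive

def extract_message(stego_path: str) -> str:
    """Recover a message hidden by embed_message()."""
    img = Image.open(stego_path).convert("RGB")
    bits = [channel & 1 for px in img.getdata() for channel in px]

    def read_bytes(start_bit: int, count: int) -> bytes:
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | bits[start_bit + b * 8 + i]
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length).decode("utf-8")
```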

#biz-it, #demonstrations, #encryption, #iran, #policy

Ireland probes TikTok’s handling of kids’ data and transfers to China

Ireland’s Data Protection Commission (DPC) has yet another ‘Big Tech’ GDPR probe to add to its pile: The regulator said yesterday it has opened two investigations into video sharing platform TikTok.

The first covers how TikTok handles children’s data, and whether it complies with Europe’s General Data Protection Regulation.

The DPC also said it will examine TikTok’s transfers of personal data to China, where its parent entity is based — looking to see if the company meets requirements set out in the regulation covering personal data transfers to third countries.

TikTok was contacted for comment on the DPC’s investigation.

A spokesperson told us:

“The privacy and safety of the TikTok community, particularly our youngest members, is a top priority. We’ve implemented extensive policies and controls to safeguard user data and rely on approved methods for data being transferred from Europe, such as standard contractual clauses. We intend to fully cooperate with the DPC.”

The Irish regulator’s announcement of two “own volition” enquiries follows pressure from other EU data protection authorities and consumer protection groups, which have raised concerns about how TikTok handles user data generally and children’s information specifically.

In Italy this January, TikTok was ordered to recheck the age of every user in the country after the data protection watchdog instigated an emergency procedure, using GDPR powers, following child safety concerns.

TikTok went on to comply with the order — removing more than half a million accounts where it could not verify the users were not children.

This year European consumer protection groups have also raised a number of child safety and privacy concerns about the platform. And, in May, EU lawmakers said they would review the company’s terms of service.

On children’s data, the GDPR sets limits on how kids’ information can be processed, putting an age cap on the ability of children to consent to their data being used. The age limit varies per EU Member State: the regulation’s default age of consent is 16, but Member States can lower it to no less than 13.

In response to the announcement of the DPC’s enquiry, TikTok pointed to its use of age gating technology and other strategies it said it uses to detect and remove underage users from its platform.

It also flagged a number of recent changes it’s made around children’s accounts and data — such as flipping the default settings so that their accounts are private by default, and limiting younger teens’ exposure to certain features that intentionally encourage interaction with other TikTok users, making those available only to users over 16.

On international data transfers, TikTok claims to use “approved methods”. However, the picture is rather more complicated than its statement implies: transfers of Europeans’ data to China are complicated by the fact that there is no EU data adequacy agreement in place with China.

In TikTok’s case, that means, for any personal data transfers to China to be lawful, it needs to have additional “appropriate safeguards” in place to protect the information to the required EU standard.

When there is no adequacy arrangement in place, data controllers can, potentially, rely on mechanisms like Standard Contractual Clauses (SCCs) or binding corporate rules (BCRs) — and TikTok’s statement notes it uses SCCs.

But — crucially — personal data transfers out of the EU to third countries have faced significant legal uncertainty and added scrutiny since a landmark ruling by the Court of Justice of the EU (CJEU) last year. That ruling invalidated a flagship data transfer arrangement between the US and the EU and made it clear that DPAs (such as Ireland’s DPC) have a duty to step in and suspend transfers if they suspect people’s data is flowing to a third country where it might be at risk.

So while the CJEU did not invalidate mechanisms like SCCs entirely, it essentially said all international transfers to third countries must be assessed on a case-by-case basis and, where a DPA has concerns, it must step in and suspend those non-secure data flows.

The CJEU ruling means that merely using a mechanism like SCCs says nothing, on its own, about the legality of a particular data transfer. It also amps up the pressure on EU agencies like Ireland’s DPC to be proactive about assessing risky data flows.

Final guidance put out by the European Data Protection Board earlier this year provides details on the so-called ‘supplementary measures’ that a data controller may be able to apply in order to raise the level of protection around a specific transfer so the information can legally be taken to a third country.

But these steps can include technical measures like strong encryption — and it’s not clear how a social media company like TikTok would be able to apply such a fix, given how its platform and algorithms are continuously mining users’ data to customize the content they see and in order to keep them engaged with TikTok’s ad platform.

In another recent development, China has just passed its first data protection law.

But, again, this is unlikely to change much for EU transfers. The Communist Party regime’s ongoing appropriation of personal data, through the application of sweeping digital surveillance laws, means it would be all but impossible for China to meet the EU’s stringent requirements for data adequacy. (And if the US can’t get EU adequacy it would be ‘interesting’ geopolitical optics, to put it politely, were the coveted status to be granted to China…)

One factor TikTok can take heart from is that it likely has time on its side when it comes to EU enforcement of its data protection rules.

The Irish DPC has a huge backlog of cross-border GDPR investigations into a number of tech giants.

It was only earlier this month that the Irish regulator finally issued its first decision against a Facebook-owned company — announcing a $267M fine against WhatsApp for breaching GDPR transparency rules (and only doing so years after the first complaints had been lodged).

The DPC’s first decision in a cross-border GDPR case pertaining to Big Tech came at the end of last year — when it fined Twitter $550k over a data breach dating back to 2018, the year GDPR technically began applying.

The Irish regulator still has scores of undecided cases on its desk — against tech giants including Apple and Facebook. That means the new TikTok probes join the back of a much-criticized bottleneck, and a decision on them isn’t likely for years.

On children’s data, TikTok may face swifter scrutiny elsewhere in Europe: The UK added some ‘gold-plating’ to its version of the EU GDPR in the area of children’s data — and, from this month, has said it expects platforms to meet its recommended standards.

It has warned that platforms that don’t fully engage with its Age Appropriate Design Code could face penalties under the UK’s GDPR. The UK’s code has been credited with encouraging a number of recent changes by social media platforms over how they handle kids’ data and accounts.

#apps, #articles, #china, #communist-party, #data-controller, #data-protection, #data-protection-commission, #data-protection-law, #data-security, #encryption, #europe, #european-data-protection-board, #european-union, #general-data-protection-regulation, #ireland, #italy, #max-schrems, #noyb, #personal-data, #privacy, #social, #social-media, #spokesperson, #tiktok, #united-kingdom, #united-states

What China’s new data privacy law means for US tech firms

China enacted a sweeping new data privacy law on August 20 that will dramatically impact how tech companies can operate in the country. Officially called the Personal Information Protection Law of the People’s Republic of China (PIPL), the law is the first national data privacy statute passed in China.

Modeled after the European Union’s General Data Protection Regulation, the PIPL imposes protections and restrictions on data collection and transfer that companies both inside and outside of China will need to address. It is particularly focused on apps using personal information to target consumers or offer them different prices on products and services, and preventing the transfer of personal information to other countries with fewer protections for security.

The PIPL, slated to take effect on November 1, 2021, does not give companies a lot of time to prepare. Those that already follow GDPR practices, particularly if they’ve implemented it globally, will have an easier time complying with China’s new requirements. But firms that have not implemented GDPR practices will need to consider adopting a similar approach. In addition, U.S. companies will need to consider the new restrictions on the transfer of personal information from China to the U.S.

Implementation and compliance with the PIPL is a much more significant task for companies that have not implemented GDPR principles.

Here’s a deep dive into the PIPL and what it means for tech firms:

New data handling requirements

The PIPL introduces perhaps the most stringent set of requirements and protections for data privacy in the world (this includes special requirements relating to processing personal information by governmental agencies that will not be addressed here). The law broadly relates to all kinds of information, recorded by electronic or other means, related to identified or identifiable natural persons, but excludes anonymized information.

The following are some of the key new requirements for handling people’s personal information in China that will affect tech businesses:

Extra-territorial application of the China law

Historically, Chinese regulations have applied only to activities inside the country. The PIPL is similar in applying to personal information handling activities within Chinese borders. However, like GDPR, it also extends to the handling of personal information outside China if any of the following conditions are met:

  • Where the purpose is to provide products or services to people inside China.
  • Where analyzing or assessing activities of people inside China.
  • Other circumstances provided in laws or administrative regulations.

For example, if you are a U.S.-based company selling products to consumers in China, you may be subject to the China data privacy law even if you do not have a facility or operations there.

Data handling principles

The PIPL introduces principles of transparency, purpose limitation and data minimization: Companies can only collect personal information for a clear, reasonable and disclosed purpose, limited to the smallest scope necessary to realize that purpose, and they may retain the data only for the period necessary to fulfill it. Any information handler is also required to ensure the accuracy and completeness of the data it handles to avoid any negative impact on personal rights and interests.

#asia, #china, #column, #computer-security, #data-protection, #data-security, #ec-china, #ec-column, #ec-east-asia, #encryption, #european-union, #general-data-protection-regulation, #government, #internet, #iphone, #privacy, #tc

WhatsApp “end-to-end encrypted” messages aren’t that private after all

The security of Facebook’s popular messaging app leaves several rather important devils in its details. (credit: WhatsApp)

Yesterday, independent newsroom ProPublica published a detailed piece examining the popular WhatsApp messaging platform’s privacy claims. The service famously offers “end-to-end encryption,” which most users interpret as meaning that Facebook, WhatsApp’s owner since 2014, can neither read messages itself nor forward them to law enforcement.

This claim is contradicted by the simple fact that Facebook employs about 1,000 WhatsApp moderators whose entire job is—you guessed it—reviewing WhatsApp messages that have been flagged as “improper.”

End-to-end encryption—but what’s an “end”?

This snippet from WhatsApp’s security and privacy page (https://faq.whatsapp.com/general/security-and-privacy/end-to-end-encryption/) seems easy to misinterpret. (credit: Jim Salter)

The loophole in WhatsApp’s end-to-end encryption is simple: The recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient’s device and sent as a separate message to Facebook for review.

#biz-it, #encrypted-messaging, #encryption, #end-to-end-encryption, #facebook, #tech, #whatsapp

UK offers cash for CSAM detection tech targeted at e2e encryption

The UK government is preparing to spend over half a million dollars to encourage the development of detection technologies for child sexual abuse material (CSAM) that can be bolted on to end-to-end encrypted messaging platforms to scan for the illegal material, as part of its ongoing policy push around Internet and child safety.

In a joint initiative today, the Home Office and the Department for Digital, Media, Culture and Sport (DCMS) announced a “Tech Safety Challenge Fund” — which will distribute up to £425,000 (~$584k) to five organizations (£85k/$117k each) to develop “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption”.

A Challenge statement for applicants to the program adds that the focus is on solutions that can be deployed within e2e encrypted environments “without compromising user privacy”.

“The problem that we’re trying to fix is essentially the blindfolding of law enforcement agencies,” a Home Office spokeswoman told us, arguing that if tech platforms go ahead with their “full end-to-end encryption plans, as they currently are… we will be completely hindered in being able to protect our children online”.

While the announcement does not name any specific platforms of concern, Home Secretary Priti Patel has previously attacked Facebook’s plans to expand its use of e2e encryption — warning in April that the move could jeopardize law enforcement’s ability to investigate child abuse crime.

Facebook-owned WhatsApp also already uses e2e encryption so that platform is already a clear target for whatever ‘safety’ technologies might result from this taxpayer-funded challenge.

Apple’s iMessage and FaceTime are among other existing mainstream messaging tools which use e2e encryption.

So there is potential for very widespread application of any ‘child safety tech’ developed through this government-backed challenge. (Per the Home Office, technologies submitted to the Challenge will be evaluated by “independent academic experts”. The department was unable to provide details of who exactly will assess the projects.)

Patel, meanwhile, is continuing to apply high level pressure on the tech sector on this issue — including aiming to drum up support from G7 counterparts.

Writing in a paywalled op-ed in the Tory-friendly newspaper The Telegraph, she trails a meeting she’ll be chairing today where she says she’ll push the G7 to collectively pressure social media companies to do more to address “harmful content on their platforms”.

“The introduction of end-to-end encryption must not open the door to even greater levels of child sexual abuse. Hyperbolic accusations from some quarters that this is really about governments wanting to snoop and spy on innocent citizens are simply untrue. It is about keeping the most vulnerable among us safe and preventing truly evil crimes,” she adds.

“I am calling on our international partners to back the UK’s approach of holding technology companies to account. They must not let harmful content continue to be posted on their platforms or neglect public safety when designing their products. We believe there are alternative solutions, and I know our law enforcement colleagues agree with us.”

In the op-ed, the Home Secretary singles out Apple’s recent move to add a CSAM detection tool to iOS and macOS to scan content on user’s devices before it’s uploaded to iCloud — welcoming the development as a “first step”.

“Apple state their child sexual abuse filtering technology has a false positive rate of 1 in a trillion, meaning the privacy of legitimate users is protected whilst those building huge collections of extreme child sexual abuse material are caught out. They need to see th[r]ough that project,” she writes, urging Apple to press ahead with the (currently delayed) rollout.

Last week the iPhone maker said it would delay implementing the CSAM detection system — following a backlash led by security experts and privacy advocates who raised concerns about vulnerabilities in its approach, as well as the contradiction of a ‘privacy-focused’ company carrying out on-device scanning of customer data. They also flagged the wider risk of the scanning infrastructure being seized upon by governments and states who might order Apple to scan for other types of content, not just CSAM.

Patel’s description of Apple’s move as just a “first step” is unlikely to do anything to assuage concerns that once such scanning infrastructure is baked into e2e encrypted systems it will become a target for governments to widen the scope of what commercial platforms must legally scan for.

However the Home Office’s spokeswoman told us that Patel’s comments on Apple’s CSAM tech were only intended to welcome its decision to take action in the area of child safety — rather than being an endorsement of any specific technology or approach. (And Patel does also write: “But that is just one solution, by one company. Greater investment is essential.”)

The Home Office spokeswoman wouldn’t comment on which types of technologies the government is aiming to support via the Challenge fund, either, saying only that they’re looking for a range of solutions.

She told us the overarching goal is to support ‘middleground’ solutions — denying the government is trying to encourage technologists to come up with ways to backdoor e2e encryption.

In recent years, the UK’s GCHQ has also floated the controversial idea of a so-called ‘ghost protocol’ — which would allow state intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications on a targeted basis. That proposal was met with widespread criticism, including from the tech industry, which warned it would undermine trust and security and threaten fundamental rights.

It’s not clear if the government has such an approach — albeit with a CSAM focus — in mind here now as it tries to encourage the development of ‘middleground’ technologies that are able to scan e2e encrypted content for specifically illegal stuff.

In another concerning development, earlier this summer, guidance put out by DCMS for messaging platforms recommended that they “prevent” the use of e2e encryption for child accounts altogether.

Asked about that, the Home Office spokeswoman told us the tech fund is “not too different” and “is trying to find the solution in between”.

“Working together and bringing academics and NGOs into the field so that we can find a solution that works for both what social media companies want to achieve and also make sure that we’re able to protect children,” she said, adding: “We need everybody to come together and look at what they can do.”

There is not much more clarity in the Home Office guidance to suppliers applying for the chance to bag a tranche of funding.

There it writes that proposals must “make innovative use of technology to enable more effective detection and/or prevention of sexually explicit images or videos of children”.

“Within scope are tools which can identify, block or report either new or previously known child sexual abuse material, based on AI, hash-based detection or other techniques,” it goes on, further noting that proposals need to address “the specific challenges posed by e2ee environments, considering the opportunities to respond at different levels of the technical stack (including client-side and server-side).”

General information about the Challenge — which is open to applicants based anywhere, not just in the UK — can be found on the Safety Tech Network website.

The deadline for applications is October 6.

Selected applicants will have five months, between November 2021 and March 2022, to deliver their projects.

When exactly any of the tech might be pushed at the commercial sector isn’t clear — but the government may be hoping that, by keeping up the pressure on the tech sector, platform giants will develop this stuff themselves, as Apple has been doing.

The Challenge is just the latest UK government initiative to bring platforms in line with its policy priorities — back in 2017, for example, it was pushing them to build tools to block terrorist content — and you could argue it’s a form of progress that ministers are not simply calling for e2e encryption to be outlawed, as they frequently have in the past.

That said, talk of ‘preventing’ the use of e2e encryption — or even fuzzy suggestions of “in between” solutions — may not end up being so very different.

What is different is the sustained focus on child safety as the political cudgel to make platforms comply. That seems to be getting results.

Wider government plans to regulate platforms — set out in a draft Online Safety bill, published earlier this year — have yet to go through parliamentary scrutiny. But in one already baked in change, the country’s data protection watchdog is now enforcing a children’s design code which stipulates that platforms need to prioritize kids’ privacy by default, among other recommended standards.

The Age Appropriate Design Code was appended to the UK’s data protection bill as an amendment — meaning it sits under wider legislation that transposed Europe’s General Data Protection Regulation (GDPR) into law, which brought in supersized penalties for violations like data breaches. And in recent months a number of social media giants have announced changes to how they handle children’s accounts and data — which the ICO has credited to the code.

So the government may be feeling confident that it has finally found a blueprint for bringing tech giants to heel.

#apple, #csam, #csam-detection, #e2e-encryption, #encrypted-communications, #encryption, #end-to-end-encryption, #europe, #facebook, #g7, #general-data-protection-regulation, #home-office, #law-enforcement, #policy, #privacy, #social-media, #tc, #uk-government, #united-kingdom, #whatsapp

ProtonMail logged IP address of French activist after order by Swiss authorities

ProtonMail, a hosted email service with a focus on end-to-end encrypted communications, has been facing criticism after a police report showed that French authorities managed to obtain the IP address of a French activist who was using the online service. The company has communicated widely about the incident, stating that it doesn’t log IP addresses by default and it only complies with local regulation — in that case Swiss law. While ProtonMail didn’t cooperate with French authorities, French police sent a request to Swiss police via Europol to force the company to obtain the IP address of one of its users.

For the past year, a group of people have taken over a handful of commercial premises and apartments near Place Sainte Marthe in Paris. They want to fight against gentrification, real estate speculation, Airbnb and high-end restaurants. While it started as a local conflict, it quickly became a symbolic campaign. They attracted newspaper headlines when they started occupying premises rented by Le Petit Cambodge — a restaurant that was targeted by the November 13th, 2015 terrorist attacks in Paris.

On September 1st, the group published an article on Paris-luttes.info, an anticapitalist news website, summing up different police investigations and legal cases against some members of the group. According to their story, French police sent a Europol request to ProtonMail in order to uncover the identity of the person who created a ProtonMail account — the group was using this email address to communicate. The address has also been shared on various anarchist websites.

The next day, @MuArF on Twitter shared an abstract of a police report detailing ProtonMail’s reply. According to @MuArF, the police report is related to the ongoing investigation against the group that occupied various premises around Place Sainte-Marthe. It says that French police received a message via Europol containing details about the ProtonMail account.

Here’s what the report says:

  • The company PROTONMAIL informs us that the email address has been created on … The IP address linked to the account is the following: …
  • The device used is a … device identified with the number …
  • The data transmitted by the company is limited to that due to the privacy policy of PROTONMAIL TECHNOLOGIES.

ProtonMail’s founder and CEO Andy Yen reacted to the police report on Twitter without mentioning the specific circumstances of that case in particular. “Proton must comply with Swiss law. As soon as a crime is committed, privacy protections can be suspended and we’re required by Swiss law to answer requests from Swiss authorities,” he wrote.

In particular, Andy Yen wants to make it clear that his company didn’t cooperate with French police nor Europol. It seems like Europol acted as the communication channel between French authorities and Swiss authorities. At some point, Swiss authorities took over the case and sent a request to ProtonMail directly. The company references these requests as “foreign requests approved by Swiss authorities” in its transparency report.

TechCrunch contacted ProtonMail founder and CEO Andy Yen with questions about the case.

One key question is exactly when the targeted account holder was notified that their data had been requested by Swiss authorities since — per ProtonMail — notification is obligatory under Swiss law.

However, Yen told us that — “for privacy and legal reasons” — he is unable to comment on specific details of the case or provide “non-public information on active investigations”, adding: “You would have to direct these inquiries to the Swiss authorities.”

At the same time, he did point us to this public page, where ProtonMail provides information for law enforcement authorities seeking data about users of its end-to-end encrypted email service, including setting out a “ProtonMail user notification policy”.

Here the company reiterates that Swiss law “requires a user to be notified if a third party makes a request for their private data and such data is to be used in a criminal proceeding” — however it also notes that “in certain circumstances” a notification “can be delayed”.

Per this policy, Proton says delays can affect notifications if: There is a temporary prohibition on notice by the Swiss legal process itself, by Swiss court order or “applicable Swiss law”; or where “based on information supplied by law enforcement, we, in our absolute discretion, believe that providing notice could create a risk of injury, death, or irreparable damage to an identifiable individual or group of individuals.”

“As a general rule though, targeted users will eventually be informed and afforded the opportunity to object to the data request, either by ProtonMail or by Swiss authorities,” the policy adds.

So, in the specific case, it looks likely that ProtonMail was either under legal order to delay notification to the account holder — given what appears to be up to eight months between the logging being instigated and disclosure of it — or it had been provided with information by the Swiss authorities which led it to conclude that delaying notice was essential to avoid a risk of “injury, death, or irreparable damage” to a person or persons (NB: it is unclear what “irreparable damage” means in this context, and whether it could be interpreted figuratively — as ‘damage’ to a person’s/group’s interests, for example, such as to a criminal investigation, not solely bodily harm — which would make the policy considerably more expansive).

In either scenario, the level of transparency afforded to individuals by Swiss law’s mandatory notification requirement looks severely limited if the same law allows authorities, essentially, to gag notifications — potentially for long periods (seemingly more than half a year in this specific case).

ProtonMail’s public disclosures also log an alarming rise in requests for data by Swiss authorities.

According to its transparency report, ProtonMail received 13 orders from Swiss authorities back in 2017 — but that had swelled to over three and a half thousand (3,572!) by 2020.

The number of foreign requests to Swiss authorities which are being approved has also risen, although not as steeply — with ProtonMail reporting receiving 13 such requests in 2017 — rising to 195 in 2020.

The company says it complies with lawful requests for user data but it also says it contests orders where it does not believe them to be lawful. And its reporting shows an increase in contested orders — with ProtonMail contesting three orders back in 2017 but in 2020 it pushed back against 750 of the data requests it received.

Per ProtonMail’s privacy policy, the information it can provide on a user account in response to a valid request under Swiss law may include account information provided by the user (such as an email address); account activity/metadata (such as sender, recipient email addresses; IP addresses incoming messages originated from; the times messages were sent and received; message subjects etc); total number of messages, storage used and last login time; and unencrypted messages sent from external providers to ProtonMail. As an end-to-end encrypted email provider, it cannot decrypt email data so is unable to provide information on the contents of email, even when served with a warrant.

However in its transparency report, the company also signals an additional layer of data collection which it may be (legally) obligated to carry out — writing that: “In addition to the items listed in our privacy policy, in extreme criminal cases, ProtonMail may also be obligated to monitor the IP addresses which are being used to access the ProtonMail accounts which are engaged in criminal activities.”

It’s that IP monitoring component which has caused such alarm among privacy advocates now — and no small criticism of Proton’s marketing claims as a ‘user privacy centric’ company.

It has faced particular criticism for marketing claims of providing “anonymous email” and for the wording of the caveat in its transparency disclosure — where it talks about IP logging only occurring in “extreme criminal cases”.

Few would agree that anti-gentrification campaigners meet that bar.

At the same time, Proton does provide users with an onion address — meaning activists concerned about tracking can access its encrypted email service using Tor which makes it harder for their IP address to be tracked. So it is providing tools for users to protect themselves against IP monitoring (as well as protect the contents of their emails from being snooped on), even though its own service can, in certain circumstances, be turned into an IP monitoring tool by Swiss law enforcement.

In the backlash around the revelation of the IP logging of the French activists, Yen said via Twitter that ProtonMail will be providing a more prominent link to its onion address on its website.

Proton does also offer a VPN service of its own — and Yen has claimed that Swiss law does not allow it to log its VPN users’ IP addresses. So it’s interesting to speculate whether the activists might have been able to evade the IP logging if they had been using both Proton’s end-to-end encrypted email and its VPN service…

“If they were using Tor or ProtonVPN, we would have been able to provide an IP, but it would be the IP of the VPN server, or the IP of the Tor exit node,” Yen told TechCrunch when we asked about this.

“We do protect against this threat model via our Onion site (protonmail.com/tor),” he added. “In general though, unless you are based 15 miles offshore in international waters, it is not possible to ignore court orders.”

“The Swiss legal system, while not perfect, does provide a number of checks and balances, and it's worth noting that even in this case, approval from three authorities in two countries was required, and that's a fairly high bar which prevents most (but not all) abuse of the system.”

In a public response on Reddit, Proton also writes that it is “deeply concerned” about the case — reiterating that it was unable to contest the order in this instance.

“The prosecution in this case seems quite aggressive,” it added. “Unfortunately, this is a pattern we have increasingly seen in recent years around the world (for example in France where terror laws are inappropriately used). We will continue to campaign against such laws and abuses.”

Zooming out, in another worrying development that could threaten the privacy of internet users in Europe, European Union lawmakers have signaled they want to work to find ways to enable lawful access to encrypted data — even as they simultaneously claim to support strong encryption.

Again, privacy campaigners are concerned.

ProtonMail and a number of other end-to-end encrypted services warned in an open letter in January that EU lawmakers risk setting the region on a dangerous path toward backdooring encryption if they continue in this direction.

#backdoor, #encryption, #europe, #policy, #privacy, #proton, #protonmail

Apple’s dangerous path

Hello friends, and welcome back to Week in Review.

Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.

If you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


the big thing

In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.

In early August — seemingly out of nowhere** — the company announced that by the end of the year it would be rolling out a technology called NeuralHash that would actively scan the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.
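NeuralHash itself is a proprietary, neural-network-based perceptual hash whose implementation Apple has not published. As a rough, generic illustration of the underlying idea (fingerprint an image, then check whether that fingerprint sits close to any entry in a database of known hashes), here is a minimal sketch using a simple "average hash" and a Hamming-distance threshold; it assumes the Pillow library, and the threshold value is arbitrary.

```python
# Rough illustration of hash-based image matching -- NOT Apple's NeuralHash,
# which is a proprietary neural-network-based perceptual hash. This uses a
# simple "average hash" (aHash) and a Hamming-distance threshold instead.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit fingerprint: shrink, grayscale, threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_set(path: str, known_hashes: set[int], max_distance: int = 5) -> bool:
    """True if the image's fingerprint is within max_distance bits of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= max_distance for k in known_hashes)
```

Perceptual hashes deliberately tolerate small changes such as resizing or re-encoding, which is also what makes collision and evasion attacks a live concern once the hashing model is in attackers’ hands.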

This announcement was not coordinated with other major consumer tech giants; Apple pushed forward alone.

Researchers and advocacy groups had almost uniformly negative feedback for the effort, raising concerns that it could create new abuse channels for actors like governments to detect on-device information that they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”

(The announcement also reportedly generated some controversy inside of Apple.)

The issue — of course — wasn’t that Apple was looking to find ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.

Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue they were tackling. This isn’t really a dig at Apple’s team building this feature so much as it’s a dig at Apple trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.

Illustration of a key over a cloud icon. (Image credits: Bryce Durbin / TechCrunch)

Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when they shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.

You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices they make for iOS affect the broader internet.

Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.

** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.

Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.


other things

Here are the TechCrunch news stories that especially caught my eye this week:

LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.

FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.

Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.

TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico which makes consumer VR headsets for China and enterprise VR products for North American customers.

Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Their latest solution is more individual user controls, which Twitter is testing out with a new “safety mode” which pairs algorithmic intelligence with new user inputs.


extra things

Some of my favorite reads from our Extra Crunch subscription service this week:

Our favorite startups from YC’s Demo Day, Part 1 
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”

…Part 2
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’

All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”


Thanks for reading, and again, if you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny

Lucas Matney

#american-civil-liberties-union, #apple, #apple-inc, #apple-music, #artificial-intelligence, #australia, #bryce-durbin, #bytedance, #canada, #china, #computing, #electronic-frontier-foundation, #encryption, #extra-crunch, #facebook, #federal-aviation-administration, #icloud, #india, #ios, #iphone, #japan, #linkedin, #new-zealand, #pico, #richard-branson, #siri, #spotify, #tech-media, #technology, #united-kingdom, #united-states, #virgin-galactic, #virtual-reality, #y-combinator

UK now expects compliance with children’s privacy design code

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year. However, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance in order to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that chose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under-18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later, TikTok also said it would add more privacy protections for teens, though it had already made earlier changes to privacy defaults for under-18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’s COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls, with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could be “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction of the data minimization push in the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content, pro-suicide postings, cyberbullying or CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.

 

#data-processing, #data-protection, #encryption, #europe, #general-data-protection-regulation, #google, #human-rights, #identity-management, #instagram, #online-harms, #online-retail, #online-safety, #policy, #privacy, #regulatory-compliance, #social-issues, #social-media, #social-media-platforms, #tc, #tiktok, #uk-government, #united-kingdom, #united-states

LOVE unveils a modern video messaging app with a business model that puts users in control

A London-headquartered startup called LOVE, valued at $17 million following its pre-seed funding, aims to redefine how people stay in touch with close family and friends. The company is launching a messaging app that offers a combination of video calling as well as asynchronous video and audio messaging, in an ad-free, privacy-focused experience with a number of bells and whistles, including artistic filters and real-time transcription and translation features.

But LOVE’s bigger differentiator may not be its product alone, but rather the company’s mission.

LOVE aims for its product direction to be guided by its user base in a democratic fashion as opposed to having the decisions made about its future determined by an elite few at the top of some corporate hierarchy. In addition, the company’s longer-term goal is ultimately to hand over ownership of the app and its governance to its users, the company says.

These concepts have emerged as part of bigger trends towards a sort of “web 3.0,” or next phase of internet development, where services are decentralized, user privacy is elevated, data is protected, and transactions take place on digital ledgers, like a blockchain, in a more distributed fashion.

LOVE’s founders are proponents of this new model, including serial entrepreneur Samantha Radocchia, who previously founded three companies and was an early advocate for the blockchain as the co-founder of Chronicled, an enterprise blockchain company focused on the pharmaceutical supply chain.

As someone who’s been interested in emerging technology since her days of writing her anthropology thesis on currency exchanges in “Second Life’s” virtual world, she’s now faculty at Singularity University, where she’s given talks about blockchain, A.I., Internet of Things, Future of Work, and other topics. She’s also authored an introductory guide to the blockchain with her book “Bitcoin Pizza.”

Co-founder Christopher Schlaeffer, meanwhile, held a number of roles at Deutsche Telekom, including Chief Product & Innovation Officer, Corporate Development Officer, and Chief Strategy Officer, where he along with Google execs introduced the first mobile phone to run Android. He was also Chief Digital Officer at the telecommunication services company VEON.

The two crossed paths after Schlaeffer had already begun the work of organizing a team to bring LOVE to the public, which includes co-founders Jim Reeves, Chief Technologist, also previously of VEON, and Timm Kekeritz, Chief Designer, previously an interaction designer at international design firm IDEO in San Francisco, design director at IXDS, and founder of design consultancy Raureif in Berlin, among other roles.

What attracted her to join as CEO, Radocchia explains, was the potential to create a new company that upholds more positive values than what’s often seen today — in fact, the brand name “LOVE” is a reference to this aim. She was also interested in the potential to think through what she describes as “new business models that are not reliant on advertising or harvesting the data of our users.”

To that end, LOVE plans to monetize without any advertising. While the company isn’t ready to explain its business model in full, it would involve users opting in to services through granular permissions and membership, we’re told.

“We believe our users will much rather be willing to pay for services they consciously use and grant permissions to in a given context than have their data used for an advertising model which is simply not transparent,” says Radocchia.

LOVE expects to share more about the model next year.

As for the LOVE app itself, it’s a fairly polished mobile messenger offering an interesting combination of features. Like any other video chat app, you can make video calls with friends and family, either one-on-one or in groups. Currently, LOVE supports up to five call participants, but expects to expand that as it scales. The app also supports video and audio messaging for asynchronous conversations. There are already tools that offer this sort of functionality on the market, of course — like WhatsApp, with its support for audio messages, or video messenger Marco Polo. But they don’t offer quite the same expanded feature set.

Image Credits: LOVE

For starters, LOVE limits its video messages to 60 seconds for brevity’s sake. (As anyone who’s used Marco Polo knows, videos can become a bit rambling, which makes it harder to catch up when you’re behind on group chats.) In addition, LOVE allows you to both watch the video content and read the real-time transcription of what’s being said — the latter of which comes in handy not only for accessibility’s sake, but also for those times you want to hear someone’s messages but aren’t in a private place to listen or don’t have headphones. Conversations can also be translated into 50 different languages.

“A lot of the traditional communication or messenger products are coming from a paradigm that has always been text-based,” explains Radocchia. “We’re approaching it completely differently. So while other platforms have a lot of the features that we do, I think that…the perspective that we’ve approached it has completely flipped it on its head,” she continues. “As opposed to bolting video messages on to a primarily text-based interface, [LOVE is] actually doing it in the opposite way and adding text as a sort of a magically transcribed add-on — and something that you never, hopefully, need to be typing out on your keyboard again,” she adds.

The app’s user interface, meanwhile, has been designed to encourage eye-to-eye contact with the speaker to make conversations feel more natural. It does this by way of design elements where bubbles float around as you’re speaking and the bubble with the current speaker grows to pull your focus away from looking at yourself. The company is also working with Hans Ulrich Obrist, curator of the Serpentine Gallery in London, to create new filters that aren’t about beautification or gimmicks, but are instead focused on introducing a new form of visual expression that makes people feel more comfortable on camera.

For the time being, this has resulted in a filter that slightly abstracts your appearance, almost in the style of animation or some other form of visual arts.

The app claims to use end-to-end encryption and the automatic deletion of its content after seven days — except for messages you yourself recorded, if you’ve chosen to save them as “memorable moments.”

“One of our commitments is to privacy and the right-to-forget,” says Radocchia. “We don’t want to be or need to be storing any of this information.”

LOVE has been soft-launched on the App Store, where it’s been used by a number of testers, and is working to organically grow its user base through an onboarding invite mechanism that asks you to invite at least three people to join. This same onboarding process also carefully explains why LOVE asks for permissions — like using speech recognition to create subtitles.

LOVE says its valuation is around $17 million following pre-seed investments from a combination of traditional startup investors and strategic angel investors across a variety of industries, including tech, film, media, TV, and financial services. The company will raise a seed round this fall.

The app is currently available on iOS, but an Android version will arrive later in the year. (Note that LOVE does not currently support the iOS 15 beta software, where it has issues with speech transcription and in other areas. That should be resolved next week, following an app update now in the works.)

#a-i, #android, #animation, #app-store, #apps, #berlin, #blockchain, #ceo, #chief-digital-officer, #co-founder, #computing, #curator, #deutsche-telekom, #encryption, #facebook-messenger, #financial-services, #google, #ideo, #instant-messaging, #london, #love, #marco-polo, #messenger, #mobile, #mobile-applications, #recent-funding, #san-francisco, #serial-entrepreneur, #singularity-university, #social, #social-media, #software, #speaker, #startups, #technology, #whatsapp

Social platforms wrestle with what to do about the Taliban

With the hasty U.S. military withdrawal from Afghanistan underway after two decades occupying the country, social media platforms have a complex new set of policy decisions to make.

The Taliban has been social media-savvy for years, but social media companies will face new questions as the notoriously brutal, repressive group seeks to present itself as Afghanistan’s legitimate governing body to the rest of the world. Given its ubiquity among political leaders and governments, social media will likely play an even more central role for the Taliban as it seeks to cement control and move toward governing.

Facebook has taken some early precautions to protect its users from potential reprisals as the Taliban seizes power. Through Twitter, Facebook’s Nathaniel Gleicher announced a set of new measures the platform rolled out over the last week. The company added a “one-click” way for people in Afghanistan to instantly lock their accounts, hiding posts on their timeline and preventing anyone they aren’t friends with from downloading or sharing their profile picture.

Facebook also removed the ability for users to view and search anyone’s friends list for people located in Afghanistan. On Instagram, pop-up alerts will provide Afghanistan-based users with information on how to quickly lock down their accounts.

The Taliban has long been banned on Facebook under the company’s rules against dangerous organizations. “The Taliban is sanctioned as a terrorist organization under US law… This means we remove accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them,” a Facebook spokesperson told the BBC.

The Afghan Taliban is actually not designated as a foreign terrorist organization by the U.S. State Department, but the Taliban operating out of Pakistan has held that designation since 2010. While it doesn’t appear on the list of foreign terrorist organizations, the Afghanistan-based Taliban is defined as a terror group according to economic sanctions that the U.S. put in place after 9/11.

While the Taliban is also banned from Facebook-owned WhatsApp, the platform’s end-to-end encryption makes enforcing those rules on WhatsApp more complex. WhatsApp is ubiquitous in Afghanistan and both the Afghan military and the Taliban have relied on the chat app to communicate in recent years. Though Facebook doesn’t allow the Taliban on its platforms, the group turned to WhatsApp to communicate its plans to seize control to the Afghan people and discourage resistance in what was a shockingly swift and frictionless sprint to power. The Taliban even set up a WhatsApp number as a sort of help line for Afghans to report violence or crime, but Facebook quickly shut down the account.

Earlier this week, Facebook’s VP of content policy Monika Bickert noted that even if the U.S. does ultimately remove the Taliban from its lists of sanctioned terror groups, the platform would reevaluate and make its own decision. “… We would have to do a policy analysis on whether or not they nevertheless violate our dangerous organizations policy,” Bickert said.

Like Facebook, YouTube maintains that the Taliban is banned from its platform. YouTube’s own decision also appears to align with sanctions and could be subject to change if the U.S. approach to the Taliban shifts.

“YouTube complies with all applicable sanctions and trade compliance laws, including relevant U.S. sanctions,” a YouTube spokesperson told TechCrunch. “As such, if we find an account believed to be owned and operated by the Afghan Taliban, we terminate it. Further, our policies prohibit content that incites violence.”

On Twitter, Taliban spokesperson Zabihullah Mujahid has continued to share regular updates about the group’s activities in Kabul. Another Taliban representative, Qari Yousaf Ahmadi, also freely posts on the platform. Unlike Facebook and YouTube, Twitter doesn’t have a blanket ban on the group but will enforce its policies on a post-by-post basis.

If the Taliban expands its social media footprint, other platforms might be facing the same set of decisions. TikTok did not respond to TechCrunch’s request for comment, but previously told NBC that it considers the Taliban a terrorist organization and does not allow content that promotes the group.

The Taliban doesn’t appear to have a foothold beyond the most mainstream social networks, but it’s not hard to imagine the former insurgency turning to alternative platforms to remake its image as the world looks on.

While Twitch declined to comment on what it might do if the group were to use the platform, it does have a relevant policy that takes “off-service conduct” into account when banning users. That policy was designed to address reports of abusive behavior and sexual harassment among Twitch streamers.

The new rules also apply to accounts linked to violent extremism, terrorism, or other serious threats, whether those actions take place on or off Twitch. That definition would likely preclude the Taliban from establishing a presence on the platform, even if the U.S. lifts sanctions or changes its terrorist designations in the future.

#afghanistan, #encryption, #facebook, #kabul, #military, #pakistan, #social, #social-media, #social-media-platforms, #spokesperson, #taliban, #tc, #united-states, #whatsapp

Apple’s CSAM detection tech is under fire — again

Apple has encountered monumental backlash to a new child sexual abuse imagery (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.

NeuralHash is designed to identify known CSAM on a user’s device without Apple having to possess the image or know its contents. Because Apple has long resisted scanning users’ photo libraries on its servers, NeuralHash instead looks for known CSAM on the user’s device, which Apple claims is more privacy-friendly, since it limits the scanning to photos destined for iCloud rather than the blanket scanning of all of a user’s files that other companies perform.

Apple does this by looking for images on a user’s device whose hashes — strings of letters and numbers that can uniquely identify an image — match those provided by child protection organizations like NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of a false positive is about one in one trillion accounts per year.
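To make the thresholding logic concrete, here is a minimal Python sketch of the general idea. It is not Apple’s implementation (which uses private set intersection so that sub-threshold matches are never revealed to either side); the hash values and the 30-match threshold below are stand-ins drawn from Apple’s public description.

```python
# Toy sketch of threshold-based matching against a set of known hashes.
# Not Apple's system: the real design hides sub-threshold matches from
# both the device and the server via private set intersection.

MATCH_THRESHOLD = 30  # reporting threshold Apple has described publicly

def count_matches(library_hashes, known_csam_hashes):
    """Count how many of the device's image hashes appear in the known set."""
    return sum(1 for h in library_hashes if h in known_csam_hashes)

def should_flag_for_review(library_hashes, known_csam_hashes):
    """Only once the threshold is crossed is the account flagged for manual review."""
    return count_matches(library_hashes, known_csam_hashes) >= MATCH_THRESHOLD

# Hypothetical usage with placeholder hex digests:
known = {"a3f1...", "9bc0..."}
device = ["a3f1...", "0d42...", "77aa..."]
print(should_flag_for_review(device, known))  # False: one match is below the threshold
```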

But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent people, or manipulated to detect other material that authoritarian nation states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, and that they were able to reconstruct the technology to help other security researchers understand the algorithm better before it’s rolled out to iOS and macOS devices later this year.

It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.

Hash collisions can be a death knell for systems that rely on hashing to keep them secure. Over the years several well-known cryptographic hash algorithms, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.
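Verifying a reported collision is straightforward in principle: hash both inputs and check that the digests match while the underlying bytes differ. The sketch below is a generic illustration using Python’s standard hashlib with SHA-256 as a stand-in and hypothetical file names; for NeuralHash specifically, the digest function would be the reconstructed model rather than a library hash.

```python
# Generic collision check: two different inputs, one identical digest.
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names standing in for the two colliding images.
same_digest = file_digest("image_a.png") == file_digest("image_b.png")
different_bytes = open("image_a.png", "rb").read() != open("image_b.png", "rb").read()
print("collision confirmed" if same_digest and different_bytes else "no collision")
```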

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”

When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.

#algorithms, #apple, #apple-inc, #cryptography, #encryption, #github, #hash, #icloud, #law-enforcement, #password, #privacy, #python, #security, #sha-1, #spokesperson, #tim-cook

Evervault’s ‘encryption as a service’ is now open access

Dublin-based Evervault, a developer-focused security startup which sells encryption via API and is backed by a raft of big-name investors including the likes of Sequoia, Kleiner Perkins and Index Ventures, is coming out of closed beta today — announcing open access to its encryption engine.

The startup says some 3,000 developers are on its waitlist to kick the tyres of its encryption engine, which it calls E3.

Among “dozens” of companies in its closed preview are drone delivery firm Manna, fintech startup Okra, and healthtech company Vital. Evervault says it’s targeting its tools at developers at companies with a core business need to collect and process four types of data: Identity & contact data; Financial & transaction data; Health & medical data; and Intellectual property.

The first suite of products it offers on E3 are called Relay and Cages; the former providing a new way for developers to encrypt and decrypt data as it passes in and out of apps; the latter offering a secure method — using trusted execution environments running on AWS — to process encrypted data by isolating the code that processes plaintext data from the rest of the developer stack.

Evervault is the first company to get a product deployed on Amazon Web Services’ Nitro Enclaves, per founder Shane Curran.

“Nitro Enclaves are basically environments where you can run code and prove that the code that’s running in the data itself is the code that you’re meant to be running,” he tells TechCrunch. “We were the first production deployment of a product on AWS Nitro Enclaves — so in terms of the people actually taking that approach we’re the only ones.”

It shouldn’t be news to anyone to say that data breaches continue to be a serious problem online. And unfortunately it’s sloppy security practices by app makers — or even a total lack of attention to securing user data — that’s frequently to blame when plaintext data leaks or is improperly accessed.

Evervault’s fix for this unfortunate ‘feature’ of the app ecosystem is to make it super simple for developers to bake in encryption via an API — taking the strain of tasks like managing encryption keys. (“Integrate Evervault in 5 minutes by changing a DNS record and including our SDK,” is the developer-enticing pitch on its website.)

“At the high level what we’re doing… is we’re really focusing on getting companies from [a position of] not approaching security and privacy from any perspective at all — up and running with encryption so that they can actually, at the very least, start to implement the controls,” says Curran.

“One of the biggest problems that companies have these days is they basically collect data and the data sort of gets sprawled across both their implementation and their test sets as well. The benefit of encryption is that  you know exactly when data was accessed and how it was accessed. So it just gives people a platform to see what’s happening with the data and start implementing those controls themselves.”

With C-Suite executives paying increasing mind to the need to properly secure data — thanks to years of horrific data breach scandals (and breach déjà vu), and also because of updated data protection laws like Europe’s General Data Protection Regulation (GDPR) which has beefed up penalties for lax security and data misuse — a growing number of startups are now pitching services that promise to deliver ‘data privacy’, touting tools they claim will protect data while still enabling developers to extract useful intel.

Evervault’s website also deploys the term “data privacy” — which it tells us it defines to mean that “no unauthorized party has access to plaintext user/customer data; users/customers and authorized developers have full control over who has access to data (including when and for what purpose); and, plaintext data breaches are ended”. (So encrypted data could, in theory, still leak — but the point is the information would remain protected as a result of still being robustly encrypted.)

Among a number of techniques being commercialized by startups in this space is homomorphic encryption — a process that allows for analysis of encrypted data without the need to decrypt the data.

Evervault’s first offering doesn’t go that far — although its ‘encryption manifesto’ notes that it’s keeping a close eye on the technique. And Curran confirms it is likely to incorporate the approach in time. But he says its first focus has been to get E3 up and running with an offering that can help a broad swathe of developers.

“Fully homomorphic [encryption] is great. The biggest challenge if you’re targeting software developers who are building normal services it’s very hard to build general purpose applications on top of it. So we take another approach — which is basically using trusted execution environments. And we worked with the Amazon Web Services team on being their first production deployment of their new product called Nitro Enclaves,” he tells TechCrunch.

“The bigger focus for us is less about the underlying technology itself and it’s more about taking what the best security practices are for companies that are already investing heavily in this and just making them accessible to average developers who don’t even know how encryption works,” Curran continues. “That’s where we get the biggest nuance of Evervault vs some of these other privacy and security companies — we build for developers who don’t normally think about security when they’re building things and try to build a great experience around that… so it’s really just about bridging the gap between ‘the state of the art’ and bringing it to average developers.”

“Over time fully homomorphic encryption is probably a no-brainer for us but both in terms of performance and flexibility for your average developer to get up and running it didn’t really make sense for us to build on it in its current form. But it’s something we’re looking into. We’re really looking at what’s coming out of academia — and if we can fit it in there. But in the meantime it’s all this trusted execution environment,” he adds.
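As a rough illustration of what ‘computing on encrypted data’ means, the snippet below uses textbook (unpadded) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is a toy with insecure demo values, not how production schemes such as Paillier or BFV work, and it is not something Evervault has said it uses; it only demonstrates the underlying property.

```python
# Toy demonstration of a homomorphic property using textbook RSA.
# Insecure demo key (p=61, q=53); real schemes and parameters differ entirely.

n, e, d = 3233, 17, 2753

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 12, 7
blind_product = (encrypt(a) * encrypt(b)) % n   # computed without seeing a or b
assert decrypt(blind_product) == (a * b) % n    # decrypts to 84
```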

Curran suggests Evervault’s main competitor at this point is open source encryption libraries — so basically developers opting to ‘do’ the encryption piece themselves. Hence it’s zeroing in on the service aspect of its offering; taking on encryption management tasks so developers don’t have to, while also reducing their security risk by ensuring they don’t have to touch data in the clear.

“When we’re looking at those sort of developers — who’re already starting to think about doing it themselves — the biggest differentiator with Evervault is, firstly the speed of integration, but more importantly it’s the management of encrypted data itself,” Curran suggests. “With Evervault we manage the keys but we don’t store any data and our customers store encrypted data but they don’t store keys. So it means that even if they want to encrypt something with Evervault they never have all the data themselves in plaintext — whereas with open source encryption they’ll have to have it at some point before they do the encryption. So that’s really the base competitor that we see.”
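The split Curran describes (keys on one side, ciphertext on the other) can be illustrated with a generic sketch. This is not Evervault’s SDK or API; it uses the open source `cryptography` library’s Fernet primitive to show the pattern of an application database that only ever stores ciphertext while a separate key-holding service performs the cryptography.

```python
# Generic illustration of separating key custody from data storage.
# Not Evervault's actual SDK; Fernet stands in for any managed key service.
from cryptography.fernet import Fernet  # pip install cryptography

class KeyService:
    """Holds the key and performs encryption/decryption; never persists data."""
    def __init__(self):
        self._fernet = Fernet(Fernet.generate_key())

    def encrypt(self, plaintext: bytes) -> bytes:
        return self._fernet.encrypt(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._fernet.decrypt(ciphertext)

class AppDatabase:
    """Stores only ciphertext; has no access to the key material."""
    def __init__(self):
        self._rows = {}

    def save(self, record_id: str, ciphertext: bytes) -> None:
        self._rows[record_id] = ciphertext

    def load(self, record_id: str) -> bytes:
        return self._rows[record_id]

keys, db = KeyService(), AppDatabase()
db.save("user-42", keys.encrypt(b"jane@example.com"))  # the app never stores plaintext
print(keys.decrypt(db.load("user-42")))                # decrypted only when needed
```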

“Obviously there are some other projects out there — like Tim Berners-Lee’s Solid project and so on. But it’s not clear that there’s anybody else taking the developer-experience focused approach to encryption specifically. Obviously there’s a bunch of API security companies… but encryption through an API is something we haven’t really come across in the past with customers,” he adds.

While Evervault’s current approach sees app makers’ data hosted in dedicated trusted execution environments running on AWS, the information still exists there as plaintext — for now. But as encryption continues to evolve it’s possible to envisage a future where apps aren’t just encrypted by default (Evervault’s stated mission is to “encrypt the web”) but where user data, once ingested and encrypted, never needs to be decrypted — as all processing can be carried out on ciphertext.

Homomorphic encryption has unsurprisingly been called the ‘holy grail’ of security and privacy — and startups like Duality are busy chasing it. But the reality on the ground, online and in app stores, remains a whole lot more rudimentary. So Evervault sees plenty of value in getting on with trying to raise the encryption bar more generally.

Curran also points out that plenty of developers aren’t actually doing much processing of the data they gather — arguing therefore that caging plaintext data inside a trusted execution environment can thus abstract away a large part of the risk related to these sort of data flows anyway. “The reality is most developers who are building software these days aren’t necessarily processing data themselves,” he suggests. “They’re actually just sort of collecting it from their users and then sharing it with third party APIs.

“If you look at a startup building something with Stripe — the credit card flows through their systems but it always ends up being passed on somewhere else. I think that’s generally the direction that most startups are going these days. So you can trust the execution — depending on the security of the silicon in an Amazon data center kind of makes the most sense.”

On the regulatory side, the data protection story is a little more nuanced than the typical security startup spin.

While Europe’s GDPR certainly bakes security requirements into law, the flagship data protection regime also provides citizens with a suite of access rights attached to their personal data — a key element that’s often overlooked in developer-first discussions of ‘data privacy’.

Evervault concedes that data access rights haven’t been front of mind yet, with the team’s initial focus being squarely on encryption. But Curran tells us it plans — “over time” — to roll out products that will “simplify access rights as well”.

“In the future, Evervault will provide the following functionality: Encrypted data tagging (to, for example, time-lock data usage); programmatic role-based access (to, for example, prevent an employee seeing data in plaintext in a UI); and, programmatic compliance (e.g. data localization),” he further notes on that.

 

#api, #aws, #cryptography, #developer, #dublin, #encryption, #europe, #evervault, #general-data-protection-regulation, #homomorphic-encryption, #nitro-enclaves, #okra, #privacy, #security, #sequoia, #shane-curran, #tim-berners-lee

Baffle lands $20M Series B to simplify data-centric encryption

California-based Baffle, a startup that aims to prevent data breaches by keeping data encrypted from production through processing, has raised $20 million in Series B funding.

Baffle was founded in 2015 to help thwart the increasing threats to enterprise assets in public and private clouds. Unlike many solutions that only encrypt data in transit and at rest, Baffle’s solution keeps data encrypted while it’s being processed by databases and applications, through a “security mesh” that de-identifies sensitive data and that the company claims has no performance impact on customers.

The startup says its goal is to make data breaches “irrelevant” by efficiently encrypting data wherever it may be, so that even if there is a security breach, the data will be unavailable and unusable by hackers.

“Most encryption is misapplied, and quite frankly, doesn’t do anything to protect your data,” the startup claims. “The protection measures that are most commonly used do nothing to protect you against modern hacks and breaches.”

Baffle supports all major cloud platforms, including AWS, Google Cloud and Microsoft Azure, and it’s currently used to protect more than 100 billion records in financial services, healthcare, retail, industrial IoT, and government, according to the startup. The company claims it stores records belonging to the top 5 global financial services companies and five of the top 25 global companies.

“Securing IT infrastructure—networks, devices, databases, lakes and warehouses—is never complete. Constant change makes it impossible to adopt a zero trust security posture without protecting the data itself,” said Ameesh Divatia, co-founder and CEO of Baffle.

The startup’s Series B funding round, which comes more than three years after it closed $6M in Series A financing, was led by new investor Celesta Capital with contributions from National Grid Partners, Lytical Ventures and Nepenthe Capital, and brings the startup’s total funding to date to $36.5 million.

Baffle, which says it has seen threefold revenue growth over the past year, tells TechCrunch that the funds will be used to help it grow to meet market demand and to invest further in product development. It also plans to double its headcount from 25 to 50 employees over the next 12 months.

“With this investment, we can meet market demand for data-centric cloud data protection that enables responsible digital information sharing and breaks the cycle of continuous data and privacy breaches,” Divatia added.


#cloud, #computer-security, #cryptography, #data-protection, #data-security, #encryption, #security

Facebook brings end-to-end encryption to Messenger calls and Instagram DMs

Facebook has extended the option of using end-to-end encryption for Messenger voice calls and video calls.

End-to-end encryption (E2EE) — a security feature that prevents third-parties from eavesdropping on calls and chats — has been available for text conversations on Facebook’s flagship messaging service since 2016. Although the company has faced pressure to roll back its end-to-end encryption plans, Facebook is now extending this protection to both voice and video calls on Messenger, which means that “nobody else, including Facebook, can see or listen to what’s sent or said.”

“End-to-end encryption is already widely used by apps like WhatsApp to keep personal conversations safe from hackers and criminals,” Ruth Kricheli, director of product management for Messenger, said in a blog post on Friday. “It’s becoming the industry standard and works like a lock and key, where just you and the people in the chat or call have access to the conversation.”

Facebook has some other E2EE features in the works, too. It’s planning to start public tests of end-to-end encryption for group chats and calls in Messenger in the coming weeks and is also planning a limited test of E2EE for Instagram direct messages. Those involved in the trial will be able to opt-in to end-to-end encrypted messages and calls for one-on-one conversations carried out on the photo-sharing platform.

Beyond encryption, the social networking giant is also updating its expiring messages feature, which is similar to the ephemeral messages feature available on Facebook-owned WhatsApp. It’s now offering more options for people in the chat to choose the amount of time before all new messages disappear, from as few as 5 seconds to as long as 24 hours.

“People expect their messaging apps to be secure and private, and with these new features, we’re giving them more control over how private they want their calls and chats to be,” Kricheli added.

News of Facebook ramping up its E2EE rollout plans comes just days after the company changed its privacy settings — again.

#apps, #computing, #e2ee, #encryption, #end-to-end-encryption, #facebook, #facebook-messenger, #instagram, #messenger, #mobile-applications, #operating-systems, #product-management, #security, #social-media, #software, #whatsapp

Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features — the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM) — are universally accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are or could be about to be viewing explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy member. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels that it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques which mostly involve scanning through the entire contents of users’ libraries on cloud services — that, as you point out, isn’t something that we’ve ever done; to look through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communications safety in Messages features and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why to do it is because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device side component — and it has the device side component by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers and the question becomes why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features: the one global operating system that’s the same for all users who download it and install it. And so in that one property it is much more challenging even to target it to an individual user. On the server side that’s actually quite easy — trivial. To be able to have some of those properties, and building it into the device and ensuring it’s the same for all users with the feature enabled, gives a strong privacy property.

Secondly, you point out how use of on device technology is privacy preserving, and in this case, that’s a representation that I would make to you, again. That it’s really the alternative to where users’ libraries have to be processed on a server that is less private.

The things that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behavior. Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud Library has to be processed as a result of this feature. Instead what we’re able to do is to create these cryptographic safety vouchers. They have mathematical properties that say, Apple will only be able to decrypt the contents or learn anything about the images and users specifically that collect photos that match illegal, known CSAM hashes, and that’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in a clear decrypted form and run by routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images] versus our system only [learning] what is determined to be those images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing that an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — was that they would want to trigger a manual review of an account doesn’t make much sense. 

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach that point where we expect a false reporting rate for review of one in 1 trillion accounts per year. So, working against the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.

#apple, #apple-inc, #apple-photos, #china, #cloud-applications, #cloud-computing, #cloud-services, #computing, #cryptography, #encryption, #european-union, #head, #icloud, #ios, #iphone, #law-enforcement, #operating-system, #operating-systems, #privacy, #private, #siri, #software, #united-states, #webmail

Apple says it will begin scanning iCloud Photos for child abuse images

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with some resistance from security experts and privacy advocates, but also from users who are accustomed to an approach to security and privacy that most other companies don’t have.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before anything ever reaches Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. With an ordinary cryptographic hash, modifying an image even slightly changes the hash and can prevent matching. Apple says NeuralHash instead tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.
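
To see how a perceptual hash differs from an ordinary cryptographic hash, here is a minimal “difference hash” sketch in Python using Pillow. It is a generic illustration only, not NeuralHash itself, and the file names in the usage comment are hypothetical.

```python
from PIL import Image  # pip install pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """A simple 'difference hash': resized or lightly edited copies of an image
    tend to produce identical or near-identical fingerprints, whereas a
    cryptographic hash would change completely. Illustration only -- this is
    not NeuralHash."""
    # Grayscale, then shrink to (hash_size+1) x hash_size pixels.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits  # 64-bit fingerprint

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests similar images."""
    return bin(a ^ b).count("1")

# Hypothetical usage:
# print(hamming(dhash("original.jpg"), dhash("resized_copy.jpg")))
```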

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.
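
The commutative-blinding idea behind many private set intersection protocols can be sketched in a few lines. The toy below uses a Diffie–Hellman-style construction with a deliberately insecure demo prime; it is not Apple’s protocol (in Apple’s design the device does not learn the result, and matches stay sealed in safety vouchers until the threshold is crossed), just an illustration of how two parties can compare hashes without revealing them.

```python
import hashlib
import secrets

P = (1 << 127) - 1  # toy Mersenne prime group -- NOT secure, demo only

def hash_to_group(item: bytes) -> int:
    """Map an item to a nonzero element of Z_p (toy 'random oracle')."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % (P - 1) + 1

def blind(element: int, secret: int) -> int:
    """Exponentiation is commutative: (x^a)^b == (x^b)^a mod P."""
    return pow(element, secret, P)

# Hypothetical stand-ins for image hashes on each side.
device_items = [b"photo-hash-1", b"photo-hash-2", b"photo-hash-3"]
server_items = [b"photo-hash-2", b"known-bad-hash-9"]

a = secrets.randbelow(P - 3) + 2   # device's secret exponent
b = secrets.randbelow(P - 3) + 2   # server's secret exponent

# 1. Device blinds its hashes and sends them over.
device_blinded = [blind(hash_to_group(x), a) for x in device_items]
# 2. Server double-blinds them and sends back, plus its own blinded database.
double_blinded = [blind(v, b) for v in device_blinded]               # H(x)^(ab)
server_blinded = [blind(hash_to_group(y), b) for y in server_items]  # H(y)^b
# 3. Raising the server's values to `a` gives H(y)^(ab); equal values mean a match,
#    yet neither side ever saw the other's raw hashes.
server_double = {blind(v, a) for v in server_blinded}
matches = [x for x, db in zip(device_items, double_blinded) if db in server_double]
print(matches)  # [b'photo-hash-2']
```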

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it only to decrypt the contents if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said — for example — that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any of those ten images.
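
The “split a secret into a thousand pieces, recover it from any ten” idea maps onto Shamir’s classic threshold secret sharing. The sketch below is a minimal, illustrative implementation over a prime field, not Apple’s actual construction.

```python
import secrets

PRIME = 2**127 - 1  # field modulus; must exceed the secret (toy parameter)

def make_shares(secret: int, threshold: int, total: int):
    """Split `secret` into `total` shares; any `threshold` of them recover it,
    fewer reveal nothing about it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, total + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=10, total=1000)
assert recover(shares[:10]) == 123456789      # any ten shares suffice
assert recover(shares[500:510]) == 123456789  # which ten doesn't matter
```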

It’s at that point Apple can decrypt the matching images, manually verify the contents, disable a user’s account and report the imagery to NCMEC, which is then passed to law enforcement. Apple says this process is more privacy-minded than scanning files in the cloud, as NeuralHash only searches for known — and not new — child abuse imagery. Apple said there is a one in one trillion chance of a false positive, but an appeals process is in place in the event an account is mistakenly flagged.

Apple has published technical details about how NeuralHash works on its website, and the documentation has been reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have rebuffed efforts to backdoor their systems, but have faced resistance against efforts to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access, Reuters reported last year that Apple dropped a plan for encrypting users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The news about Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts getting flagged and shuttered, but Apple downplayed the concerns and said a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. at first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but will be a requirement if users do. After all, your device belongs to you but Apple’s cloud does not.

#apple, #apple-inc, #cloud-applications, #cloud-services, #computing, #cryptography, #encryption, #facebook, #federal-bureau-of-investigation, #icloud, #ios, #iphone, #johns-hopkins-university, #law-enforcement, #macos, #privacy, #security, #technology, #u-s-government, #united-states, #webmail

WhatsApp photos and videos can now disappear after a single viewing

WhatsApp previously said it would let users send disappearing photos and videos, and this week the feature is rolling out to everybody. Anyone using the Facebook-owned messaging app can share a photo or video in “view once” mode, allowing a single viewing before the media in question goes poof. Media shared with “view once” selected will show up as opened after the intended audience takes a peek.

The company notes that the new feature could be helpful for an array of needs that definitely aren’t sending nudes, like sharing a photo of some clothes you tried on or giving someone your wifi password. In the fine print, the company would like to remind you that just because the photos or video will vanish, that doesn’t prevent someone from taking a screenshot (and you won’t know if they do).

Facebook says the new feature is a step to give users “even more control over their privacy,” a song it’s been singing since Mark Zuckerberg first declared a new “privacy-focused vision” for the company back in 2019. Facebook has made a few gestures toward letting people wrest control of their online privacy since then, streamlining audience controls on its core app and enabling disappearing messages in WhatsApp.

The company has also been talking a big game about bringing end-to-end encryption to its full stable of messaging services, which it plans to make interoperable in the future. WhatsApp enabled end-to-end encryption by default back in 2016, but for Messenger and Instagram, the hallmark privacy measure could still be years out.

#encryption, #facebook, #mark-zuckerberg, #messenger, #mobile-applications, #online-privacy, #private-message, #social, #social-media, #tc, #whatsapp

A Silicon Valley VC firm with $1.8B in assets was hit by ransomware

Advanced Technology Ventures, a Silicon Valley venture capital firm with more than $1.8 billion in assets under its management, was hit by a ransomware attack in July that saw cybercriminals steal personal information on the company’s private investors, or limited partners (LPs).

In a letter to the Maine attorney general’s office, ATV said it became aware of the attack on July 9 after its servers storing financial information had been encrypted by ransomware. By July 26, ATV learned that data had been stolen from the servers before the files were encrypted, a common “double extortion” tactic used by ransomware groups, which then threaten to publish the files online if the ransom to decrypt the files is not paid.

The letter said ATV believes the names, email addresses, phone numbers and Social Security numbers of the individual investors in ATV’s funds were stolen in the attack. Some 300 individuals were affected by the incident, including one person in Maine, according to a listing on the Maine attorney general’s data breach notification portal.

Venture capital firms often do not disclose all of their LPs — the investors who have thrown millions into an investment vehicle — to the public. A number of pre-approved names may be included in an announcement, but overall, a company’s private investors try to stay that way: private. The reasons vary, but it comes down to secrecy and a degree of competitive advantage: The firm may not want competitors to know who is backing them, and an investor may not want others to know where their money is going. This particular attack likely stole key information on a hush-hush part of how venture money works.

ATV said it notified the FBI about the attack. A spokesperson for the FBI did not immediately comment when reached by TechCrunch. ATV’s managing director Mike Carusi did not respond to questions sent by TechCrunch on Monday.

The venture capital firm, based in Menlo Park, California with offices in Boston, was founded in 1979 and invests largely in technology, communications, software and services, and healthcare technology. The company was an early investor in many of the startups from the last decade, like software library Fandango, Host Analytics (now Planful) and Apptegic (now Evergage). Its more recent investments include Tripwire, which was later sold to cybersecurity company Belden for $710 million; Cedexis, a network traffic monitoring startup acquired by Cisco in 2018; and Actifio, which was sold to Google in 2020.


Natasha Mascarenhas contributed reporting. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send TechCrunch files or documents using our SecureDrop.

#attorney-general, #atv, #boston, #california, #cedexis, #cisco, #cybercrime, #encryption, #fandango, #federal-bureau-of-investigation, #google, #healthcare-technology, #maine, #private-equity, #ransomware, #securedrop, #security, #signal, #software, #spokesperson, #venture-capital

VPN servers seized by Ukrainian authorities weren’t encrypted

A tunnel made of ones and zeroes.

Enlarge (credit: Getty Images)

Privacy-tools-seller Windscribe said it failed to encrypt company VPN servers that were recently confiscated by authorities in Ukraine, a lapse that made it possible for the authorities to impersonate Windscribe servers and capture and decrypt traffic passing through them.

The Ontario, Canada-based company said earlier this month that two servers hosted in Ukraine were seized as part of an investigation into activity that had occurred a year earlier. The servers, which ran the OpenVPN virtual private network software, were also configured to use a setting that was deprecated in 2018 after security research revealed vulnerabilities that could allow adversaries to decrypt data.

“On the disk of those two servers was an OpenVPN server certificate and its private key,” a Windscribe representative wrote in the July 8 post. “Although we have encrypted servers in high-sensitivity regions, the servers in question were running a legacy stack and were not encrypted. We are currently enacting our plan to address this.”
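
As a generic illustration of what protecting key material at rest can look like (this is not a description of Windscribe’s remediation plan), a server’s private key can be stored as a passphrase-protected PEM so that a seized disk does not yield a usable key on its own. The sketch below uses Python’s cryptography package; the file name and passphrase are placeholders.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key for demonstration; in practice this would be the server's
# existing private key loaded from a secure source.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize it encrypted at rest: the PEM on disk is useless without the
# passphrase, which must be supplied out-of-band when the service starts.
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"use-a-real-passphrase"),
)

with open("server.key.pem", "wb") as f:  # placeholder path
    f.write(pem)
```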

Read 8 remaining paragraphs | Comments

#biz-it, #encryption, #openvpn, #tech, #virtual-private-networks, #vpns, #windscribe

Nym gets $6M for its anonymous overlay mixnet to sell privacy as a service

Switzerland-based privacy startup Nym Technologies has raised $6 million, which is being loosely pegged as a Series A round.

Earlier raises included a $2.5M seed round in 2019. The founders also took in grant money from the European Union’s Horizon 2020 research fund during an earlier R&D phase developing the network tech.

The latest funding will be used to continue commercial development of network infrastructure which combines an old idea for obfuscating the metadata of data packets at the transport network layer (Mixnets) with a crypto inspired reputation and incentive mechanism to drive the required quality of service and support a resilient, decentralized infrastructure.

Nym’s pitch is it’s building “an open-ended anonymous overlay network that works to irreversibly disguise patterns in Internet traffic”.

Unsurprisingly, given its attention to crypto mechanics, investors in the Series A have strong crypto ties — and cryptocurrency-related use-cases are also where Nym expects its first users to come from. The round was led by Polychain Capital, with participation from a number of smaller European investors including Eden Block, Greenfield One, Maven11, Tioga, and 1kx.

Commenting in a statement, Will Wolf of Polychain Capital, said: “We’re incredibly excited to partner with the Nym team to further their mission of bringing robust, sustainable and permissionless privacy infrastructure to all Internet users. We believe the Nym network will provide the strongest privacy guarantees with the highest quality of service of any mixnet and thus may become a very valuable piece of core internet infrastructure.”

The Internet’s ‘original sin’ was that core infrastructure wasn’t designed with privacy in mind. Therefore the level of complexity involved in Mixnets — shuffling and delaying encrypted data packets in order to shield sender-to-recipient metadata from adversaries with a global view of a network — probably seemed like over-engineering all the way back when the web’s scaffolding was being pieced together.

But then came Bitcoin and the crypto boom and — also in 2013 — the Snowden revelations, which ripped the veil off the NSA’s ‘collect it all’ mantra as Booz Allen Hamilton sub-contractor Edward Snowden risked it all to dump data on his own (and other) governments’ mass surveillance programs. Suddenly network-level adversaries were front page news. And so was Internet privacy.

Since Snowden’s big reveal, there’s been a slow burn of momentum for privacy tech — with rising consumer awareness fuelling usage of services like e2e encrypted email and messaging apps. Sometimes in spurts and spikes, related to specific data breaches and scandals. Or indeed privacy-hostile policy changes by mainstream tech giants (hi Facebook!).

Legal clashes between surveillance laws and data protection rights are also causing growing b2b headaches, especially for US-based cloud services, while growth in cryptocurrencies is driving demand for secure infrastructure to support crypto trading.

In short, the opportunity for privacy tech, both b2b and consumer-facing, is growing. And the team behind Nym thinks conditions look ripe for general purpose privacy-focused networking tech to take off too.

Of course there is already a well known anonymous overlay network in existence: Tor, which does onion routing to obfuscate where traffic was sent from and where it ends up.

The node-hopping component of Nym’s network shares a feature with the Tor network. But Tor does not do packet mixing — and Nym’s contention is that a functional mixnet can provide even stronger network-level privacy.

It sets out the case on its website — arguing that “Tor’s anonymity properties can be defeated by an entity that is capable of monitoring the entire network’s ‘entry’ and ‘exit’ nodes” since it does not take the extra step of adding “timing obfuscation” or “decoy traffic” to obfuscate the patterns that could be exploited to deanonymize users.

“Although these kinds of attacks were thought to be unrealistic when Tor was invented, in the era of powerful government agencies and private companies, these kinds of attacks are a real threat,” Nym suggests, further noting another difference in that Tor’s design is “based on a centralized directory authority for routing”, whereas Nym fully decentralizes its infrastructure.

Proving that suggestion will be quite the challenge, of course. And Nym’s CEO is upfront in his admiration for Tor — saying it is the best technology for securing web browsing right now.

“Most VPNs and almost all cryptocurrency projects are not as secure or as private as Tor — Tor is the best we have right now for web browsing,” says Nym founder and CEO Harry Halpin. “We do think Tor made all the right decisions when they built the software — at the time there was no interest from venture capital in privacy, there was only interest from the US government. And the Internet was too slow to do a mixnet. And what’s happened is, fast-forward 20 years, things have transformed.

“The US government is no longer viewed as a defender of privacy. And now — weirdly enough — all of a sudden venture capital is interested in privacy and that’s a really big change.”

With such a high level of complexity involved in what Nym’s doing it will, very evidently, need to demonstrate the robustness of its network protocol and design against attacks and vulnerabilities on an ongoing basis — such as those seeking to spot patterns or identify dummy traffic and be able to relink packets to senders and receivers.

The tech is open source but Nym confirms the plan is to use some of the Series A funding for an independent audit of new code.

It also touts the number of PhDs it’s hired to-date — and plans to hire a bunch more, saying it will be using the new round to more than double its headcount, including hiring cryptographers and developers, as well as marketing specialists in privacy.

The main motivation for the raise, per Halpin, is to spend on more R&D to explore — and (he hopes) — solve some of the more specific use-cases it’s kicking around, beyond the basic one of letting developers use the network to shield user traffic (a la Tor).

Nym’s whitepaper, for example, touts the possibility for the tech being used to enable users to prove they have the right to access a service without having to disclose their actual identity to the service provider.

Another big difference vs Tor is that Tor is a not-for-profit — whereas Nym wants to build a for-profit business around its Mixnet.

It intends to charge users for access to the network — so for the obfuscation-as-a-service of having their data packets mixed into a crowd of shuffled, encrypted and proxy node-hopped others.

But potentially also for some more bespoke services — with Nym’s team eyeing specific use-cases such as whether its network could offer itself as a ‘super VPN’ to the banking sector to shield their transactions; or provide a secure conduit for AI companies to carry out machine learning processing on sensitive data-sets (such as healthcare data) without risking exposing the information itself.

“The main reason we raised this Series A is we need to do more R&D to solve some of these use-cases,” says Halpin. “But what impressed Polychain was they said wow there’s all these people that are actually interested in privacy — that want to run these nodes, that actually want to use the software. So originally when we envisaged this startup we were imagining more b2b use-cases I guess and what I think Polychain was impressed with was there seemed to be demand from b2c; consumer demand that was much higher than expected.”

Halpin says they expect the first use-cases and early users to come from the crypto space — where privacy concerns routinely attach themselves to blockchain transactions.

The plan is to launch the software by the end of the year or early next, he adds.

“We will have at least some sort of chat applications — for example it’s very easy to use our software with Signal… so we do think something like Signal is an ideal use-case for our software — and we would like to launch with both a [crypto] wallet and a chat app,” he says. “Then over the next year or two — because we have this runway — we can work more on kind of higher speed applications. Things like try to find partnerships with browsers, with VPNs.”

At this (still fairly early) stage of the network’s development — an initial testnet was launched in 2019 — Nym’s eponymous network has amassed over 9,000 nodes. These distributed, crowdsourced providers are only earning a NYM reputation token for now, and it remains to be seen how much exchangeable crypto value they might earn in the future as suppliers of key infrastructure if/when usage takes off.

Why didn’t Mixnets as a technology take off before, though? After all the idea dates back to the 1980s. There’s a range of reasons, according to Halpin — issues with scalability being one of them. And a key design “innovation” he points to vis-a-vis its implementation of Mixnet technology is the ability to keep adding nodes so the network is able to scale to meet demand.

Another key addition is that the Nym protocol injects dummy traffic packets into the shuffle to make it harder for adversaries to decode the path of any particular message — aiming to bolster the packet mixing process against vulnerabilities like correlation attacks.

While the Nym network’s crypto-style reputation and incentive mechanism — which works to ensure the quality of mixing (“via a novel proof of mixing scheme”, as its whitepaper puts it) — is another differentiating component Halpin flags.

“One of our core innovations is we scale by adding servers. And the question is how do we add servers? To be honest we added servers by looking at what everyone had learned about reputation and incentives from cryptocurrency systems,” he tells TechCrunch. “We copied that — those insights — and attached them to mix networks. So the combination of the two things ends up being pretty powerful.

“The technology does essentially three things… We mix packets. You want to think about an unencrypted packet like a card, an encrypted packet you flip over so you don’t know what the card says, you collect a bunch of cards and you shuffle them. That’s all that mixing is — it just randomly permutes the packets… Then you hand them to the next person, they shuffle them. You hand them to the third person, they shuffle them. And then they hand the cards to whoever is at the end. And as long as different people gave you cards at the beginning you can’t distinguish those people.”
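
As a toy rendering of that card-shuffling idea, together with the decoy traffic mentioned above, the sketch below batches fixed-size packets, mixes in dummies and forwards them in random order. It is a conceptual illustration only, not Nym’s actual protocol, which builds on the Loopix design with layered Sphinx-style packets and client-generated cover traffic.

```python
import random
import secrets

PACKET_SIZE = 2048  # uniform size so packets can't be told apart by length

def pad(payload: bytes) -> bytes:
    """Pad every real packet to the same fixed size."""
    return payload.ljust(PACKET_SIZE, b"\x00")

def make_dummy() -> bytes:
    """Decoy packet: random bytes, indistinguishable from padded ciphertext."""
    return secrets.token_bytes(PACKET_SIZE)

def mix_batch(incoming: list[bytes], n_dummies: int = 4) -> list[bytes]:
    """Collect a batch, add decoys, then emit in a random order so an observer
    cannot link arrival order to departure order."""
    batch = [pad(p) for p in incoming] + [make_dummy() for _ in range(n_dummies)]
    random.shuffle(batch)
    return batch

incoming = [b"encrypted packet from alice", b"encrypted packet from bob"]
outgoing = mix_batch(incoming)
print(len(outgoing), "uniform-size packets leave in shuffled order")
```

In a real mixnet each hop would also strip a layer of encryption and add a random delay; the point of the sketch is only the batch-shuffle-plus-decoys step.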

More generally, Nym also argues it’s an advantage to be developing mixnet technology that’s independent and general purpose — folding all sorts and types of traffic into a shuffled pack — suggesting it can achieve greater privacy for users’ packets in this pooled crowd vs similar tech offered by a single provider to only their own users (such as the ‘privacy relay’ network recently announced by Apple).

In the latter case, an attacker already knows that the relayed traffic is being sent by Apple users who are accessing iCloud services. Whereas — as a general purpose overlay layer — Nym can, in theory, provide contextual coverage to users as part of its privacy mix. So another key point is that the level of privacy available to Nym users scales as usage does.

Historical performance issues with bandwidth and latency are other reasons Halpin cites for Mixnets being largely left on the academic shelf. (There have been some other deployments, such as Loopix — which Nym’s whitepaper says its design builds on by extending it into a “general purpose incentivized mixnet architecture” — but it’s fair to say the technology hasn’t exactly gone mainstream.)

Nonetheless, Nym’s contention is the tech’s time is finally coming; firstly because technical challenges associated with Mixnets can be overcome — because of gains in Internet bandwidth and compute power; as well as through incorporating crypto-style incentives and other design tweaks it’s introducing (e.g. dummy traffic) — but also, and perhaps most importantly, because privacy concerns aren’t simply going to disappear.

Indeed, Halpin suggests governments in certain countries may ultimately decide their exposure to certain mainstream tech providers which are subject to state mass surveillance regimes — whether that’s the US version or China’s flavor or elsewhere —  simply isn’t tenable over the longer run and that trusting sensitive data to corporate VPNs based in countries subject to intelligence agency snooping is a fool’s game.

(And it’s interesting to note, for example, that the European Data Protection Supervisor is currently conducting a review of EU bodies use of mainstream US cloud services from AWS and Microsoft to check whether they are in compliance with last summer’s Schrems II ruling by the CJEU, which struck down the EU-US Privacy Shield deal, after again finding US surveillance law to be essentially incompatible with EU privacy rights… )

Nym is betting that some governments will — eventually — come looking for alternative technology solutions to the spying problem. Although government procurement cycles make that play a longer game.

In the near term, Halpin says they expect interest and usage for the metadata-obscuring tech to come from the crypto world where there’s a need to shield transactions from view of potential hackers.

“The websites that [crypto] people use — these exchanges — have also expressed interest,” he notes, flagging that Nym also took in some funding from Binance Labs, the VC arm of the cryptocurrency exchange, after it was chosen to go through the Lab’s incubator program in 2018.

The issue for crypto users is their networks are (relatively) small, per Halpin — which makes them vulnerable to deanonymization attacks.

“The thing with a small network is it’s easy for random people to observe this. For example people who want to hack your exchange wallet — which happens all the time. So what cryptocurrency exchanges and companies that deal with cryptocurrency are concerned about is typically they do not want the IP address of their wallet revealed for certain kinds of transactions,” he adds. “This is a real problem for cryptocurrency exchanges — and it’s not that their enemy is the NSA; their enemy could be — and almost always is — an unknown, often lone, but highly skilled hacker. And these kinds of people can do network observations, on smaller networks like cryptocurrency networks, that are essentially as powerful as what the NSA could do to the entire Internet.”

There are now a range of startups seeking to decentralize various aspects of Internet or common computing infrastructure — from file storage to decentralized DNS. And while some of these tout increased security and privacy as core benefits of decentralization — suggesting they can ‘fix’ the problem of mass surveillance by having an architecture that massively distributes data — Halpin argues that a privacy claim being routinely attached to decentralized infrastructure is misplaced. (He points to a paper he co-authored on this topic, entitled Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments.)

“Almost all of those projects gain decentralization at the cost of privacy,” he argues. “Because any decentralized system is easier to observe because the crowd has been spread out… than a centralized system — to a large extent. If the adversary is sufficiently powerful [they can observe] all the participants in the system. And historically we believe that most people who are interested in decentralization are not experts in privacy and underestimate how easy it is to observe decentralized systems — because most of these systems are actually pretty small.”

He points out there are “only” 10,000 full nodes in Bitcoin, for example, and a similar number in Ethereum — while other, newer and more nascent decentralized services are likely to have fewer nodes, maybe even just a few hundred or thousand.

And while the Nym network has a similar amount of nodes to Bitcoin, the difference is it’s a mixnet too — so it’s not just decentralized but it’s also using multiple layers of encryption and traffic mixing and the various other obfuscation steps which he says “none of these other people do”.

“We assume the enemy is observing everything in our software,” he adds. “We are not what we call ‘security through obscurity’ — security through obscurity means you assume the enemy just can’t see everything; isn’t looking at your software too carefully; doesn’t know where all your servers are. But — realistically — in an age of mass surveillance, the enemy will know where all your services are and they can observe all the packets coming in, all the packets coming out. And that’s a real problem for decentralized networks.”

Post-Snowden, there’s certainly been growing interest in privacy by design — and a handful of startups and companies have been able to build momentum for services that promise to shield users’ data, such as DuckDuckGo (non-tracking search); Protonmail (e2e encrypted email); and Brave (privacy-safe browsing). Apple also, of course, very successfully markets its premium hardware under a ‘privacy respecting’ banner.

Halpin says he wants Nym to be part of that movement; building privacy tech that can touch the mainstream.

“Because there’s so much venture capital floating into the market right now I think we have a once in a generation chance — just as everyone was excited about p2p in 2000 — we have a once in a generation chance to build privacy technology and we should build companies which natively support privacy, rather than just trying to bolt it on, in a half hearted manner, onto non-privacy respecting business models.

“Now I think the real question — which is why we didn’t raise more money — is, is there enough consumer and business demand that we can actually discover what the cost of privacy actually is? How much are people willing to pay for it and how much does it cost? And what we do is we do privacy on such a fundamental level is we say what is the cost of a privacy-enhanced byte or packet? So that’s what we’re trying to figure out: How much would people pay just for a privacy-enhanced byte and how much does just a privacy enhanced byte cost? And is this a small enough marginal cost that it can be added to all sorts of systems — just as we added TLS to all sorts of systems and encryption.”

#aws, #binance-labs, #blockchain, #cloud-services, #cryptocurrency, #cryptography, #encryption, #europe, #european-union, #machine-learning, #p2p, #polychain-capital, #privacy, #privacy-technology, #routing, #snowden, #surveillance-law, #tc, #tor, #vpn