Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. While the goals of these features, the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM), are widely accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and spoke at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which arrive closely intertwined but are really three completely separate systems with similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public.

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with hashes from the National Center for Missing & Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this material — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are about to view has been detected as explicit, and it tells them that the parent will also be alerted.

Interventions in Siri and search – A feature that will intervene when a user searches for CSAM-related terms through Siri and search, informing the user of the intervention and offering resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or who assume they might come under scrutiny because innocent pictures of their own children could trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are, or could be about to be, viewing explicit images. The feature is opt-in for the parent, and it is transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy executive so far. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels that it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services. That — as you point out — isn’t something that we’ve ever done: looking through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead, what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages feature and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place, and Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why do it? Because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does creating a framework to allow scanning and matching of on-device content open the door for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data, but we can give you a list of content that we’d like you to match, and if you can match it with this content you can match it with other content we want to search for’? How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key’?

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device-side component — and it has the device-side component, by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users’ data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple’s service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will just refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded, so trying to seek out even a single image from a person’s device or set of people’s devices won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping through a lot of hoops, including having Apple change its internal process to refer material that is not illegal the way known CSAM is, and we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts, including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos, is functioning if you’re not using iCloud Photos.

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features, the one global operating system that’s the same for all users who download it and install it, and so in that one property it is much more challenging even to target it to an individual user. On the server side that’s actually quite easy — trivial. Being able to have some of those properties by building it into the device, and ensuring it’s the same for all users with the feature enabled, gives a strong privacy property.

Secondly, you point out how use of on-device technology is privacy preserving, and in this case, that’s a representation that I would make to you, again: that it’s really the alternative to having users’ libraries processed on a server, which is less private.

The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not engaged in this illegal behavior; Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud library has to be processed as a result of this feature. Instead what we’re able to do is to create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents or learn anything about the images and users specifically for those that collect photos that match illegal, known CSAM hashes, and that’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in a clear, decrypted form and run by some routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing determined is which images match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing to an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — would be to trigger a manual review of an account doesn’t make much sense.

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome for an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built into whether or not that image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in one trillion accounts per year. So, consistent with the fact that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that the accounts we review are ones that, when we refer them to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
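
To make the role of the threshold concrete, here is a rough back-of-the-envelope sketch of how requiring multiple matches collapses the odds of falsely flagging an account. The per-photo false match rate, library size and threshold below are purely illustrative assumptions; Apple has not published its actual parameters beyond the one-in-one-trillion figure quoted above.

```python
# Back-of-the-envelope sketch of why a match threshold drives the false-flag
# rate down so sharply. The per-photo false match rate, library size and
# threshold below are illustrative assumptions, not Apple's actual parameters.
from math import exp, lgamma, log, log1p

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """log P(X = k) for X ~ Binomial(n, p), computed in log space for stability."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def prob_account_flagged(n_photos: int, p_false: float, threshold: int) -> float:
    """P(at least `threshold` innocent photos falsely match), assuming independent
    per-photo false matches; terms far past the threshold are negligible."""
    return sum(exp(log_binom_pmf(n_photos, k, p_false))
               for k in range(threshold, min(n_photos, threshold + 200) + 1))

# Illustrative: a 10,000-photo library and a one-in-a-million per-photo error rate.
print(prob_account_flagged(10_000, 1e-6, threshold=1))   # ~1e-2: single false hits happen
print(prob_account_flagged(10_000, 1e-6, threshold=10))  # ~1e-27: effectively never
```

The point is only the shape of the math: requiring ten independent false matches instead of one turns an already unlikely event into an astronomically unlikely one.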


Apple says it will begin scanning iCloud Photos for child abuse images

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content is cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with some resistance from security experts and privacy advocates, but also from users who are accustomed to an approach to security and privacy from Apple that most other companies don’t have.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned in a way that requires multiple steps before it ever makes it into the hands of Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. With a typical hash, modifying an image even slightly changes the hash and can prevent matching. Apple says NeuralHash instead tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.
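
As a rough illustration of what a perceptual hash does, here is a classic ‘average hash’ in Python. This is not NeuralHash, which is a neural-network-based system, but it shows the basic idea that visually similar images tend to produce identical or near-identical hashes, where a cryptographic hash would change completely. It assumes the Pillow imaging library is installed, and the file paths are placeholders.

```python
# Minimal sketch of a *perceptual* hash (average hash / aHash), to illustrate
# how visually similar images can map to the same or nearby hashes.
# This is only a conceptual stand-in for NeuralHash. Assumes Pillow is installed.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel depending
    on whether that pixel is brighter than the mean. Small edits (crops,
    recompression) tend to leave most bits unchanged."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests visually similar images."""
    return bin(a ^ b).count("1")

# Usage (file paths are placeholders):
# h1 = average_hash("photo.jpg")
# h2 = average_hash("photo_slightly_cropped.jpg")
# print(hamming_distance(h1, h2))  # usually small for near-duplicates
```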

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.
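
Private set intersection protocols come in several flavors; one common family relies on commutative blinding, where both sides ‘encrypt’ their hashed items with secret exponents so that matches can be detected without either side revealing its raw set. The toy sketch below illustrates only that commutativity idea, with deliberately insecure parameters and byte strings standing in for image hashes; it is not Apple’s construction.

```python
# Toy sketch of the commutative-blinding idea behind some private set
# intersection (PSI) protocols. NOT Apple's construction; the prime and
# inputs are illustrative only and far too small to be secure.
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; a real protocol uses a proper, vetted group

def hash_to_group(item: bytes) -> int:
    """Map an item (standing in for an image hash) into the group."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

def blind(value: int, key: int) -> int:
    return pow(value, key, P)

# Each side picks a secret blinding exponent.
client_key = secrets.randbelow(P - 2) + 1
server_key = secrets.randbelow(P - 2) + 1

server_set = [b"hashA", b"hashB", b"hashC"]  # known-CSAM hashes (illustrative)
client_set = [b"hashB", b"hashX"]            # a user's photo hashes (illustrative)

# The client blinds its items and sends them over; the server blinds them again.
double_blinded_client = [
    blind(blind(hash_to_group(x), client_key), server_key) for x in client_set
]
# Because exponentiation commutes, the server's items blinded with both keys
# (which a real protocol arranges via an extra message round) match on common elements.
double_blinded_server = {blind(blind(hash_to_group(y), server_key), client_key)
                         for y in server_set}

matches = [x for x, db in zip(client_set, double_blinded_client)
           if db in double_blinded_server]
print(matches)  # [b'hashB'] -- the intersection, found without comparing raw items
```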

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it to decrypt the contents only if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said — for example — that if a secret is split into a thousand pieces and the threshold is ten images of child abuse content, the secret can be reconstructed from any ten of those pieces.
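
Threshold secret sharing is a standard cryptographic technique; the best-known construction is Shamir’s scheme, in which the secret is the constant term of a random polynomial, each share is a point on that polynomial, and any threshold-sized subset of shares recovers the secret while fewer reveal essentially nothing. The sketch below illustrates that general technique with toy parameters; it is not Apple’s implementation.

```python
# Minimal sketch of Shamir's threshold secret sharing, to illustrate the
# "threshold" idea: a secret split into many shares can only be recovered
# once a minimum number of shares are combined. Toy parameters, not Apple's.
import random

PRIME = 2**61 - 1  # a prime field large enough for this toy example

def make_shares(secret: int, threshold: int, num_shares: int):
    """Split `secret` into points on a random polynomial of degree threshold-1;
    any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shares = make_shares(secret, threshold=10, num_shares=1000)

print(recover_secret(shares[:9]) == secret)                 # almost certainly False: 9 shares are not enough
print(recover_secret(random.sample(shares, 10)) == secret)  # True: any 10 shares work
```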


It’s at that point Apple can decrypt the matching images, manually verify the contents, disable a user’s account and report the imagery to NCMEC, which is then passed to law enforcement. Apple says this process is more privacy mindful than scanning files in the cloud as NeuralHash only searches for known and not new child abuse imagery. Apple said that there is a one in one trillion chance of a false positive, but there is an appeals process in place in the event an account is mistakenly flagged.

Apple has published technical details on its website about how NeuralHash works, which was reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have faced resistance against efforts to further shut out government access. Although data stored in iCloud is encrypted in a way that even Apple cannot access it, Reuters reported last year that Apple dropped a plan for encrypting users’ full phone backups to iCloud after the FBI complained that it would harm investigations.

The news about Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could result in their accounts getting flagged and shuttered, but Apple downplayed the concerns and said a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. at first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools across the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but will be a requirement if users do. After all, your device belongs to you but Apple’s cloud does not.


Javier Soltero, Google’s head of Workspace, will join us at TC Sessions: SaaS

When it comes to big SaaS products, few are bigger than Google Workspace (formerly known as GSuite). So it’s maybe no surprise that one of the first people we contacted to speak at our SaaS conference on October 27 was Google’s Javier Soltero.

Today, Puerto Rico-born Soltero is Google’s VP and GM in charge of Workspace, which has well over 2 billion users and consists of products like Gmail, Google Calendar, Docs, Sheets, Slides, Meet, Chat and Drive. Currently, Workspace is going through what may be one of its most important periods of change, too, with extensive new collaboration features and, for the first time, a paid individual plan. All of this, of course, is happening against the backdrop of the pandemic, which made remote collaboration tools and video chat services like Meet more important than ever.

All of that would be enough to make Soltero a good conversation partner for a SaaS event, but his background goes much further than that. He actually started his career as a software engineer at Netscape in the late ’90s and, after a few other engineering positions, co-founded his first startup, the monitoring service Hyperic, in 2004. Hyperic then merged with SpringSource, which was acquired by VMware, landing Soltero in the position of VMware’s CTO for its SaaS and Application Services.

It’s likely his next startup, the mobile-centric email startup Acompli, though, that you remember. Founded in mid-2013, Acompli was quickly acquired by Microsoft in late 2014 and essentially turned into Outlook Mobile. At Microsoft, Soltero rose through the ranks to become a corporate VP for its Office group and Cortana, before decamping to Google in 2019. Since then, he’s become the public face of GSuite/Workspace, and we’ll use our time with him to talk about the joys and challenges of managing a massive SaaS product, but also about what he learned from building products from the ground up.

Register today with a $75 early bird ticket and save $100 before prices go up. TC Sessions: SaaS takes place on October 27 and will feature chats with the leading minds in SaaS, networking, and startup demos.



An email sent by One Medical exposed hundreds of customers’ email addresses

Primary care company One Medical has apologized after it sent out an email that exposed hundreds of customers’ email addresses.

The email sent out by One Medical on Wednesday asked customers to “verify your email,” but one email seen by TechCrunch had more than 980 email addresses copied on the message. The cause: One Medical did not use the blind carbon copy (bcc:) field to mass email its customers, which would have hidden their email addresses from each other.
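
For reference, the fix is as simple as keeping recipient addresses out of the visible headers. A minimal sketch in Python, with placeholder addresses and SMTP host, might look like this:

```python
# Minimal sketch of a Bcc-style mass mailing so recipients cannot see each
# other's addresses. Addresses and SMTP host below are placeholders.
import smtplib
from email.message import EmailMessage

recipients = ["a@example.com", "b@example.com", "c@example.com"]

msg = EmailMessage()
msg["Subject"] = "Please verify your email"
msg["From"] = "noreply@example.com"
msg["To"] = "noreply@example.com"  # only a generic address appears in the headers
msg.set_content("Click the link below to verify your email address.")

with smtplib.SMTP("smtp.example.com") as server:
    # Recipients are passed only as envelope addresses (effectively Bcc),
    # so they never appear in headers that other recipients can read.
    server.send_message(msg, to_addrs=recipients)
```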

Several customers took to Twitter to complain, but also express sympathy for what was quickly chalked up to an obvious mistake. Some users reported varying numbers of email addresses on the email that they received.

We asked One Medical how many customers had their email addresses exposed and if the company plans to report the incident to state governments, as may be required under state data breach notification laws, but we did not immediately hear back.

In a brief statement posted to Twitter, One Medical acknowledged the mistake, saying: “We are aware emails were sent to some of our members that exposed recipient email addresses. We apologize if this has caused you concern, but please rest assured that we have investigated the root cause of this incident and confirmed that this was not caused by a security breach of our systems. We will take all appropriate actions to prevent this from happening again.”

On the scale of security lapses, this one is fairly low in impact — compared to a breach of passwords, or financial and health data. But the exposure of email addresses can still be used to identify customers of the company.

The San Francisco-based One Medical, backed by Google’s parent company Alphabet, went public last year just prior to the start of the pandemic.



ProtonMail gets a slick new look, as privacy tech eyes the mainstream

End-to-end encrypted email service ProtonMail has refreshed its design, updating with a cleaner look and a more customizable user interface — including the ability to pick from a bunch of themes (dark and contrasting versions are both in the mix).

Last month the Swiss company officially announced passing 50M users globally, as it turned seven years old. Over those years privacy tech has come a long way in terms of usability — which in turn has helped drive adoption.

ProtonMail’s full integration of PGP, for example, makes the gold standard of e2e encryption invisibly accessible to a mainstream internet user, providing them with a technical guarantee that the company itself cannot poke around in their stuff.

Its new look (see screenshot gallery below) is really just a cherry on the cake of that underlying end-to-end encryption — but as usage of its product continues to step up it’s necessarily paying more attention to design and user interface details…

Proton has also been busy building out a suite of productivity tools which it can cross-promote to webmail users, using the same privacy promise as its sales pitch (it talks about offering an “encrypted ecosystem”).

And while ProtonMail is a freemium product, which can be a red flag for digital privacy, Proton’s business has the credibility of always having had privacy engineering at its core. Its business model is to monetize via paying users — who it says are subsidizing the free tier of its tools.

One notable change to the refreshed ProtonMail web app is an app switcher that lets users quickly switch between (or indeed discover) its other apps: Proton Calendar and Proton Drive (an e2e encrypted cloud storage offering, currently still in beta).

The company also offers a VPN service, although it’s worth emphasizing that while Proton’s pledge is that it doesn’t track users’ web browsing, the service architecture of VPNs is different so there’s no technical ‘zero access’ guarantee here, as there is with Proton’s other products.

A difference of color in the icons Proton displays in the app switcher — where Mail, Calendar and Drive are colored purple like its wider brand livery and only the VPN is tinted green — is perhaps intended to represent that distinction.

Other tweaks to the updated ProtonMail interface include redesigned keyboard shortcuts which the company says makes it easier to check messages and quick filters to sort mails by read or unread status.

The company’s Import-Export app — to help users transfer messages so they can make the switch from another webmail provider — exited beta back in November.

Zooming out, adoption of privacy tech is growing for a number of reasons. As well as the increased accessibility and usability that’s being driven by developers of privacy tech tools like Proton, rising awareness of the risks around digital data breaches and privacy-hostile ad models is a parallel and powerful driver — to the point where iPhone maker Apple now routinely draws attention to rivals’ privacy-hostile digital activity in its marketing for iOS, seeking to put clear blue water between how it treats users’ data vs the data-mining competition.

Proton, the company behind ProtonMail, is positioned to benefit from the same privacy messaging. So it’s no surprise to see it making use of the iOS App Privacy disclosures introduced by Apple last year to highlight its own competitive distinction.

Here, for example, it’s pointing users’ attention to background data exchanges which underlie Google-owned Gmail and contrasting all those direct lines feeding into Google’s ad targeting business with absolutely no surveillance at all of ProtonMail users’ messages…

Comparison of the privacy disclosures of ProtonMail’s iOS app vs Gmail’s (Image credits: Proton)

Commenting on ProtonMail’s new look in a statement, Andy Yen, founder and CEO, added: “Your email is your life. It’s a record of your purchases, your conversations, your friends and loved ones. If left unprotected it can provide a detailed insight into your private life. We believe users should have a choice on how and with whom their data is shared. With the redesigned ProtonMail, we are offering an even easier way for users to take control of their data.”


Click Studios asks customers to stop tweeting about its Passwordstate data breach

Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, which allowed malicious hackers to push a malicious update to its flagship enterprise password manager Passwordstate to steal customer passwords.

Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The malicious update was designed to contact the attacker’s servers to retrieve malware designed to steal and send the password manager’s contents back to the attackers.

In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.

But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.

Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

In an update on its website, Click Studios said in a Wednesday advisory that customers are “requested not to post Click Studios correspondence on Social Media.” The email adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”

“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.

Besides a handful of advisories published by the company since the breach was discovered, the company has refused to comment or respond to questions.

It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents in a timely manner. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.

Click Studios chief executive Mark Sandford has not responded to repeated requests for comment by TechCrunch. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”

TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.


Messaging app Go SMS Pro exposed millions of users’ private photos and files

Go SMS Pro, one of the most popular messaging apps for Android, is exposing photos, videos and other files sent privately by its users. Worse, the app maker has done nothing to fix the bug.

Security researchers at Trustwave discovered the flaw in August and contacted the app maker with a 90-day deadline to fix the issue, as is standard practice in vulnerability disclosure to allow enough time for a fix. But after the deadline elapsed without hearing back, the researchers went public.

Trustwave shared its findings with TechCrunch this week.

When a Go SMS Pro user sends a photo, video or other file to someone who doesn’t have the app installed, the app uploads the file to its servers, and lets the user share a web address by text message so the recipient can see the file without installing the app. But the researchers found that these web addresses were sequential. In fact, any time a file was shared — even between app users — a web address would be generated regardless. That meant anyone who knew about the predictable web address could have cycled through millions of different web addresses to users’ files.
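
The underlying problem is a classic one: sequential identifiers are enumerable, so one shared link reveals the pattern for every other file. A short sketch, using a made-up URL scheme rather than Go SMS Pro’s actual one, shows the difference between guessable and unguessable links:

```python
# Sketch of why sequential identifiers are enumerable, and a safer alternative.
# The URL scheme below is made up for illustration; it is not Go SMS Pro's.
import secrets

def sequential_links(start_id: int, count: int):
    """With sequential IDs, a scraper can simply count upward from any known
    link and fetch every other user's file."""
    return [f"https://files.example.com/share/{start_id + i}" for i in range(count)]

def random_link() -> str:
    """Safer: an unguessable random token per file. With 128 bits of entropy,
    enumerating valid links by brute force is impractical."""
    return f"https://files.example.com/share/{secrets.token_urlsafe(16)}"

print(sequential_links(8761000, 3))  # three consecutive, trivially guessable URLs
print(random_link())                 # a link that cannot be derived from any other
```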

Go SMS Pro has more than 100 million installs, according to its listing in Google Play.

TechCrunch verified the researchers’ findings. In viewing just a few dozen links, we found a person’s phone number, a screenshot of a bank transfer, an order confirmation including someone’s home address, an arrest record, and far more explicit photos than we were expecting, to be quite honest.

Karl Sigler, senior security research manager at Trustwave, said while it wasn’t possible to target any specific user, any file sent using the app is vulnerable to public access. “An attacker can create scripts that could throw a wide net across all the media files stored in the cloud instance,” he said.

We had about as much luck getting a response from the app maker as the researchers. TechCrunch emailed two email addresses associated with the app. One email immediately bounced back saying the email couldn’t be delivered due to a full inbox. The other email was opened, according to our email open tracker, but a follow-up email was not.

Since you might now want a messaging app that protects your privacy, we have you covered.


Microsoft outage leaves users unable to access Office, Outlook, Teams

Microsoft said it’s investigating an authentication outage with Office 365, preventing users from accessing some of the company’s most widely used services, including Office.com, Outlook.com, and Teams.

The company’s status dashboard said the issue started at 2:25pm PT, and has impacted mostly consumer users across the globe for the last few hours. Some government users may also be impacted, the company said.

In a series of tweets, Microsoft said that it tried to fix the issue, but was forced to roll back its changes after the fix failed.

For now, Microsoft said it was “rerouting traffic to alternate infrastructure to improve the user experience while we continue to investigate the issue.”

But that leaves millions on the U.S. west coast and users in Australia still unable to access their online services.

TechCrunch will keep you posted with developments. In the meantime, feel free to catch up with some of the bigger stories of the day.



Apple said to soon offer subscription bundles combining multiple of its services

Apple is reportedly getting ready to launch new bundles of its various subscription services, according to Bloomberg. The bundled services packages, said to be potentially called ‘Apple One,’ will include Apple services including Apple Music, Apple Arcade, Apple TV+, Apple News+ and iCloud in a number of different tiered offerings, all for one fee that would be lower than subscribing to each individually.

Bloomberg says that these could launch as early as October, which is when the new iPhone is said to be coming to market. Different package options will include one entry-level offering with Apple Music and Apple TV+, alongside an upgrade option that adds Apple Arcade, and another that also includes Apple News+. A higher-priced option will also bundle in extra iCloud storage, according to the report, though Bloomberg also claims that these arrangements and plans could still change prior to launch.

While the final pricing isn’t included in the report, it does say that the aim is to save subscribers between $2 and $5 per month depending on the tier, vs. the standard cost of subscribing to those services currently. All subscriptions would also work with Apple’s existing Family Sharing system, meaning up to six members of a single household can have access through Apple’s existing shared family digital goods infrastructure.

Apple is also said to be planning to continue its strategy of bundling free subscriptions to its services with new hardware purchases – a tactic it used last year with the introduction of Apple TV+, which it offered free for a year to customers who bought recently-released Apple hardware.

Service subscription bundling is a move that a lot of Apple observers have been calling for basically ever since Apple started investing more seriously in its service options. The strategy makes a lot of sense, especially in terms of helping Apple boost adoption of its services which aren’t necessarily as popular as some of the others. It also provides a way for the company to begin to build out a more comprehensive and potentially stable recurring revenue business similar to something like Amazon Prime, which is a regular standout success story for Amazon in terms of its fiscal performance.


Gmail for G Suite gets deep integrations with Chat, Meet, Rooms and more

Google is launching a major update to its G Suite productivity tools today that will see a deep integration of Gmail, Chat, Meet and Rooms on the web and on mobile, as well as other tools like Calendar, Docs, Sheets and Slides. This integration will become available in the G Suite early adopter program, with a wider roll-out coming at a later time.

The G Suite team has been working on this project for about a year, though it fast-tracked the Gmail/Meet integration, which was originally scheduled to be part of today’s release, as part of its response to the COVID-19 pandemic.

At the core of today’s update is the idea that we’re all constantly switching between different modes of communication, be that email, chat, voice or video. So with this update, the company is bringing all of this together, with Gmail being the focal point for the time being, given that this is where most users already find themselves for hours on end anyway.

Google is branding this initiative as a ‘better home for work’ and in practice, it means that you’ll not just see deeper integrations between products, like a full calendaring and file management experience in Gmail, but also the ability to have a video chat open on one side of the window while collaboratively editing a document in real time on the other.

Image Credits: Google

According to G Suite VP and GM Javier Soltero, the overall idea here is not just to bring all of these tools closer together to reduce the task-switching that users have to do.

Image Credits: Google

“We’re announcing something we’ve been working on since a little bit before I even joined Google last year: a new integrated workspace designed to bring together all the core components of communication and collaboration into a single surface that is not just about bringing these ingredients into the same pane of glass, but also realizes something that’s greater than the sum of its parts,” he told me ahead of today’s announcement. “The degree of integration across the different modes of communication, specifically email, chat, and video calling and voice calling, along with our existing strength in collaboration.”

Just like on the web, Google already revealed some of these plans when it first announced its latest major update to Gmail for mobile in May, with its Meet integration taking the form of a new bar at the bottom of the screen for moving between Mail and Meet. Now, it’s expanding this to include native Chat and Rooms support as well. Soltero noted that Google thinks of these four products as the “four pillars of the integrated workspace.” Having them all integrated into a single app means you can manage the notification behavior of all of them in a single place, for example, and without the often cumbersome task-switching experience on mobile.

For now, these updates are specific to G Suite, though similar to Google’s work around bringing Meet to consumers, the company plans to bring this workspace experience to consumers as well, but what exactly that will look like still remains to be seen. “Right now we’re really focused. The people who urgently need this are those involved in productivity scenarios. This idea of ‘the new home for work’ is much more about collaboration that is specific to professional settings, productivity and workplace settings,” Soltero said.

But there is more…

Google is also announcing a few other feature updates to its G Suite line today. Chat rooms, for example, are now getting shared files and tasks, with the ability to assign tasks and to invite users from outside your company into rooms. These rooms now also let you have chats open on one side and edit a document on the other, all without switching to a completely different web app.

Also new is the ability in Gmail to search not just for emails but also chats, as well as new tools to pin important rooms and new ‘do not disturb’ and ‘out of office’ settings.

One nifty new feature of these new integrated workspaces is that Google is also working with some of its partners to bring their apps into the experience. The company specifically mentions DocuSign, Salesforce and Trello. These companies already offer some deep Gmail integrations, including integrations with the Gmail sidebar, so we’ll likely see this list expand over time.

Meet itself, too, is getting some updates in the coming weeks with ‘knocking controls’ to make sure that once you throw somebody out of a meeting, that person can’t come back, and safety locks that help meeting hosts decide who can chat or present in a meeting.



Apple will let users pick their own default email and browser apps

Apple quietly made a major announcement that will change life for users of mobile Chrome, Gmail or Outlook. The company is shifting its view on app defaults and will be allowing users to set different app defaults for their mail and browser apps.

The company specifically denoted that this feature is coming to iPadOS and iOS 14. This likely means users can designate which browser they’re directed to when they tap a link somewhere. We’ll see whether Apple reserves any functionality for its own services. Rather than highlighting this new feature in the keynote, they snuck it into roundup screens that hovered onscreen for a few seconds. It’s hidden in the bottom center of the screen.

This is a big change for Apple but it’s no surprise they wouldn’t opt to specifically highlight this onstage. Apple has been reluctant to give users the option to use third-party apps as defaults. The big exception to date has been allowing users early on to set Google Maps as the default over Apple Maps.

Email and browsing are huge mobile use cases and it’s surprising that users haven’t had this capability to shift defaults to apps like Chrome or Gmail until this upcoming update. As Apple finds itself at the center of more anti-trust conversations, app defaults has been one area that’s always popped up as a method by which Apple promotes its own services over those from other companies.

Details are scant in terms of what this feature will look like exactly and what services will boast support, but I imagine we’ll hear more as the betas begin rolling out.


Google brings Meet to Gmail on mobile

Google today announced a deeper integration between Gmail on mobile and its Meet video conferencing service. Now, if you use Gmail on Android or iOS and somebody sends you a link to a Meet event, you can join the meeting right from your inbox.

That obviously isn’t radically different from how things work today, where Gmail will take you right into the Meet app, but the major difference here is that you won’t have to install the dedicated Meet app anymore to join a call from Gmail.

The second and maybe bigger update — and this one won’t launch until a few weeks from now — is that the mobile Gmail app will also get a new Meet tab at the bottom of the screen. This new tab will show you all your upcoming Meet meetings in Google Calendar and will allow you to start a meeting, get a link to share or schedule a meeting in Calendar.

If you’re not a Meet power user, then you can turn this tab off, too, which I assume a lot of people will do, given that not everybody will want to give up screen real estate in their email app for a dedicated Meet button.

It’s interesting to see that Google is trying to bring Gmail and Meet so closely together. The act of moving between two different apps for email and meetings never felt like a burden, but Google clearly wants more people to be aware of Meet (especially now that it offers a free tier) and remove any friction that could keep potential users from using it. The company already integrated Meet into the Gmail web app, where it felt pretty natural given that Gmail on the web long featured support for Hangouts (RIP, I guess?) and its predecessors. On mobile, though, it feels a bit forced. Hangouts, after all, was never a built-in part of Gmail on mobile either.


Basecamp launches Hey, a hosted email service for neat freaks

Project management software maker Basecamp has launched a feature-packed hosted email service, called Hey — which they tout as taking aim at the traditional chaos and clutter of the email inbox.

Hey includes a built-in screener that asks users to confirm whether or not they want to receive email from a new address. Inbound emails a Hey user has consented to are then triaged into different trays: a central “imbox” (“im” standing for important) contains only the comms the user specifies as important to them; newsletters are intended to live in a News Feed-style tray, called The Feed (where they’re automatically displayed partially opened for easy casual reading); and email receipts get stacked up in a for-reference ‘Paper Trail’ inbox view.

Other notable features include baked-in tracking pixel blocking (with Hey acting like a VPN and sharing its own IP address with trackers, rather than email senders learning yours when you open a mail with embedded trackers); a handy-looking attachment library that lets you view all attachments you’ve ever received in one searchable place; and a ‘Reply Later’ feature that lets you tag emails you want to follow up on, teeing them up in a stack — clicking a ‘Focus & Reply’ button then displays all stacked emails in a single page so you can take a one-hit run at replying to everything you teed up earlier.
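
The tracking pixel protection works, broadly, by proxying remote images: the client rewrites image URLs in incoming mail so they load through Hey’s servers, and the sender’s tracker only ever sees the proxy. The sketch below illustrates that general proxying idea with a made-up /proxy route; it assumes Flask and requests are installed and is not Hey’s actual implementation.

```python
# Minimal sketch of the image-proxying idea behind tracking-pixel protection:
# remote images load through a server-side proxy, so a tracker sees the
# proxy's IP address rather than the reader's. Illustrative only; the /proxy
# route is made up and this is not Hey's actual implementation.
import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)

@app.route("/proxy")
def proxy_image():
    url = request.args.get("url")
    if not url or not url.startswith(("http://", "https://")):
        abort(400)
    # Fetch the remote image from the server, not the reader's device, so the
    # sender's tracking pixel only ever sees this server's IP and headers.
    upstream = requests.get(url, timeout=5)
    return Response(
        upstream.content,
        content_type=upstream.headers.get("Content-Type", "image/png"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```

In practice the mail client would rewrite an embedded image such as https://tracker.example/pixel.gif to load via the proxy endpoint before rendering the message.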

The software is the literal opposite of an MVP — with all sorts of organizational workflow style hacks baked in at launch, such as the ability to merge different email threads; rename email subjects; set up notifications for individual contacts; take clippings from within emails to save to a reference library; and attach your own sticky notes to emails as a way to keep further tabs on stuff you may want to revisit or remember.

Some other salient points: Hey is not free (they’re offering a free 14 day trial but pricing thereafter is a flat $99 per year billed in one go for 100GB storage; certain vanity email addresses may cost you more); Hey is not end-to-end encrypted (they make an up front promise that they’re not data mining your inbox but they do hold the keys to access your info); Hey does not support IMAP or POP, so Basecamp is giving the middle finger to standard email protocols — instead you’re tethered to using only Hey’s apps forever (hence they have apps for web, Mac, Windows, Linux, iPhone, iPad, and Android right now); nor can you import email from another webmail service.

Asked by a Twitter user about the lack of support for IMAP, Basecamp CTO David Heinemeier Hansson confirmed it will never be supported, writing that: “Our changes to email requires the vertical integration we’ve done.”

While custom domains are not available at launch, Heinemeier Hansson noted they are coming “later this year”. Also on the slate for the same timeframe: Hey for Business.

Right now, Basecamp is limiting sign ups to the free trial of Hey via a wait list plus invite system.

As of yesterday, it said there were more than 50,000 people on the wait list — warning it might take “a couple of weeks” before they’re ready to accept direct sign-ups.

In the meanwhile, for anyone keen on a closer look at Basecamp’s reorganized spin on email, founder and CEO Jason Fried has recorded the below video for a walkthrough tour of Hey’s features…


Superhuman CEO Rahul Vohra on waitlists, freemium pricing and future products

The “Sent via Superhuman iOS” email signature has become one of the strangest flexes in the tech industry, but its influence is enduring, as the $30 per month invite-only email app continues to shape how a wave of personal productivity startups are building their business and product strategies.

I had a chance to chat with Superhuman CEO and founder Rahul Vohra earlier this month during an oddly busy time for him. He had just announced a dedicated $7 million angel fund with his friend Todd Goldberg (which I wrote up here) and we also noted that LinkedIn is killing off Sales Navigator, a feature driven by Rapportive, which Vohra founded and later sold in 2012. All the while, his buzzy email company is plugging along, amassing more interested users. Vohra tells me there are now more than 275,000 people on the waitlist for Superhuman.

Below is a chunk of my conversation with Vohra, which has been edited for length and clarity.


TechCrunch: When you go out to raise funding and a chunk of your theoretical user base is sitting on a waitlist, is it a little tougher to determine the total market for your product?

Rahul Vohra: That’s a good question. When we were doing our Series B, it was very easily answered because we’re one of a cohort of companies, that includes Notion and Airtable and Figma, where the addressable market — assuming you can build a product that’s good enough — is utterly enormous.

With my last company, Rapportive, there was a lot of conversation around, “oh, what’s the business model? What’s the market? How many people need this?” This almost never came up in any fundraising conversation. People were more like, “well, if this thing works, obviously the market is basically all of prosumer productivity and that is, no matter how you define it, absolutely huge.”
