7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developer Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While the company was characteristically vocal about privacy during the Memoji-filled, two-hour-long(!) keynote, it also quietly introduced several new security- and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or in an app. Instead, you’ll simply pick a username, then use Face ID or Touch ID to confirm it’s you. The passkey is stored in your keychain and synced across your Apple devices using iCloud, so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon (Apple says the feature is still in its “early stages” and it’s currently disabled by default), the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services and, ultimately, phished. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all.”
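As a rough illustration of the ceremony Apple describes (pick a username at sign-up, then confirm a login with a tap), the sketch below models a passkey flow in Python. It is a simplification under stated assumptions: real WebAuthn uses an asymmetric key pair whose private half never leaves the device and is gated by Face ID or Touch ID, which the HMAC here merely stands in for. All class and variable names are hypothetical.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Models the device-side credential stored in iCloud Keychain."""
    def __init__(self):
        # In real WebAuthn this would be an asymmetric private key that
        # never leaves the device; an HMAC secret stands in for it here.
        self._key = secrets.token_bytes(32)

    def registration_data(self):
        # A real authenticator would hand the server a *public* key only.
        return self._key

    def sign(self, challenge):
        # Face ID / Touch ID would gate this step on a real device.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class RelyingParty:
    """Models the website or app you are signing in to."""
    def __init__(self):
        self._credentials = {}

    def register(self, username, key):
        self._credentials[username] = key

    def new_challenge(self):
        return secrets.token_bytes(16)  # fresh random nonce defeats replay

    def verify(self, username, challenge, assertion):
        expected = hmac.new(self._credentials[username], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, assertion)

device, site = Authenticator(), RelyingParty()
site.register("alice", device.registration_data())  # account creation: no password set
challenge = site.new_challenge()
assert site.verify("alice", challenge, device.sign(challenge))  # one-tap sign-in
```

Note there is nothing for the user to remember and, because each login signs a fresh server-issued challenge, nothing reusable for a phisher to capture.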

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app into another, without the second app being able to access the information on the clipboard until you actually paste it. This is a significant improvement over iOS 14, which would notify you when an app took data from the clipboard but did nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.
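The “deferred access” idea behind Secure Paste can be modeled in a few lines. This is purely an illustrative sketch, not Apple’s API: the clipboard simply refuses any read that isn’t triggered by an explicit paste gesture, which is exactly what would have stopped the snooping apps above.

```python
class SecureClipboard:
    """Toy model (not Apple's API): clipboard contents stay hidden from
    the receiving app until the user performs an explicit paste gesture."""

    def __init__(self):
        self._content = None

    def copy(self, content):
        self._content = content

    def paste(self, user_initiated):
        if not user_initiated:
            # Background reads, the iOS 14-era snooping, are refused.
            raise PermissionError("clipboard read requires a paste gesture")
        return self._content

clipboard = SecureClipboard()
clipboard.copy("4111 1111 1111 1111")      # sensitive data copied in app A
try:
    clipboard.paste(user_initiated=False)  # app B tries to snoop silently
except PermissionError:
    pass                                   # blocked: no user gesture
assert clipboard.paste(user_initiated=True) == "4111 1111 1111 1111"
```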

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin (the feature isn’t live in the first iOS 15 developer beta), Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes (the three-digit number you enter at checkout) when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ feature in iOS 14.5 that let users unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on ‌iOS 14‌ and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has long been a laborious task, requiring you to erase your device completely and then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”
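The trick Apple describes, destroying the keys rather than the data, is often called crypto-erasure, and a toy version is easy to sketch. The code below is illustrative only: SHA-256 in counter mode stands in for the Mac’s AES hardware, and the key variable stands in for material held by the Secure Enclave.

```python
import hashlib
import secrets

def keystream(key, length):
    """SHA-256 in counter mode; a stand-in for real disk-encryption hardware."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

disk_key = secrets.token_bytes(32)            # held by the Secure Enclave
plaintext = b"user files, settings, app data"
on_disk = xor(plaintext, keystream(disk_key, len(plaintext)))

# With the key present, reads decrypt transparently.
assert xor(on_disk, keystream(disk_key, len(on_disk))) == plaintext

# "Erase all contents and settings" just destroys the key: the bytes on
# disk are untouched but now indistinguishable from random noise.
disk_key = None
```

Because only the tiny key needs to be destroyed, the erase is effectively instant regardless of how much data is on the drive.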


ProtonMail gets a slick new look, as privacy tech eyes the mainstream

End-to-end encrypted email service ProtonMail has refreshed its design, updating with a cleaner look and a more customizable user interface — including the ability to pick from a bunch of themes (dark and contrasting versions are both in the mix).

Last month the Swiss company officially announced passing 50M users globally, as it turned seven years old. Over those years privacy tech has come a long way in terms of usability — which in turn has helped drive adoption.

ProtonMail’s full integration of PGP, for example, makes the gold standard of e2e encryption invisibly accessible to mainstream Internet users, providing them with a technical guarantee that the company cannot poke around in their stuff.

Its new look (see screenshot gallery below) is really just a cherry on the cake of that underlying end-to-end encryption — but as usage of its product continues to step up it’s necessarily paying more attention to design and user interface details…

Proton has also been busy building out a suite of productivity tools which it can cross-promote to webmail users, using the same privacy promise as its sales pitch (it talks about offering an “encrypted ecosystem”).

And while ProtonMail is a freemium product, which can be a red flag for digital privacy, Proton’s business has the credibility of always having had privacy engineering at its core. Its business model is to monetize via paying users — who it says are subsidizing the free tier of its tools.

One notable change to the refreshed ProtonMail web app is an app switcher that lets users quickly switch between (or indeed discover) its other apps: Proton Calendar and Proton Drive (an e2e encrypted cloud storage offering, currently still in beta).

The company also offers a VPN service, although it’s worth emphasizing that while Proton’s pledge is that it doesn’t track users’ web browsing, the service architecture of VPNs is different so there’s no technical ‘zero access’ guarantee here, as there is with Proton’s other products.

A difference of color in the icons Proton displays in the app switcher — where Mail, Calendar and Drive are colored purple like its wider brand livery and only the VPN is tinted green — is perhaps intended to represent that distinction.

Other tweaks to the updated ProtonMail interface include redesigned keyboard shortcuts, which the company says make it easier to check messages, and quick filters to sort mail by read or unread status.

The company’s Import-Export app, which helps users transfer messages so they can make the switch from another webmail provider, exited beta back in November.

Zooming out, adoption of privacy tech is growing for a number of reasons. As well as the increased accessibility and usability that’s being driven by developers of privacy tech tools like Proton, rising awareness of the risks around digital data breaches and privacy-hostile ad models is a parallel and powerful driver — to the point where iPhone maker Apple now routinely draws attention to rivals’ privacy-hostile digital activity in its marketing for iOS, seeking to put clear blue water between how it treats users’ data vs the data-mining competition.

Proton, the company behind ProtonMail, is positioned to benefit from the same privacy messaging. So it’s no surprise to see it making use of the iOS App Privacy disclosures introduced by Apple last year to highlight its own competitive distinction.

Here, for example, it’s pointing users’ attention to background data exchanges which underlie Google-owned Gmail and contrasting all those direct lines feeding into Google’s ad targeting business with absolutely no surveillance at all of ProtonMail users’ messages…

Comparison of the privacy disclosures of ProtonMail’s iOS app vs Gmail’s (Image credits: Proton)

Commenting on ProtonMail’s new look in a statement, Andy Yen, founder and CEO, added: “Your email is your life. It’s a record of your purchases, your conversations, your friends and loved ones. If left unprotected it can provide a detailed insight into your private life. We believe users should have a choice on how and with whom their data is shared. With the redesigned ProtonMail, we are offering an even easier way for users to take control of their data.”


Ring won’t say how many users had footage obtained by police

Ring gets a lot of criticism, not just for its massive surveillance network of home video doorbells and its problematic privacy and security practices, but also for giving that doorbell footage to law enforcement. While Ring is making moves towards transparency, the company refuses to disclose how many users had their data given to police.

The video doorbell maker, acquired by Amazon in 2018, has partnerships with at least 1,800 U.S. police departments (and growing) that can request camera footage from Ring doorbells. Prior to a change this week, any police department that Ring partnered with could privately request doorbell camera footage from Ring customers for an active investigation. Ring will now let its police partners publicly request video footage from users through its Neighbors app.

The change ostensibly gives Ring users more control over when police can access their doorbell footage, but it ignores the privacy concern that police can access users’ footage without a warrant.

Civil liberties advocates and lawmakers have long warned that police can obtain camera footage from Ring users through a legal back door because Ring’s sprawling network of doorbell cameras are owned by private users. Police can still serve Ring with a legal demand, such as a subpoena for basic user information, or a search warrant or court order for video content, assuming there is evidence of a crime.

Ring received over 1,800 legal demands during 2020, more than double the number from the year earlier, according to a transparency report that Ring quietly published in January. Ring does not disclose sales figures but says it has “millions” of customers. But the report leaves out context that most transparency reports include: how many users or accounts had footage given to police when Ring was served with a legal demand?

When reached, Ring declined to say how many users had footage obtained by police.

That number of users or accounts subject to searches is not inherently secret, but rather an obscure side effect of how companies decide, if at all, to disclose when the government demands user data. Though they are not obligated to, most tech companies publish transparency reports once or twice a year to show how often user data is obtained by the government.

Transparency reports were a way for companies subject to data requests to push back against damning allegations of intrusive bulk government surveillance by showing that only a fraction of a company’s users are subject to government demands.

But context is everything. Facebook, Apple, Microsoft, Google and Twitter all reveal how many legal demands they receive, but they also specify how many users or accounts had data turned over. In some cases, the number of users or accounts affected can be double, or even more than triple, the number of demands received.

Ring’s parent, Amazon, is a rare exception among the big tech giants: it does not break out the specific number of users whose information was turned over to law enforcement.

“Ring is ostensibly a security camera company that makes devices you can put on your own homes, but it is increasingly also a tool of the state to conduct criminal investigations and surveillance,” Matthew Guariglia, policy analyst at the Electronic Frontier Foundation, told TechCrunch.

Guariglia added that Ring could release not only the number of users subject to legal demands, but also how many users have previously responded to police requests through the app.

Ring users can opt out of receiving requests from police, but this option would not stop law enforcement from obtaining a legal order from a judge for your data. Users can also switch on end-to-end encryption to prevent anyone other than the user, including Ring, from accessing their videos.


The rise of cybersecurity debt

Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.

But once again, amid this flurry of activity, we must ask or answer a fundamental question about the state of our cybersecurity defense: Why does this keep happening?

I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.

Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.

How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.

We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.

First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.

There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.

The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.

Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.

Ensuring this requires confidential computing, which encrypts data at rest, in transit and in use. Confidential computing makes it technically impossible for anyone without the encryption key, including your cloud provider, to access the data. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones who hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.

Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.

I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.


For startups, trustworthy security means going above and beyond compliance standards

When it comes to meeting compliance standards, many startups are dominating the alphabet. From GDPR and CCPA to SOC 2, ISO27001, PCI DSS and HIPAA, companies have been charging toward meeting the compliance standards required to operate their businesses.

Today, every healthcare founder knows their product must meet HIPAA compliance, and any company working in the consumer space would be well aware of GDPR, for example.

But a mistake many high-growth companies make is treating compliance as a catchall phrase that includes security. This thinking can be an expensive and painful error. In reality, compliance means that a company meets a minimum set of controls. Security, on the other hand, encompasses a broad range of best practices and software that help address the risks associated with the company’s operations.

It makes sense that startups want to tackle compliance first. Being compliant plays a big role in any company’s geographical expansion to regulated markets and in its penetration into new industries like finance or healthcare. So in many ways, achieving compliance is a part of a startup’s go-to-market kit. And indeed, enterprise buyers expect startups to check the compliance box before signing on as their customer, so startups are rightfully aligning around their buyers’ expectations.

With all of this in mind, it’s not surprising that we’ve witnessed a trend where startups achieve compliance from the very early days and often prioritize this motion over developing an exciting feature or launching a new campaign to bring in leads, for instance.

Compliance is an important milestone for a young company and one that moves the cybersecurity industry forward. It forces startup founders to put security hats on and think about protecting their company, as well as their customers. At the same time, compliance provides comfort to the enterprise buyer’s legal and security teams when engaging with emerging vendors. So why is compliance alone not enough?

First, compliance doesn’t mean security (although it is a step in the right direction). More often than not, young companies are compliant yet vulnerable in their security posture.

What does this look like? For example, a software company may have met SOC 2 standards that require all employees to install endpoint protection on their devices, but it may not have a way to ensure employees actually activate and update the software. Furthermore, the company may lack a centrally managed tool for monitoring and reporting to see if any endpoint breaches have occurred, where, to whom and why. And, finally, the company may not have the expertise to quickly respond to and fix a data breach or attack.
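A toy audit script makes that gap concrete. Everything here (host names, fields, thresholds) is hypothetical; the point is that a checkbox compliance audit and live security monitoring can disagree about the very same fleet.

```python
# Hypothetical fleet inventory: every laptop has the endpoint agent
# *installed* (the SOC 2 box is ticked), but monitoring data shows it
# is not everywhere *active* or up to date.
endpoints = [
    {"host": "laptop-01", "agent_installed": True, "agent_active": True,  "defs_age_days": 2},
    {"host": "laptop-02", "agent_installed": True, "agent_active": False, "defs_age_days": 45},
    {"host": "laptop-03", "agent_installed": True, "agent_active": True,  "defs_age_days": 60},
]

def audit(endpoints, max_defs_age_days=7):
    # Compliance question: is the required software installed everywhere?
    compliant = all(e["agent_installed"] for e in endpoints)
    # Security question: which hosts are actually protected right now?
    secure = [e["host"] for e in endpoints
              if e["agent_active"] and e["defs_age_days"] <= max_defs_age_days]
    return compliant, secure

compliant, secure_hosts = audit(endpoints)
assert compliant                       # passes the checkbox audit...
assert secure_hosts == ["laptop-01"]   # ...but only one host is protected
```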

Therefore, although compliance standards are met, several security flaws remain. The end result is that startups can suffer security breaches that end up costing them a bundle. For companies with under 500 employees, the average security breach costs an estimated $7.7 million, according to a study by IBM, not to mention the brand damage and lost trust from existing and potential customers.

Second, an unforeseen danger for startups is that compliance can create a false sense of safety. Receiving a compliance certificate from objective auditors and renowned organizations could give the impression that the security front is covered.

Once startups start gaining traction and signing upmarket customers, that sense of security grows: if the startup managed to acquire security-minded customers from the Fortune 500, being compliant must be enough for now, and the startup is probably secure by association. When chasing enterprise deals, it’s the buyer’s expectations that push startups to achieve SOC 2 or ISO27001 compliance to satisfy the enterprise security threshold. But in many cases, enterprise buyers don’t ask sophisticated questions or dig deeper into understanding the risk a vendor brings, so startups are never really taken to task on their security systems.

Third, compliance deals only with a defined set of knowns. It doesn’t cover anything unknown and new since the last version of the regulatory requirements was written.

For example, APIs are growing in use, but regulations and compliance standards have yet to catch up with the trend. So an e-commerce company must be PCI-DSS compliant to accept credit card payments, but it may also leverage multiple APIs that have weak authentication or business logic flaws. When the PCI standard was written, APIs weren’t common, so they aren’t included in the regulations, yet now most fintech companies rely heavily on them. So a merchant may be PCI-DSS compliant, but use nonsecure APIs, potentially exposing customers to credit card breaches.
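A minimal sketch of that gap, with hypothetical data and keys: the card record itself may be stored in a PCI-compliant way, yet an API endpoint that skips authentication hands it to anyone who guesses an order ID.

```python
# Hypothetical merchant backend. The stored data is minimal (last four
# digits only), but the *insecure* endpoint below illustrates the kind
# of API flaw a PCI-DSS audit was never designed to catch.
ORDERS = {1001: {"customer": "alice", "card_last4": "4242"}}
VALID_KEYS = {"k-123"}  # stand-in for real, per-partner API credentials

def get_order_insecure(order_id):
    # No caller identity check at all: anyone who enumerates sequential
    # IDs can read every customer's order.
    return ORDERS[order_id]

def get_order_secure(order_id, api_key):
    # Same endpoint with authentication enforced before any data access.
    if api_key not in VALID_KEYS:
        raise PermissionError("missing or invalid API key")
    return ORDERS[order_id]

assert get_order_insecure(1001)["card_last4"] == "4242"  # leaks to anyone
assert get_order_secure(1001, "k-123")["customer"] == "alice"
```

Real-world fixes would go further (per-object authorization, rate limiting, non-sequential IDs), but even this first check is outside the scope of the card-handling standard.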

Startups are not to blame for the mix-up between compliance and security. It is difficult for any company to be both compliant and secure, and for startups with limited budget, time or security know-how, it’s especially challenging. In a perfect world, startups would be both compliant and secure from the get-go; it’s not realistic to expect early-stage companies to spend millions of dollars on bulletproofing their security infrastructure. But there are some things startups can do to become more secure.

One of the best ways startups can begin tackling security is with an early security hire. This team member might seem like a “nice to have” that you could put off until the company reaches a major headcount or revenue milestone, but I would argue that a head of security is a key early hire because this person’s job will be to focus entirely on analyzing threats and identifying, deploying and monitoring security practices. Additionally, startups would benefit from ensuring their technical teams are security-savvy and keep security top of mind when designing products and offerings.

Another tactic startups can take to bolster their security is to deploy the right tools. The good news is that startups can do so without breaking the bank; there are many security companies offering open-source, free or relatively affordable versions of their solutions for emerging companies to use, including Snyk, Auth0, HashiCorp, CrowdStrike and Cloudflare.

A full security rollout would include software and best practices for identity and access management, infrastructure, application development, resiliency and governance, but most startups are unlikely to have the time and budget necessary to deploy all pillars of a robust security infrastructure.

Luckily, there are resources like Security 4 Startups that offer a free, open-source framework for startups to figure out what to do first. The guide helps founders identify and solve the most common and important security challenges at every stage, providing a list of entry-level solutions as a solid start to building a long-term security program. In addition, compliance automation tools can help with continuous monitoring to ensure these controls stay in place.

For startups, compliance is critical for establishing trust with partners and customers. But if this trust is eroded after a security incident, it will be nearly impossible to regain it. Being secure, not only compliant, will help startups take trust to a whole other level and not only boost market momentum, but also make sure their products are here to stay.

So instead of equating compliance with security, I suggest expanding the equation to consider that compliance and security equal trust. And trust equals business success and longevity.


Skiff, an end-to-end encrypted alternative to Google Docs, raises $3.7M seed

Imagine if Google Docs was end-to-end encrypted so that not even Google could access your documents. That’s Skiff, in a nutshell.

Skiff is a document editor with a similar look and feel to Google Docs, allowing you to write, edit and collaborate in real-time with colleagues with privacy baked in. Because the document editor is built on a foundation of end-to-end encryption, Skiff doesn’t have access to anyone’s documents — only users, and those who are invited to collaborate, do.

It’s an idea that has already attracted the attention of investors. Skiff’s co-founders Andrew Milich (CEO) and Jason Ginsberg (CTO) announced today that the startup has raised $3.7 million in seed funding from venture firm Sequoia Capital, just over a year since Skiff was founded in March 2020. Alphabet chairman John Hennessy, former Yahoo chief executive Jerry Yang, and Eventbrite co-founders Julia and Kevin Hartz also participated in the round.

Milich and Ginsberg told TechCrunch that the company will use the seed funding to grow the team and build out the platform.

Underneath its document editor, Skiff isn’t that much different from WhatsApp or Signal, which are also end-to-end encrypted. “Instead of using it to send messages to a bunch of people, we’re using it to send little pieces of documents and then piecing those together into a collaborative workspace,” said Milich.

But the co-founders acknowledged that putting your sensitive documents in the cloud requires users to put a lot of trust in the startup, particularly one that hasn’t been around for long. That’s why Skiff published a whitepaper with technical details of how its technology works and has begun to open source parts of its code, allowing anyone to see how the platform works. Milich said Skiff has also gone through at least one comprehensive security audit, and the company counts advisors ranging from the Signal Foundation to Trail of Bits.

It seems to be working. In the months since Skiff soft-launched through an invite-only program, thousands of users — including journalists, research scientists and human rights lawyers — have been using Skiff every day, with another 8,000 on the waitlist.

“The group of users that we’re most excited about are just regular people that care about privacy,” said Ginsberg. “There are just so many privacy communities and people that are advocates for these types of products that really care about how they’re built and have sort of lost trust in big companies.”

“They’re using us because they’re really excited about the vision and the future of end-to-end encryption,” he said.


European Parliament amps up pressure on EU-US data flows and GDPR enforcement

European Union lawmakers are facing further pressure to step in and do something about lackadaisical enforcement of the bloc’s flagship data protection regime after the European Parliament voted yesterday to back a call urging the Commission to start an infringement proceeding against Ireland’s Data Protection Commission (DPC) for not “properly enforcing” the regulation.

The Commission and the DPC have been contacted for comment on the parliament’s call.

Last summer the Commission’s own two-year review of the General Data Protection Regulation (GDPR) highlighted a lack of uniformly vigorous enforcement — but commissioners were keener to point out the positives, lauding the regulation as a “global reference point”.

But it’s now nearly three years since the regulation began applying, and criticism over weak enforcement is getting harder for the EU’s executive to ignore.

The parliament’s resolution — which, while non-legally binding, fires a strong political message across the Commission’s bow — singles out the DPC for specific criticism given its outsized role in enforcement of the General Data Protection Regulation (GDPR). It’s the lead supervisory authority for complaints brought against the many big tech companies which choose to site their regional headquarters in the country (on account of its corporate-friendly tax system).

The text of the resolution expresses “deep concern” over the DPC’s failure to reach a decision on a number of complaints against breaches of the GDPR filed the day it came into application, on May 25, 2018 — including against Facebook and Google — and criticises the Irish data watchdog for interpreting ‘without delay’ in Article 60(3) of the GDPR “contrary to the legislators’ intention – as longer than a matter of months”, as they put it.

To date the DPC has only reached a final decision on one cross-border GDPR case — against Twitter.

The parliament also says it’s “concerned about the lack of tech specialists working for the DPC and their use of outdated systems” (which Brave also flagged last year) — as well as criticizing the watchdog’s handling of a complaint originally brought by privacy campaigner Max Schrems years before the GDPR came into application, which relates to the clash between EU privacy rights and US surveillance laws, and which still hasn’t resulted in a decision.

The DPC’s approach to handling Schrems’ 2013 complaint led to a 2018 referral to the CJEU — which in turn led to the landmark Schrems II judgement last summer invalidating the flagship EU-US data transfer arrangement, Privacy Shield.

That ruling did not outlaw alternative data transfer mechanisms but made it clear that EU DPAs have an obligation to step in and suspend data transfers if Europeans’ information is being taken to a third country that does not have essentially equivalent protections to those they have under EU law — thereby putting the ball back in the DPC’s court on the Schrems complaint.

The Irish regulator then sent a preliminary order to Facebook to suspend its data transfers and the tech giant responded by filing for a judicial review of the DPC’s processes. However the Irish High Court rejected Facebook’s petition last week. And a stay on the DPC’s investigation was lifted yesterday — so the DPC’s process of reaching a decision on the Facebook data flows complaint has started moving again.

A final decision could still take several months more, though — as we’ve reported before — as the DPC’s draft decision will also need to be put to the other EU DPAs for review and the chance to object.

The parliament’s resolution states that it “is worried that supervisory authorities have not taken proactive steps under Article 61 and 66 of the GDPR to force the DPC to comply with its obligations under the GDPR”, and — in more general remarks on the enforcement of GDPR around international data transfers — it states that it:

Is concerned about the insufficient level of enforcement of the GDPR, particularly in the area of international transfers; expresses concerns at the lack of prioritisation and overall scrutiny by national supervisory authorities with regard to personal data transfers to third countries, despite the significant CJEU case law developments over the past five years; deplores the absence of meaningful decisions and corrective measures in this regard, and urges the EDPB [European Data Protection Board] and national supervisory authorities to include personal data transfers as part of their audit, compliance and enforcement strategies; points out that harmonised binding administrative procedures on the representation of data subjects and admissibility are needed to provide legal certainty and deal with crossborder complaints;

The knotty, multi-year saga of Schrems’ Facebook data-flows complaint, as played out via the procedural twists of the DPC and Facebook’s lawyers’ delaying tactics, illustrates the multi-layered legal, political and commercial complexities bound up with data flows out of the EU (post-Snowden’s 2013 revelations of US mass surveillance programs) — not to mention the staggering challenge for EU data subjects to actually exercise the rights they have on paper. But these intersecting issues around international data flows do seem to be finally coming to a head, in the wake of the Schrems II CJEU ruling.

The clock is now ticking for the issuing of major data suspension orders by EU data protection agencies, with Facebook’s business first in the firing line.

Other US-based services that are — similarly — subject to the US’ FISA regime (and also move EU users’ data over the pond for processing; and whose businesses are such they cannot shield user data via ‘zero access’ encryption architecture) are equally at risk of receiving an order to shut down their EU-US data-pipes. Or else having to shift data processing for these users inside the EU.

US-based services aren’t the only ones facing increasing legal uncertainty, either.

The UK, post-Brexit, is also classed as a third country (in EU law terms). And in a separate resolution today the parliament adopted a text on the UK adequacy agreement, granted earlier this year by the Commission, which raises objections to the arrangement — including by flagging a lack of GDPR enforcement in the UK as problematic.

On that front the parliament highlights how adtech complaints filed with the ICO have failed to yield a decision. (It writes that it’s concerned “non-enforcement is a structural problem” in the UK — which it suggests has left “a large number of data protection law breaches… [un]remedied”.)

It also calls out the UK’s surveillance regime, questioning its compatibility with the CJEU’s requirements for essential equivalence — while also raising concerns about the risk that the UK could undermine protections on EU citizens data via onward transfers to jurisdictions the EU does not have an adequacy agreement with, among other objections.

The Commission put a four year lifespan on the UK’s adequacy deal — meaning there will be another major review ahead of any continuation of the arrangement in 2025.

It’s a far cry from the fifteen ‘hands-off’ years the EU-US ‘Safe Harbor’ agreement stood, before a Schrems challenge finally led to the CJEU striking it down back in 2015. So the takeaway here is that data deals that allow for people’s information to leave Europe aren’t going to be allowed to stand unchecked for years; close scrutiny and legal accountability are now firmly up front — and will remain in the frame going forward.

The global nature of the Internet and the ease with which data can digitally flow across borders of course brings huge benefits for businesses — but the resulting interplay between different legal regimes is leading to increasing levels of legal uncertainty for companies seeking to take people’s data across borders.

In the EU’s case, the issue is that data protection is regulated within the bloc and these laws require that protection travels with people’s information, no matter where it goes. So if the data flows to countries that do not offer the same safeguards — be that the US or indeed China or India (or even the UK) — then the risk is that it can’t, legally, be taken there.

How to resolve this clash, between data protection laws based on individual privacy rights and data access mandates driven by national security priorities, has no easy answers.

For the US, and for the transatlantic data flows between the EU and the US, the Commission has warned there will be no quick fix this time — as happened when it slapped a sticking plaster atop the invalidated Safe Harbor, hailing a new ‘Privacy Shield’ regime; only for the CJEU to blast that out of the water for much the same reasons a few years later. (The parliament resolution is particularly withering in its assessment of the Commission’s historic missteps there.)

For a fix to stick, major reform of US surveillance law is going to be needed. And the Commission appears to have accepted that’s not going to come overnight, so it seems to be trying to brace businesses for turbulence…

The parliament’s resolution on Schrems II also makes it clear that it expects DPAs to step in and cut off risky data flows — with MEPs writing that “if no arrangement with the US is swiftly found which guarantees an essentially equivalent and therefore adequate level of protection to that provided by the GDPR and the Charter, that these transfers will be suspended until the situation is resolved”.

So if DPAs fail to do this — and if Ireland keeps dragging its feet on closing out the Schrems complaint — they should expect more resolutions to be blasted at them from the parliament.

MEPs emphasize the need for any future EU-US data transfer agreement “to address the problems identified by the Court ruling in a sustainable manner” — pointing out that “no contract between companies can provide protection from indiscriminate access by intelligence authorities to the content of electronic communications, nor can any contract between companies provide sufficient legal remedies against mass surveillance”.

“This requires a reform of US surveillance laws and practices with a view to ensuring that access of US security authorities to data transferred from the EU is limited to what is necessary and proportionate, and that European data subjects have access to effective judicial redress before US courts,” the parliament adds.

It’s still true that businesses may be able to legally move EU personal data out of the bloc. Even, potentially, to the US — depending on the type of business; the data itself; and additional safeguards that could be applied.

However for data-mining companies like Facebook — which are subject to FISA and whose businesses rely on accessing people’s data — then achieving essential equivalence with EU privacy protections looks, well, essentially impossible.

And while the parliament hasn’t made an explicit call in the resolution for Facebook’s EU data flows to be cut off, that is the clear implication of its urging infringement proceedings against the DPC (and deploring “the absence of meaningful decisions and corrective measures” in the area of international transfers).

The parliament says it wants to see “solid mechanisms compliant with the CJEU judgement” set out — for the benefit of businesses with the chance to legally move data out of the EU — saying, for example, that the Commission’s proposal for a template for Standard Contractual Clauses (SCCs) should “duly take into account all the relevant recommendations of the EDPB”.

It also says it supports the creation of a tool box of supplementary measures for such businesses to choose from — in areas like security and data protection certification; encryption safeguards; and pseudonymisation — so long as the measures included are accepted by regulators.

It also wants to see publicly available resources on the relevant legislation of the EU’s main trading partners, to help businesses that may be able to legally move data out of the bloc get the guidance they need to do so in compliance.

The overarching message here is that businesses should buckle up for disruption of cross-border data flows — and tool up for compliance, where possible.

In another segment of the resolution, for example, the parliament calls on the Commission to “analyse the situation of cloud providers falling under section 702 of the FISA who transfers data using SCCs” — going on to suggest that support for European alternatives to US cloud providers may be needed to plug “gaps in the protection of data of European citizens transferred to the United States” and “reduce the dependence of the Union in storage capacities vis-à-vis third countries and to strengthen the Union’s strategic autonomy in terms of data management and protection”.


Proton, the privacy startup behind e2e encrypted ProtonMail, confirms passing 50M users

End-to-end encrypted email provider ProtonMail has officially confirmed it’s passed 50 million users globally as it turns seven years old.

It’s a notable milestone for a services provider that intentionally does not have a data business — opting instead for a privacy pledge based on zero access architecture that means it has no way to decrypt the contents of ProtonMail users’ emails.

Although, to be clear, the 50M+ figure applies to total users of all its products (which includes a VPN offering), not just users of its e2e encrypted email. (It declined to break out email users vs other products when we asked.)

Commenting in a statement, Andy Yen, founder and CEO, said: “The conversation about privacy has shifted surprisingly quickly in the past seven years. Privacy has gone from being an afterthought, to the main focus of a lot of discussions about the future of the Internet. In the process, Proton has gone from a crowdfunded idea of a better Internet, to being at the forefront of the global privacy wave. Proton is an alternative to the surveillance capitalism model advanced by Silicon Valley’s tech giants, that allows us to put the needs of users and society first.”

ProtonMail, which was founded in 2014, has diversified into offering a suite of products — including the aforementioned VPN and a calendar offering (Proton Calendar). A cloud storage service, Proton Drive, is also slated for public release later this year.

For all these products it claims to take the same ‘zero access’, hands-off approach to user data. Albeit, it’s a bit of an apples-and-oranges comparison to set e2e encrypted email against an encrypted VPN service — since the issue with VPN services is that they can see activity (i.e. where the encrypted or otherwise packets are going), and that metadata can add up to a log of your Internet activity (even with e2e encryption of the packets themselves).
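To make the metadata point concrete, here is a toy sketch (hostnames and payloads invented) of what a relay could log even when every payload is opaque to it:

```python
from collections import Counter

# Toy model: a VPN relay sees (destination, encrypted_payload) pairs.
# The payloads are opaque to it, yet the destinations alone are enough
# to reconstruct a log of browsing activity.
packets = [
    ("news.example.com", b"\x9f\x02\x1a"),
    ("mail.example.com", b"\x11\x8a\x40"),
    ("news.example.com", b"\x4c\x7d\x33"),
]

activity_log = Counter(dest for dest, _payload in packets)
print(activity_log.most_common())
# [('news.example.com', 2), ('mail.example.com', 1)]
```

This is why a VPN provider's no-logging promise is a policy claim you have to trust, rather than a technical guarantee like zero-access email encryption.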

Proton claims it doesn’t track or record its VPN users’ web browsing. And given its wider privacy-dependent reputation that’s at least a more credible claim vs the average VPN service. Nonetheless, you do still have to trust Proton not to do that (or be forced to do that by, for e.g., law enforcement). It’s not the same technical ‘zero access’ guarantee as it can offer for its e2e encrypted email.

Proton does also offer a free VPN — which, as we’ve said before, can be a red flag for data logging risk — but the company specifies that users of the paid version subsidize free users. So, again, the claim is zero logging but you still need to make a judgement call on whether to trust that.

From Snowden to 50M+

Over ProtonMail’s seven-year run privacy has certainly gained cachet as a brand promise — which is why you can now see data-mining giants like Facebook making ludicrous claims about ‘pivoting’ their people-profiling surveillance empires to ‘privacy’. So, as ever, PR that’s larded with claims of ‘respect for privacy’ demands very close scrutiny.

And while it’s clearly absurd for an adtech giant like Facebook to try to cloak the fact that its business model relies on stripping away people’s privacy with claims to the contrary, in Proton’s case the privacy claim is very strong indeed — since the company was founded with the goal of being “immune to large scale spying”. Spying such as that carried out by the NSA.

ProtonMail’s founding idea was to build a system “that does not require trusting us”.

Usage of e2e encryption has grown enormously since 2013, when disclosures by NSA whistleblower Edward Snowden revealed the extent of data gathering by government mass surveillance programs, which were shown (il)liberally tapping into Internet cables and mainstream digital services to grab people’s data without their knowledge or consent. That growth has certainly been helped by consumer-friendly services like ProtonMail making robust encryption far more accessible — but there are worrying moves by lawmakers in a number of jurisdictions that clash with the core idea and threaten access to e2e encryption.

In the wake of the Snowden disclosures, ‘Five Eyes’ countries steadily amped up international political pressure on e2e encryption. Australia, for example, passed an anti-encryption law in 2018 — which grants police powers to issue ‘technical notices’ to force companies operating on its soil to help the government hack, implant malware, undermine encryption or insert backdoors at the behest of the government.

Meanwhile, in 2016, the UK reaffirmed its surveillance regime — passing a law that gives the government powers to compel companies to remove or not implement e2e encryption. Under the Investigatory Powers Act, a statutory instrument called a Technical Capability Notice (TCN) can be served on comms services providers to compel decrypted access. (And as the ORG noted in April, there’s no way to track usage as the law gags providers from reporting anything at all about a TCN application, including that it even exists.)

More recently, UK ministers have kept up public pressure on e2e encryption — framing it as an existential threat to child protection. Simultaneously they are legislating — via an Online Safety Bill, out in draft earlier this month — to put a legally binding obligation on service providers to ‘prevent bad things from happening on the Internet’ (as the ORG neatly sums it up). And while still at the draft stage, private messaging services are in scope of that bill — putting the law on a potential collision course with messaging services that use e2e encryption.

The U.S., meanwhile, has declined to reform warrantless surveillance.

And if you think the EU is a safe space for e2e encryption, there are reasons to be concerned in continental Europe too.

EU lawmakers have recently made a push for what they describe as “lawful access” to encrypted data — without specifying exactly how that might be achieved, i.e. without breaking and/or backdooring e2e encryption and therefore undoing the digital security they also say is vital.

In a further worrying development, EU lawmakers have proposed automated scanning of encrypted communications services — aka a provision called ‘chatcontrol’ that’s ostensibly targeted at prosecuting those who share child exploitation content — which raises further questions over how such laws might intersect with ‘zero access’ services like ProtonMail.

The European Pirate Party has been sounding the alarm — and dubs the ‘chatcontrol’ proposal “the end of the privacy of digital correspondence” — warning that “securely encrypted communication is at risk”.

A plenary vote on the proposal is expected in the coming months — so where exactly the EU lands on that remains to be seen.

ProtonMail, meanwhile, is based in Switzerland which is not a member of the EU and has one of the stronger reputations for privacy laws globally. However the country also backed beefed-up surveillance powers in 2016 — extending the digital snooping capabilities of its own intelligence agencies.

It does also adopt some EU regulations — so, again, it’s not clear whether or not any pan-EU automated scanning of message content could end up being applied to services based in the country.

The threats to e2e encryption are certainly growing, even as usage of such properly private services keeps scaling.

Asked whether it has concerns, ProtonMail pointed out that the EU’s current temporary chatcontrol proposal is voluntary — meaning it would be up to the company in question to decide its own policy. Although it accepts there is “some support” in the Commission for the chatcontrol proposals to be made mandatory.

“It’s not clear at this time whether these proposals could impact Proton specifically [i.e. if they were to become mandatory],” a spokesperson told us. “The extent to which a Swiss company like Proton might be impacted by such efforts would have to be assessed based on the specific legal proposal. To our knowledge, none has been made for now.”

“We completely agree that steps have to be taken to combat the spread of illegal explicit material. However, our concern is that the forced scanning of communications would be an ineffective approach and would instead have the unintended effect of undermining many of the basic freedoms that the EU was established to protect,” he added. “Any form of automated content scanning is incompatible with end-to-end encryption and by definition undermines the right to privacy.”

So while Proton is rightly celebrating that a steady commitment to zero access infrastructure over the past seven years has helped its business grow to 50M+ users, there are reasons for all privacy-minded folk to be watchful of what the next years of political developments might mean for the privacy and security of all our data.

 


Google Cloud Run gets committed use discounts and new security features

Cloud Run, Google Cloud’s serverless platform for containerized applications, is getting committed use discounts. Users who commit to spending a given amount on Cloud Run for a year will get a 17% discount on the money they commit. The company offers a similar pre-commitment discount scheme for VM-based Compute Engine instances, as well as automatic “sustained use” discounts for machines that run for more than 25% of a month.
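The arithmetic is straightforward; as a quick sketch (the function name is ours, for illustration):

```python
def committed_use_cost(committed_spend: float, discount: float = 0.17) -> float:
    """Effective cost after Cloud Run's 17% committed-use discount.
    Illustrative only; actual billing terms are defined by Google Cloud."""
    return committed_spend * (1 - discount)

# Committing $10,000/year to Cloud Run:
print(committed_use_cost(10_000))  # 8300.0, i.e. $1,700 saved
```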

In addition, Google Cloud is also introducing a number of new security features for Cloud Run, including the ability to mount secrets from the Google Cloud Secret Manager and binary authorization to help define and enforce policies about how containers are deployed on the service. Cloud Run users can now also use and manage their own encryption keys (by default, Cloud Run uses Google-managed keys) and a new Recommendation Hub inside of Cloud Run will now offer users recommendations for how to better protect their Cloud Run services.

Aparna Sinha, who recently became the director of product management for Google Cloud’s serverless platform, noted that these updates are part of Google Cloud’s push to build what she calls the “next generation of serverless”.

“We’re really excited to introduce our new vision for serverless, which I think is going to help redefine this space,” she told me. “In the past, serverless has meant a certain narrower type of compute, which is focused on functions or a very specific kind of applications, web services, etc. — and what we are talking about with redefining serverless is focusing on the power of serverless, which is the developer experience and the ease of use, but broadening it into a much more versatile platform, where many different types of applications can be run, and building in the Google way of doing DevOps and security and a lot of integrations so that you have access to everything that’s the best of cloud.”

She noted that Cloud Run saw “tremendous adoption” during the pandemic, something she attributes to the fact that businesses were looking to speed up time-to-value from their applications. IKEA, for example, which famously had a hard time moving from in-store to online sales, bet on Google Cloud’s serverless platform to bring down the refresh time of its online store and inventory management system from three hours to less than three minutes after switching to this model.

“That’s kind of the power of serverless, I think, especially looking forward, the ability to build real-time applications that have data about the context, about the inventory, about the customer and can therefore be much more reactive and responsive,” Sinha said. “This is an expectation that customers will have going forward and serverless is an excellent way to deliver that as well as be responsive to demand patterns, especially when they’re changing so much in today’s uncertain environment.”

Since the container model gives businesses a lot of flexibility in what they want to run in these containers — and in how they want to develop these applications, since Cloud Run is language-agnostic — Google is now seeing a lot of other enterprises move to the platform as well, both to deploy completely new applications and to modernize some of their existing services.

For the companies that have predictable usage patterns, the committed use discounts should be an attractive option and it’s likely the more sophisticated organizations that are asking for the kinds of new security features that Google Cloud is introducing today.

“The next generation of serverless combines the best of serverless with containers to run a broad spectrum of apps, with no language, networking or regional restrictions,” Sinha writes in today’s announcement. “The next generation of serverless will help developers build the modern applications of tomorrow—applications that adapt easily to change, scale as needed, respond to the needs of their customers faster and more efficiently, all while giving developers the best developer experience.”


Telegram to add group video calls next month

Group video calls will be coming to Telegram’s messaging platform next month with what’s being touted as a fully featured implementation, including support for web-based videoconferencing.

Founder Pavel Durov made the announcement via a (text) message posted to his official Telegram channel today where he wrote “we will be adding a video dimension to our voice chats in May, making Telegram a powerful platform for group video calls”.

“Screen sharing, encryption, noise-cancelling, desktop and tablet support — everything you can expect from a modern video conferencing tool, but with Telegram-level UI, speed and encryption. Stay tuned!” he added, using the sorts of phrases you’d expect from an enterprise software maker.

Telegram often taunts rivals over their tardiness to add new features but on video calls it has been a laggard, only adding the ability to make one-on-one video calls last August — rather than prioritizing a launch of group video calls, as it had suggested it would a few months earlier.

In an April 2020 blog post, to mark passing 400M users, it wrote that the global lockdown had “highlighted the need for a trusted video communication tool” — going on to dub video calls in 2020 “much like messaging in 2013”.

However it also emphasized the importance of security for group video calling — and that’s perhaps what’s caused the delay.

(Another possibility is the operational distraction of needing to raise a large chunk of debt financing to keep funding development: Last month Telegram announced it had raised over $1BN by selling bonds — its earlier plan to monetize via a blockchain platform having hit the buffers in 2020.)

In the event, rather than rolling out group video calls toward the latter end of 2020, it’s going to be doing so almost halfway through 2021 — which has left videoconferencing platforms like Zoom to keep cleaning up during the pandemic-fuelled remote work and play boom (even as ‘Zoom fatigue’ has been added to our lexicon).

How secure Telegram’s implementation of group video calls will be, though, is an open question.

Durov’s post makes repeated mention of “encryption” — perhaps a subtle dig at Zoom’s own messy security claims history — but doesn’t specify whether group calls will use end-to-end encryption (we’ve asked).

Meanwhile Zoom does now offer e2e — and also has designs on becoming a platform in its own right, with apps and a marketplace, so there are a number of shifts in the comms landscape that could see the videoconferencing giant making deeper incursions into Telegram’s social messaging territory.

The one-to-one video calls Telegram launched last year were rolled out with its own e2e encryption — so presumably it will be replicating that approach for group calls.

However the MTProto encryption Telegram uses is custom-designed — and there’s been plenty of debate among cryptography experts over the soundness of its approach. So even if group calls are e2e encrypted there will be scrutiny over exactly how Telegram is doing it.

Also today, Durov touted two recently launched web versions of Telegram (not the first such versions by a long chalk, though) — adding that it’s currently testing “a functional version of web-based video calls internally, which will be added soon”.

He said the Webk and Webz versions of the web app are “by far the most cross-platform versions of Telegram we shipped so far”, noting that no downloads or installs are required to access your chats via the browser.

“This is particularly good for corporate environments where installing native apps is not always allowed, but also good for users who like the instant nature of web sites,” he added, with another little nod toward enterprise users.


Cape Privacy announces $20M Series A to help companies securely share data

Cape Privacy, the early stage startup that wants to make it easier for companies to share sensitive data in a secure and encrypted way, announced a $20 million Series A today.

Evolution Equity Partners led the round with participation from new investors Tiger Global Management, Ridgeline Partners and Downing Lane. Existing investors Boldstart Ventures, Version One Ventures, Haystack, Radical Ventures and a slew of individual investors also participated. The company has now raised approximately $25 million, including a $5 million seed investment we covered last June.

Cape Privacy CEO Ché Wijesinghe says that the product has evolved quite a bit since we last spoke. “We have really focused our efforts on encrypted learning, which is really the core technology, which was fundamental to allowing the multi-party compute capabilities between two organizations or two departments to work and build machine learning models on encrypted data,” Wijesinghe told me.

Wijesinghe says that a key business case involves a retail company owned by a private equity firm sharing data with a large financial services company, which uses the data to feed its machine learning models. When sharing customer data in this way, it’s essential to do so securely, and that is what Cape Privacy claims as its primary value proposition.
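Cape Privacy hasn’t published the details of its protocol, but one classic building block of the multi-party computation Wijesinghe describes is additive secret sharing, where each party holds a random-looking fragment of a value and joint results can be computed without anyone revealing their input. A minimal sketch (all names here are illustrative, not Cape’s API):

```python
import random

P = 2**61 - 1  # a large prime modulus; all arithmetic is done mod P

def share(value, n_parties=2):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two organizations each hold a private number; a single share alone
# is uniformly random and reveals nothing about the input.
a_shares = share(1200)   # org A's private value
b_shares = share(3400)   # org B's private value

# Each party adds the shares it holds locally; reconstructing the
# result yields the sum without either input being disclosed.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 1200 + 3400
```

Real systems layer protocols like this (plus homomorphic encryption) to train models on data no single party ever sees in the clear.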

He said that while the data sharing piece is the main focus of the company, it has data governance and compliance components to be sure that entities sharing data are doing so in a way that complies with internal and external rules and regulations related to the type of data.

While the company is concentrating on financial services for now because Wijesinghe has been working with these companies for years, he sees use cases far beyond a single vertical, including pharmaceuticals, government, healthcare, telco and manufacturing.

“Every single industry needs this and so we look at the value of what Cape’s encrypted learning can provide as really being something that can be as transformative and be as impactful as what SSL was for the adoption of the web browser,” he said.

Richard Seewald, founding and managing partner at lead investor Evolution Equity Partners likes that ability to expand the product’s markets. “The application in Financial Services is only the beginning. Cape has big plans in life sciences and government where machine learning will help make incredible advances in clinical trials and counter-terrorism for example. We anticipate wide adoption of Cape’s technology across many use cases and industries,” he said.

The company has recently expanded to 20 people and Wijesinghe, who is half Asian, takes DEI seriously. “We’ve been very, very deliberate about our DEI efforts, and I think one of the things that we pride ourselves in is that we do foster a culture of acceptance, that it’s not just about diversity in terms of color, race, gender, but we just hired our first non-binary employee,” he said.

Part of making people feel comfortable and included involves training so that fellow employees have a deeper understanding of the cultural differences. The company certainly has diversity across geographies with employees in 10 different time zones.

The company is obviously remote with a spread like that, but once the pandemic is over, Wijesinghe sees bringing people together on occasion with New York City as the hub for the company where people from all over the world can fly in and get together.

#cape-privacy, #data-sharing, #encryption, #enterprise, #evolution-equity-partners, #funding, #machine-learning, #recent-funding, #security, #startups


Enterprise security attackers are one password away from your worst day

If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.

Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.

Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes, and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.

Credentials are supposed to be the front gates of the castle, but as the SOC is failing to change, it is failing to detect. The cybersecurity industry must rethink its strategy to analyze how credentials are used and stop breaches before they become bigger problems.

It’s all about the credentials

Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has increased the attack footprint as organizations struggle to secure their network while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the organization grew by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.

It only takes one compromised account for an attacker to enter Active Directory and create their own credentials. In such an environment, all user accounts should be considered potentially compromised.

Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. The most effective and commonly used strategy is the credential stuffing attack, where digital adversaries break in, exploit the environment, then move laterally to gain higher-level access.
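One telltale signature of credential stuffing is a single source failing logins against many *distinct* accounts in a short window, as opposed to one user mistyping their own password. A deliberately naive detection sketch (the threshold and function names are illustrative, not any vendor’s product):

```python
from collections import defaultdict

# Flag a source IP once it has failed logins against this many
# distinct accounts; production systems tune this and add time windows.
FAIL_THRESHOLD = 5

failed_accounts_by_ip = defaultdict(set)

def record_failed_login(ip, username):
    """Record a failed attempt; return True once the IP looks like stuffing."""
    failed_accounts_by_ip[ip].add(username)
    return len(failed_accounts_by_ip[ip]) >= FAIL_THRESHOLD

# One IP cycling through six different accounts trips the detector;
# a single user failing against their own account does not.
events = [("10.0.0.9", f"user{i}") for i in range(6)]
flagged = any(record_failed_login(ip, user) for ip, user in events)
print("suspicious source" if flagged else "ok")  # prints "suspicious source"
```

Correlating failures by source and by distinct target account is exactly the kind of credential-usage analysis the SOC needs, rather than reviewing each alert in isolation.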

#column, #computer-security, #credential-stuffing, #crime, #cyberattack, #cybercrime, #cyberwarfare, #data-breach, #ec-column, #ec-cybersecurity, #encryption, #enterprise, #fireeye, #national-security-agency, #phishing, #security, #solarwinds


Messaging app Wire raises $21 million

Wire, the end-to-end encrypted messaging app and service, has raised a $21 million Series B funding round led by UVC Partners. As the company said a couple of years ago, it is focusing on the enterprise market more than ever.

While Wire started as a consumer app, it never managed to attract hundreds of millions of customers like other messaging apps. That doesn’t mean that Wire is a bad product.

The app lets you securely talk with other people using text messages, photos, videos and voice messages. You can also start a video call and send files to other users. Wire supports both one-to-one conversations and rooms.

Everything is end-to-end encrypted by default, which means that the company can’t decrypt your conversations, hand them over to a court or expose them to a potential hacker. You can also view the source code on GitHub.

In 2019, the company told TechCrunch that it would open a holding company in the U.S. to raise some funding. The idea was to double down on enterprise customers to find a clear path toward profitability. And this focus hasn’t changed since then.

“If I think back on the evolution of the business – three years ago we had zero revenue and zero customers – whereas today we’re announcing a B round and we have clearly established a well-recognised enterprise brand amongst the likes of Gartner, which is one of the things I am extremely proud of,” Wire’s CEO Morten Brogger told me.

“I also think that with the focus on a revenue-generating, enterprise business, we avoid situations like WhatsApp, where the only model you can ultimately turn to is monetising data,” he added.

And it seems to be working well when it comes to revenue growth. Right now, Wire has 1,800 customers. The number of customers has increased by almost 50% over the past year.

The company focuses on large customers, such as big corporations and government customers with a ton of potential users. Five G7 governments are currently using Wire. Overall, revenue has tripled in 2020.

In addition to working on Messaging Layer Security (MLS), Wire has been focused on improving conference calls and real-time interactions. The company believes messaging apps and real-time collaboration apps are slowly converging. And the startup wants to offer a service that works well across various scenarios.

You can also expect more end-to-end encrypted services in the collaboration space. Wire is still relatively small with 90 employees, which means it has room to grow and iterate.

#collaboration, #encryption, #europe, #fundings-exits, #messaging-app, #security, #startups, #wire


OpenSSL fixes high-severity flaw that allows hackers to crash servers


OpenSSL, the most widely used software library for implementing website and email encryption, has patched a high-severity vulnerability that makes it easy for hackers to completely shut down huge numbers of servers.

OpenSSL provides time-tested cryptographic functions that implement the Transport Layer Security protocol, the successor to Secure Sockets Layer that encrypts data flowing between Internet servers and end-user clients. People developing applications that use TLS rely on OpenSSL to save time and avoid programming errors that are common when noncryptographers build applications that use complex encryption.
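That reliance is often invisible: CPython’s standard `ssl` module, for example, is itself a wrapper over OpenSSL, so an ordinary TLS client inherits the library’s vetted cryptography without the application author writing any of it. A minimal sketch (hostname and timeout are arbitrary; the network call is guarded so the snippet also runs offline):

```python
import socket
import ssl

# The default context enables certificate verification and hostname
# checking -- the safe defaults applications get "for free" from OpenSSL.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED and context.check_hostname
print(ssl.OPENSSL_VERSION)  # the OpenSSL build this interpreter links against

hostname = "example.com"
try:
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())    # negotiated protocol, e.g. "TLSv1.3"
            print(tls.cipher()[0])  # negotiated cipher suite
except OSError as exc:
    print("no network available:", exc)
```

It is precisely because so many stacks delegate to one library this way that a single OpenSSL flaw can threaten huge numbers of servers at once.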

The crucial role OpenSSL plays in Internet security came into full view in 2014 when hackers began exploiting a critical vulnerability in the open-source code library that let them steal encryption keys, customer information, and other sensitive data from servers all over the world. Heartbleed, as the security flaw was called, demonstrated how a couple lines of faulty code could topple the security of banks, news sites, law firms, and more.


#biz-it, #encryption, #openssl, #ssl, #tech, #transport-layer-security


US claims seller of encrypted phones violated racketeering and drug laws


A US grand jury has indicted the CEO of a Canadian company that sells encrypted phones, alleging that he and an associate violated racketeering and drug laws. On Friday, the federal grand jury “returned an indictment against the Chief Executive Officer and an associate of the Canada-based firm Sky Global on charges that they knowingly and intentionally participated in a criminal enterprise that facilitated the transnational importation and distribution of narcotics through the sale and service of encrypted communications devices,” the Department of Justice said in a press release.

Sky Global CEO Jean-Francois Eap and Thomas Herdman, a former distributor of Sky Global devices, were charged with a conspiracy to violate the Racketeer Influenced and Corrupt Organizations Act (RICO), a law designed to punish organized crime. They were also charged with a conspiracy to distribute illegal drugs and aiding and abetting. The racketeering and drug counts each carry maximum penalties of life in prison, the DOJ said. The US is seeking criminal convictions and forfeiture of “at least $100,000,000” worth of assets.

The indictment is available in this Motherboard article.


#encryption, #policy, #sky-global


Following backlash, WhatsApp to roll out in-app banner to better explain its privacy update

Last month, Facebook-owned WhatsApp announced it would delay enforcement of its new privacy terms, following a backlash from confused users which later led to a legal challenge in India and various regulatory investigations. WhatsApp users had misinterpreted the privacy updates as an indication that the app would begin sharing more data — including their private messages — with Facebook. Today, the company is sharing the next steps it’s taking to try to rectify the issue and clarify that’s not the case.

The mishandling of the privacy update on WhatsApp’s part led to widespread confusion and misinformation. In reality, WhatsApp had been sharing some information about its users with Facebook since 2016, following its acquisition by Facebook.

But the backlash is a solid indication of how much user trust Facebook has squandered. People immediately suspected the worst, and millions fled to alternative messaging apps, like Signal and Telegram, as a result.

Following the outcry, WhatsApp attempted to explain that the privacy update was actually focused on optional business features on the app, which allow businesses to see the content of messages between them and the end user, and give those businesses permission to use that information for their own marketing purposes, including advertising on Facebook. WhatsApp also said it labels conversations with businesses that are using hosting services from Facebook to manage their chats with customers, so users are aware.

Image Credits: WhatsApp

In the weeks since the debacle, WhatsApp says it spent time gathering user feedback and listening to concerns from people in various countries. The company found that users wanted assurance that WhatsApp was not reading their private messages or listening to their conversations, and that their communications were end-to-end encrypted. Users also said they wanted to know that WhatsApp wasn’t keeping logs of who they were messaging or sharing contact lists with Facebook.

These latter concerns seem valid, given that Facebook recently made its messaging systems across Facebook, Messenger and Instagram interoperable. One has to wonder when similar integrations will make their way to WhatsApp.

Today, WhatsApp says it will roll out new communications to users about the privacy update, which follows the Status update it offered back in January aimed at clarifying points of confusion. (See below).

Image Credits: WhatsApp

In a few weeks, WhatsApp will begin to roll out a small, in-app banner that will ask users to re-review the privacy policies — a change the company says users prefer to the full-screen pop-up alert it displayed before.

When users click to review, they’ll be shown a deeper summary of the changes, including added details about how WhatsApp works with Facebook. The changes stress that WhatsApp’s update doesn’t impact the privacy of users’ conversations, and reiterate the information about the optional business features.

Eventually, WhatsApp will begin to remind users to review and accept its updates to keep using WhatsApp. According to its prior announcement, it won’t be enforcing the new policy until May 15.

Image Credits: WhatsApp

Users will still need to be aware that their communications with businesses are not as secure as their private messages. This impacts a growing number of WhatsApp users, 175 million of whom now communicate with businesses on the app, WhatsApp said in October.

In today’s blog post about the changes, WhatsApp also took a big swipe at rival messaging apps that used the confusion over the privacy update to draw in WhatsApp’s fleeing users by touting their own app’s privacy.

“We’ve seen some of our competitors try to get away with claiming they can’t see people’s messages – if an app doesn’t offer end-to-end encryption by default that means they can read your messages,” WhatsApp’s blog post read.

This seems to be a comment directed specifically at Telegram, which often touts its “heavily encrypted” messaging app as a more private alternative. But Telegram doesn’t offer end-to-end encryption by default, as apps like WhatsApp and Signal do. It uses “transport layer” encryption that protects the connection from the user to the server, a Wired article citing cybersecurity professionals explained in January. When users want an end-to-end encrypted experience for their one-on-one chats, they can enable the “secret chats” feature instead. (And this feature isn’t even available for group chats.)

In addition, WhatsApp fought back against the characterization that it’s somehow less safe because it has some limited data on users.

“Other apps say they’re better because they know even less information than WhatsApp. We believe people are looking for apps to be both reliable and safe, even if that requires WhatsApp having some limited data,” the post read. “We strive to be thoughtful on the decisions we make and we’ll continue to develop new ways of meeting these responsibilities with less information, not more,” it noted.

#apps, #encryption, #facebook, #messaging-apps, #policy, #privacy, #security, #signal, #social-media, #tc, #telegram, #whatsapp


Duality scores $14M DARPA contract for hardware-accelerated homomorphic encryption

Training AIs is essential to today’s tech sector, but handling the amount of data needed to do so is intrinsically dangerous. DARPA hopes to change that by tapping the encryption experts at Duality to create a hardware-accelerated method of using large quantities of data without decrypting it — a $14.5 million contract.

Duality specializes in what’s called fully homomorphic encryption. Without descending into the technical details, the main issue with everyday encryption methods — though it’s also sort of the point of them — is that they render the encrypted data totally unreadable, essentially noise unless you have the key to reverse the process. Doing that is computationally expensive with large datasets, and of course once the data is in the clear, it’s vulnerable to hackers, abuse, and other dangers.

There are methods, however, of encrypting data such that it can be analyzed and manipulated without decrypting it, and one of those is fully homomorphic encryption. Unfortunately, FHE is even more computationally intense than ordinary encryption, ruling it out for applications where gigabytes or terabytes of data are called for. There are other methods of accomplishing the same ends, but no one would cry if FHE suddenly became ten times easier.
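The core idea, computing on ciphertexts so the result decrypts to the answer, can be seen in miniature with textbook RSA, which is *multiplicatively* homomorphic: multiplying two ciphertexts multiplies the underlying plaintexts. This is a toy with deliberately tiny parameters, not an FHE scheme (FHE supports arbitrary computation, and real schemes like those in PALISADE work very differently), but it shows the principle:

```python
# Toy demonstration of computing on encrypted data via textbook RSA.
# The tiny primes are for illustration only -- never use them in practice.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(7), enc(6)
c_product = (c1 * c2) % n        # multiply the ciphertexts only...
assert dec(c_product) == 7 * 6   # ...and the plaintext product falls out
```

The catch, and the reason for DARPA’s interest in hardware acceleration, is that schemes supporting both addition and multiplication on encrypted data are orders of magnitude more expensive than this single modular multiplication.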

DARPA is as interested as anyone else in this field, though it has considerably deeper pockets than your garden variety encryption wonk. This contract is part of a broader effort called DPRIVE, or Data Protection in Virtual Environments, and the stated goal is to develop a special purpose chip — an ASIC pre-assigned the code name TREBUCHET — to accelerate FHE by, hopefully, an order of magnitude or more.

The Duality team will bring in experts from USC, NYU, CMU, SpiralGen, Drexel University, and TwoSix Labs. The company has been in the game for a long time and has actually worked with DARPA before, so this is not new territory for them.

“Duality team members have been supporting DARPA-funded innovation and application of FHE for over a decade. Some members of our team developed the first ever prototype HE hardware accelerators under the DARPA PROCEED program starting in 2010 and are lead developers for the PALISADE open source FHE library, first developed for the DARPA SAFEWARE program in 2015,” said Duality Labs director and principal investigator for the contract, David Bruce Cousins, in a press release.

As you can see, they’re not short on acronyms either.

It’s not totally clear what the timeline is on this, but considering the state of the technologies involved, I wouldn’t expect results for at least two or three years.

#darpa, #duality, #duality-technologies, #encryption, #fully-homomorphic-encryption, #government, #homomorphic-encryption, #privacy


Signal and Telegram are also growing in China – for now

As fears over WhatsApp’s privacy policies send millions of users in the West to Signal and Telegram, the two encrypted apps are also seeing a slight user uptick in China, where WeChat has long dominated and the government has a tight grip on online communication.

Following WhatsApp’s pop-up notification reminding users that it shares their data with its parent Facebook, people began fleeing to alternative encrypted platforms. Telegram added 25 million users between January 10 and 13, the company said on its official Telegram channel, while Signal surged to the top of the App Store and Google Play Store in dozens of countries, TechCrunch learned earlier.

The migration was accelerated when, on January 7, Elon Musk urged his 40 million Twitter followers to install Signal in a tweet that likely stoked more interest in the end-to-end encryption messenger.

The growth of Telegram and Signal in China isn’t nearly as remarkable as their soaring popularity in regions where WhatsApp has been the mainstream chat app, but the uplift is a reminder that WeChat alternatives still exist in China in various capacities.

Signal amassed 9,000 new downloads from the China App Store between January 8 and 12, up 500% from the period between January 3 and 7, according to data from research firm Sensor Tower. Telegram added 17,000 downloads during January 8-12, up 6% from the January 3-7 duration. WhatsApp’s growth stalled, recording 10,000 downloads in both periods.

Sensor Tower estimates that Telegram has seen about 2.7 million total installs on China’s App Store, compared to 458,000 downloads for Signal and 9.5 million for WhatsApp.

The fact that Telegram, Signal, and WhatsApp are accessible in China might come as a surprise to some people. But China’s censorship decisions can be arbitrary and inconsistent. As censorship monitoring site Apple Censorship shows, all major Western messengers are still available on the China App Store.

The situation for Android is trickier. Google services are largely blocked in China, so Android users turn to app stores operated by local companies like Tencent and Baidu. Neither Telegram nor Signal is available on these third-party Android stores, but users with a tool that can bypass China’s Great Firewall, such as a virtual private network (VPN), can access Google Play and install the encrypted messengers.

The next challenge is actually using these apps. The major chat apps all get slightly different treatment from Beijing’s censorship apparatus. Some, like Signal, work perfectly without the need for a VPN. Users have reported that WhatsApp occasionally works in China without a VPN, though it loads very slowly. And Facebook doesn’t work at all without a VPN.

“Some websites and apps can remain untouched until they reach a certain threshold of users at which point the authorities will try to block or disrupt the website or app,” said Charlie Smith, the pseudonymous head of Great Fire, an organization monitoring the Chinese internet that also runs Apple Censorship.

“Perhaps before this mass migration from WhatsApp, Signal did not have that many users in China. That might have changed over the last week in which case the authorities could be pondering restrictions for Signal,” Smith added.

To legally operate in China, companies must store their data within China and submit information to the authorities for security spot-checks, according to a cybersecurity law enacted in 2017. Apple, for instance, partners with a local cloud provider to store the data of its Chinese users.

The requirement raises questions about the type of interaction that Signal, Telegram, and other foreign apps have with the Chinese authorities. Signal said it never turned over data to the Hong Kong police and had no data to turn over when concerns grew over Beijing’s heightened controls over the former British colony.

The biggest challenges for apps like Signal in China, according to Smith, will come from Apple, which is constantly under fire from investors and activists for submitting to the Chinese authorities.

In recent years, the American giant has stepped up app crackdown in China, zeroing in on services that grant Chinese users access to unfiltered information, such as VPN providers, RSS feed readers and podcast apps. Apple has also purged tens of thousands of unlicensed games in recent quarters after a years-long delay.

“Apple has a history of pre-emptively censoring apps that they believe the authorities would want censored,” Smith observed. “If Apple decides to remove Signal in China, either on its own initiative or in direct response to a request from the authorities, then Apple customers in China will be left with no secure messaging options.”

#apple, #asia, #beijing, #china, #encryption, #facebook, #firewall, #google-play-store, #government, #great-fire, #messenger, #tc, #telegram, #tencent, #vpn, #wechat, #whatsapp


Twitter’s vision of decentralization could also be the far-right’s internet endgame

This week, Twitter CEO Jack Dorsey finally responded publicly to the company’s decision to ban President Trump from its platform, writing that Twitter had “faced an extraordinary and untenable circumstance” and that he did not “feel pride” about the decision. In the same thread, he took time to call out a nascent Twitter-sponsored initiative called “bluesky,” which is aiming to build up an “open decentralized standard for social media” that Twitter is just one part of.

Researchers involved with bluesky reveal to TechCrunch an initiative still in its earliest stages that could fundamentally shift the power dynamics of the social web.

Bluesky is aiming to build a “durable” web standard that will ultimately ensure that platforms like Twitter have less centralized responsibility in deciding which users and communities have a voice on the internet. While this could protect speech from marginalized groups, it may also upend modern moderation techniques and efforts to prevent online radicalization.

Image: Jack Dorsey, co-founder and chief executive officer of Twitter Inc., arrives after a break during a House Energy and Commerce Committee hearing in Washington, D.C., on September 5, 2018, at which Republicans pressed him on the alleged “shadow-banning” of conservatives. Photographer: Andrew Harrer/Bloomberg via Getty Images

What is bluesky?

Just as Bitcoin lacks a central bank to control it, a decentralized social network protocol operates without central governance, meaning Twitter would only control its own app built on bluesky, not other applications on the protocol. The open and independent system would allow applications to see, search and interact with content across the entire standard. Twitter hopes that the project can go far beyond what the existing Twitter API offers, enabling developers to create applications with different interfaces or methods of algorithmic curation, potentially paying entities across the protocol like Twitter for plug-and-play access to different moderation tools or identity networks.

A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.

Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.

Bluesky’s current scope is firmly in the research phase, people involved tell TechCrunch, with about 40-50 active members from different factions of the decentralized tech community surveying the software landscape and putting together proposals for what the protocol should ultimately look like. Twitter has told early members that it hopes to hire a project manager in the coming weeks to build out an independent team that will start crafting the protocol itself.

Bluesky’s initial members were invited by Twitter CTO Parag Agrawal early last year. It was later determined that the group should open the conversation up to people representing some of the more recognizable decentralized network projects, including Mastodon and ActivityPub, who joined the working group hosted on the secure chat platform Element.

Jay Graber, founder of decentralized social platform Happening, was paid by Twitter to write up a technical review of the decentralized social ecosystem, an effort to “help Twitter evaluate the existing options in the space,” she tells TechCrunch.

“If [Twitter] wanted to design this thing, they could have just assigned a group of guys to do it, but there’s only one thing that this little tiny group of people could do better than Twitter, and that’s not be Twitter,” said Golda Velez, another member of the group who works as a senior software engineer at Postmates and co-founded civ.works, a privacy-centric social network for civic engagement.

The group has had some back and forth with Twitter executives on the scope of the project, eventually forming a Twitter-approved list of goals for the initiative. They define the challenges that the bluesky protocol should seek to address while also laying out what responsibilities are best left to the application creators building on the standard.

A Twitter spokesperson declined to comment.


Who is involved

The pain points enumerated in the document, viewed by TechCrunch, encapsulate some of Twitter’s biggest shortcomings. They include “how to keep controversy and outrage from hijacking virality mechanisms,” as well as a desire to develop “customizable mechanisms” for moderation, though the document notes that the applications, not the overall protocol, are “ultimately liable for compliance, censorship, takedowns etc.”

“I think the solution to the problem of algorithms isn’t getting rid of algorithms — because sorting posts chronologically is an algorithm — the solution is to make it an open pluggable system by which you can go in and try different algorithms and see which one suits you or use the one that your friends like,” says Evan Henshaw-Plath, another member of the working group. He was one of Twitter’s earliest employees and has been building out his own decentralized social platform called Planetary.
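The pluggable system Henshaw-Plath describes can be sketched as nothing more than an agreed-upon interface: a ranking algorithm is a function from posts to an ordered list, and clients swap implementations freely. The names and fields below are invented for illustration, not bluesky’s actual design:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    timestamp: int   # seconds since epoch
    likes: int

# A ranking algorithm is just a function from posts to an ordered list;
# even "chronological" is one algorithm among many.
RankFn = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def most_liked(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render_feed(posts: List[Post], rank: RankFn) -> List[str]:
    """A client renders the same posts under whichever algorithm it plugs in."""
    return [p.author for p in rank(posts)]

posts = [Post("alice", 100, 3), Post("bob", 200, 1), Post("carol", 150, 9)]
print(render_feed(posts, chronological))  # ['bob', 'carol', 'alice']
print(render_feed(posts, most_liked))     # ['carol', 'alice', 'bob']
```

In a protocol like the one bluesky envisions, the feed data would be shared across the network while each application chooses (or lets its users choose) which ranking function to apply.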

His platform is based on the Secure Scuttlebutt protocol, which allows users to browse networks offline in an encrypted fashion. Early on, Planetary had been in talks with Twitter for a corporate investment as well as a personal investment from CEO Jack Dorsey, Henshaw-Plath says, but the competitive nature of the platform prompted some concern among Twitter’s lawyers, and Planetary ended up receiving an investment from Twitter co-founder Biz Stone’s venture fund, Future Positive. Stone did not respond to interview requests.

After agreeing on goals, Twitter had initially hoped for the broader team to arrive at some shared consensus but starkly different viewpoints within the group prompted Twitter to accept individual proposals from members. Some pushed Twitter to outright adopt or evolve an existing standard while others pushed for bluesky to pursue interoperability of standards early on and see what users naturally flock to.

One of the developers in the group hoping to bring bluesky onto their standard was Mastodon creator Eugen Rochko who tells TechCrunch he sees the need for a major shift in how social media platforms operate globally.

“Banning Trump was the right decision though it came a little bit too late. But at the same time, the nuance of the situation is that maybe it shouldn’t be a single American company that decides these things,” Rochko tells us.

Like several of the other members in the group, Rochko has been skeptical at times about Twitter’s motivation with the bluesky protocol. Shortly after Dorsey’s initial announcement in 2019, Mastodon’s official Twitter account tweeted out a biting critique, writing, “This is not an announcement of reinventing the wheel. This is announcing the building of a protocol that Twitter gets to control, like Google controls Android.”

Today, Mastodon is arguably one of the most mature decentralized social platforms. Rochko claims that the network of decentralized nodes has more than 2.3 million users spread across thousands of servers. In early 2017, the platform had its viral moment on Twitter, prompting an influx of “hundreds of thousands” of new users alongside some inquisitive potential investors whom Rochko has rebuffed in favor of a donation-based model.

Inherent risks

Not all of the attention Rochko has garnered has been welcome. In 2019, Gab, a social network favored by right-wing extremists, brought its entire platform onto the Mastodon network after integrating the platform’s open-source code, bringing Mastodon its single biggest wave of users and its most undesirable liability all at once.

Rochko quickly disavowed the network, aimed to sever its ties to other nodes on the Mastodon platform and tried to convince application creators to do the same. But a central fear of decentralization advocates had been realized: decentralized social media’s first “success story” was a home for right-wing extremists.

This fear has been echoed in decentralized communities this week as app store owners and networks have taken another right-wing social network, Parler, off the web after violent content surfaced on the site in the lead-up to and aftermath of the riots at the U.S. Capitol. The takedown has left some developers fearful that the social network may set up home on their decentralized standard.

“Fascists are 100% going to use peer-to-peer technologies, they already are and they’re going to start using it more… If they get pushed off of mainstream infrastructure or people are surveilling them really closely, they’re going to have added motivation,” said Emmi Bevensee, a researcher studying extremist presences on decentralized networks. “Maybe the far-right gets stronger footholds on peer-to-peer before the people who think the far-right is bad do because they were effectively pushed off.”

A central concern is that commoditizing decentralized platforms through efforts like bluesky will provide a more accessible route for extremists kicked off current platforms to maintain an audience and provide casual internet users a less janky path towards radicalization.

“Peer-to-peer technology is generally not that seamless right now. Some of it is; you can buy Bitcoin in Cash App now, which, if anything, is proof that this technology is going to become much more mainstream and adoption is going to become much more seamless,” Bevensee told TechCrunch. “In the current era of this mass exodus from Parler, they’re obviously going to lose a huge amount of audience that isn’t dedicated enough to get on IPFS. Scuttlebutt is a really cool technology but it’s not as seamless as Twitter.”

Extremists adopting technologies that promote privacy and strong encryption is far from a new phenomenon; encrypted chat apps like Signal and Telegram have been at the center of such controversies in recent years. Bevensee notes that the tendency of right-wing extremist networks to adopt decentralized network tech has been “extremely demoralizing” to those early developer communities — though she adds that the same technologies can and do benefit “marginalized people all around the world.”

Though people connected to bluesky’s early moves see a long road ahead for the protocol’s development and adoption, they also see an evolving landscape in the wake of Parler’s and President Trump’s recent deplatforming, one they hope will drive other stakeholders to eventually commit to integrating with the standard.

“Right at this moment I think that there’s going to be a lot of incentive to adopt, and I don’t just mean by end users, I mean by platforms, because Twitter is not the only one having these really thorny moderation problems,” Velez says. “I think people understand that this is a critical moment.”

How law enforcement gets around your smartphone’s encryption

Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.

“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”

Tech and health companies including Microsoft and Salesforce team up on digital COVID-19 vaccination records

A new cross-industry initiative is seeking to establish a standard for digital vaccination records that can be used universally to identify COVID-19 vaccination status for individuals, in a way that is both secure, via encryption, and traceable and verifiable for trustworthiness. The so-called ‘Vaccination Credential Initiative’ includes a range of big-name companies from both the healthcare and tech industries, including Microsoft, Oracle, Salesforce and Epic, as well as the Mayo Clinic, Safe Health, Change Healthcare and the CARIN Alliance, to name a few.

The effort is beginning with existing, recognized standards already in use in digital healthcare programs, like the SMART Health Cards specification, which adheres to HL7 FHIR (Fast Healthcare Interoperability Resources), a standard created for use in digital health records to make them interoperable between providers. The final product the initiative aims to establish is an “encrypted digital copy of their immunization credentials to store in a digital wallet of their choice,” with a backup available as a printed QR code containing W3C-standard verifiable credentials for individuals who don’t own or prefer not to use smartphones.
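
The core idea behind a verifiable credential is simple: serialize a small record, sign it with an issuer key, and pack the signed blob into a QR code so anyone can check it wasn't tampered with. Here is a minimal sketch of that pattern — note that real SMART Health Cards sign an HL7 FHIR bundle as a JWS with the issuer's elliptic-curve key, whereas this sketch substitutes an HMAC to stay self-contained, and all field names and keys are illustrative placeholders:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key; real issuers use asymmetric keys so verifiers
# never hold the secret.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(record):
    """Serialize and sign a record, returning a QR-ready token string."""
    payload = base64.urlsafe_b64encode(json.dumps(record).encode()).decode()
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_credential(token):
    """Return the record if the signature checks out, otherwise None."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with, or not issued by this key
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_credential({"vaccine": "COVID-19", "dose": 2})
assert verify_credential(token) == {"vaccine": "COVID-19", "dose": 2}
assert verify_credential(token + "0") is None  # any tampering breaks the check
```

The same shape — opaque signed payload, offline verification — is what lets a printed QR code stand in for a smartphone wallet.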

Vaccination credentials aren’t a new thing – they’ve existed in some form or another since the 1700s. But their use and history are also mired in controversy and accusations of inequity, since this is human beings we’re dealing with. And already with COVID-19, there are efforts underway to make access to certain geographies dependent upon negative COVID-19 test results (though such results don’t guarantee that an individual doesn’t have COVID-19 or won’t transmit it to others).

A recent initiative by LA County is already providing digital immunization records to individuals via a partnership with Healthvana, facilitated by Apple’s Wallet technology. But Healthvana’s CEO and founder was explicit in telling me that it isn’t about providing proof of immunity to be used in gating an individual’s social or geographic access. Instead, it’s about informing and supporting patients for optimal care outcomes.

It sounds like this initiative is much more about using a COVID-19 immunization record as a literal passport of sorts. It’s right in the name of the initiative, for once (‘Credential’ is pretty explicit). The companies involved also at least seem cognizant of the potential pitfalls of such a program, as MITRE’s chief digital health physician Dr. Brian Anderson said that “we are working to ensure that underserved populations have access to this verification,” and added that “just as COVID-19 does not discriminate based on socio-economic status, we must ensure that convenient access to records crosses the digital divide.”

Other quotes from Oracle, Salesforce and additional member leaders confirm that the effort is focused on fostering a reopening of social and economic activity, including “resuming travel,” “get[ting] back to public life,” and “get[ting] concerts and sporting events going again.” Safe Health also says that it will help facilitate a “privacy-preserving health status verification” solution that is at least in part “blockchain-enabled.”

Given the urgency of solutions that can lead to a safe re-opening, and a way to keep tabs on the massive, global vaccination program that’s already underway, it makes sense that a modern approach would include a digital version of historic vaccination record systems. But such an approach, while it leverages new conveniences and modes made possible by smartphones and the internet, also opens itself up to new potential pitfalls and risks that will no doubt be highly scrutinized, particularly by public interest groups focused on privacy and equitable treatment.

Elon Musk dunks on Facebook and recommends Signal in wake of U.S. Capitol insurrection attempt

Elon Musk, the tech billionaire likely to soon become the world’s richest man and one of the most influential voices in tech entrepreneurship, continued his recent trend of criticizing Facebook with a Twitter post late Wednesday night, following the attempted insurrection by pro-Trump rioters at the U.S. Capitol building. Musk shared a meme suggesting the founding of Facebook ultimately led to the day’s disastrous and shameful events.

Musk, who has himself used his massive reach (he has around 42.5 million followers on Twitter) to spread misinformation to his many followers, specifically around COVID-19 and its severity, also followed that up on Thursday morning with a reply expressing a lack of surprise at WhatsApp’s new Terms of Service and Privacy Policy, which will make sharing data from WhatsApp users back to Facebook mandatory for all on the platform.

The Tesla and SpaceX CEO also recommended that people instead use Signal, an open-source messaging client that encrypts conversations end-to-end by default. Side note: If you do end up following Musk’s advice, you should also enable the app’s “disappearing messages” feature for an added layer of protection on both ends of the conversation.

Musk has a long history of opposing the use of Facebook, including the deletion of not just his own personal page, but also those of both Tesla and SpaceX, in 2018 during the original #deletefacebook campaign following the revelation of the Cambridge Analytica scandal.

WhatsApp users must share their data with Facebook or stop using the app

WhatsApp, the Facebook-owned messenger that claims to have privacy coded into its DNA, is giving its 2 billion plus users an ultimatum: agree to share their personal data with the social network or delete their accounts.

The requirement is being delivered through an in-app alert directing users to agree to sweeping changes in the WhatsApp terms of service. Those who don’t accept the revamped privacy policy by February 8 will no longer be able to use the app.

Share and share alike

Shortly after Facebook acquired WhatsApp for $19 billion in 2014, its developers built state-of-the-art end-to-end encryption into the messaging app. The move was seen as a victory for privacy advocates because it used the Signal Protocol, an open source encryption scheme whose source code has been reviewed and audited by scores of independent security experts.
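What makes encryption "end-to-end" is where the key lives: only on the two devices, never on the relay server. The toy sketch below illustrates just that property — a one-time-pad XOR stands in for the actual Signal Protocol (which uses X3DH key agreement and the Double Ratchet), so this is a teaching illustration of the principle, not something to use for real messages:

```python
import secrets

def xor_cipher(key, data):
    # XOR is its own inverse, so the same function encrypts and decrypts
    return bytes(k ^ b for k, b in zip(key, data))

key = secrets.token_bytes(32)          # shared only between the two devices
message = b"meet at noon"
ciphertext = xor_cipher(key, message)  # all the relay server ever sees

assert ciphertext != message                    # server can't read it
assert xor_cipher(key, ciphertext) == message   # recipient's device can
```

Because the server only ever handles `ciphertext`, a policy change at the operator — the kind WhatsApp users are objecting to — can affect metadata sharing but cannot expose message contents, so long as the keys really do stay on the endpoints.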

Kazakhstan spies on citizens’ HTTPS traffic; browser makers fight back

Google, Mozilla, Apple, and Microsoft said they’re joining forces to stop Kazakhstan’s government from decrypting and reading HTTPS-encrypted traffic sent between its citizens and overseas social media sites.

All four companies’ browsers recently received updates that block a root certificate the government has been requiring some citizens to install. The self-signed certificate caused traffic sent to and from select websites to be encrypted with a key controlled by the government. Under industry standards, HTTPS keys are supposed to be private and under the sole control of the site operator.
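
Mechanically, a browser can distrust a specific root by comparing the SHA-256 fingerprint of a certificate's DER encoding against a built-in blocklist. The sketch below shows that pattern; the certificate bytes and fingerprint here are placeholders, not the actual Kazakh government certificate, and real browsers ship the true fingerprints inside their trust stores:

```python
import hashlib

# Placeholder standing in for the DER bytes of a distrusted root certificate.
BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"hypothetical-government-root-der").hexdigest(),
}

def is_blocked(cert_der):
    """Return True if this certificate's fingerprint is on the blocklist."""
    return hashlib.sha256(cert_der).hexdigest() in BLOCKED_FINGERPRINTS

assert is_blocked(b"hypothetical-government-root-der")  # connection refused
assert not is_blocked(b"some-other-root-der")           # chain validated normally
```

Pinning the hash of the exact certificate, rather than its name or issuer, means the block survives even if the same key is repackaged under a different label.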

A thread on Mozilla’s bug-reporting site first reported the certificate in use on December 6. The Censored Planet website later reported that the certificate worked against dozens of Web services that mostly belonged to Google, Facebook, and Twitter. Censored Planet identified the sites affected as:

2020 was a disaster, but the pandemic put security in the spotlight

Let’s preface this year’s predictions by acknowledging and admitting how hilariously wrong we were when this time last year we said that 2020 “showed promise.”

In fairness (almost) nobody saw a pandemic coming.

The pandemic is, and remains, a global disaster of epic proportions that has forced billions of people into lockdown and left economies in tatters, with companies (including startups) struggling to stay afloat. The mass shift of people working from home brought security challenges with it, like how to protect your workforce when employees are working outside the security perimeter of their offices. But it has also forced us to find solutions to some of the most complex challenges, like pulling off a secure election and securing the supply chain for the vaccines that will bring our lives back to some semblance of normality.

With 2020 wrapping up, many of the security headaches exposed by the pandemic will linger into the new year. This is what to expect.

Working from home has given hackers new avenues for attacks

The sudden lockdowns in March drove millions to work from home. But hackers quickly found new and interesting ways to target big companies by targeting the employees themselves. VPNs were a big target because of outstanding vulnerabilities that many companies didn’t bother to fix. Bugs in enterprise software left corporate networks open to attack. The flood of personal devices logging onto the network — and the influx of malware with it — introduced fresh havoc.

Sophos says that this mass decentralizing of the workforce has turned us all into our own IT departments. We have to patch our own computers and install our own security updates, and there’s no IT department just down the hallway to ask if that’s a phishing email.

Companies are having to adjust to the cybersecurity challenges, since working from home is probably here to stay. Managed service providers, or outsourced IT departments, have a “huge opportunity to benefit from the work-from-home shift,” said Grayson Milbourne, security intelligence director at cybersecurity firm Webroot.

Ransomware has become more targeted and more difficult to escape

File-encrypting malware, or ransomware, is getting craftier and sneakier. Where traditional ransomware would encrypt and hold a victim’s files hostage in exchange for a ransom payout, the newer and more advanced strains first steal a victim’s files, encrypt the network and then threaten to publish the stolen files if the ransom isn’t paid.

This data-stealing ransomware makes escaping an attack far more difficult because a victim can’t just restore their systems from a backup (if there is one). CrowdStrike’s chief technology officer Michael Sentonas calls this new wave of ransomware “double extortion” because victims are forced to respond to the data breach as well.

The healthcare sector is under the closest guard because of the pandemic. Despite promises from some (but not all) ransomware groups that hospitals would not be deliberately targeted during the pandemic, medical practices were far from immune. 2020 saw several high-profile attacks. A ransomware attack at Universal Health Services, one of the largest healthcare providers in the U.S., caused widespread disruption to its systems. Just last month, U.S. Fertility confirmed a ransomware attack on its network.

These high-profile incidents are becoming more common because hackers are targeting their victims very carefully. These hyperfocused attacks require a lot more skill and effort but improve the hackers’ odds of landing a larger ransom — in some cases earning the hackers millions of dollars from a single attack.

“This coming year, these sophisticated cyberattacks will put enormous stress on the availability of services — in everything from rerouted healthcare services impacting patient care, to availability of online and mobile banking and finance platforms,” said Sentonas.

5 questions every IT team should be able to answer

Now more than ever, IT teams play a vital role in keeping their businesses running smoothly and securely. With all of the assets and data that are now broadly distributed, a CEO depends on their IT team to ensure employees remain connected and productive and that sensitive data remains protected.

CEOs often visualize and measure things in terms of dollars and cents, and in the face of continuing uncertainty, IT — along with most other parts of the business — is facing intense scrutiny and tightening budgets. So it is more important than ever for IT teams to be able to demonstrate that they’ve made sound technology investments and have the agility needed to operate successfully in the face of continued uncertainty.

For a CEO to properly understand risk exposure and make the right investments, IT departments have to be able to confidently communicate what types of data are on any given device at any given time.

Here are five questions that IT teams should be ready to answer when their CEO comes calling:

What have we spent our money on?

Or, more specifically, exactly how many assets do we have? And, do we know where they are? While these seem like basic questions, they can be shockingly difficult to answer … much more difficult than people realize. The last several months in the wake of the COVID-19 outbreak have been the proof point.

With the mass exodus of machines leaving the building and disconnecting from the corporate network, many IT leaders found themselves guessing just how many devices had been released into the wild and gone home with employees.

One CIO we spoke to estimated they had “somewhere between 30,000 and 50,000 devices” that went home with employees, meaning there could have been up to 20,000 that were completely unaccounted for. The complexity was further compounded as old devices were pulled out of desk drawers and storage closets to get something into the hands of employees who were not equipped to work remotely. Companies had endpoints connecting to corporate networks and systems that they hadn’t seen for years — meaning those devices were out of date from a security perspective as well.

This level of uncertainty is obviously unsustainable and introduces a tremendous amount of security risk. Every endpoint that goes unaccounted for not only means wasted spend but also increased vulnerability, greater potential for breach or compliance violation, and more. In order to mitigate these risks, there needs to be a permanent connection to every device that can tell you exactly how many assets you have deployed at any given time — whether they are in the building or out in the wild.

Are our devices and data protected?

Device and data security go hand in hand; without the ability to see every device that is deployed across an organization, it becomes next to impossible to know what data is living on those devices. When employees know they are leaving the building and going to be off network, they tend to engage in “data hoarding.”
