Disaster recovery can be an effective way to ease into the cloud

Operating in the cloud is soon going to be a reality for many businesses whether they like it or not. Points of contention with this shift often arise from unfamiliarity and discomfort with cloud operations. However, cloud migrations don’t have to be a full lift and shift.

Instead, leaders unfamiliar with the cloud should start by moving their disaster recovery program to the cloud, which helps them gain familiarity and understanding before a full migration of production workloads.

What is DRaaS?

Disaster recovery as a service (DRaaS) is cloud-based disaster recovery delivered as a service to organizations in a self-service, partially managed or fully managed service model. The agility of DR in the cloud affords businesses a geographically diverse location to failover operations and run as close to normal as possible following a disruptive event. DRaaS emphasizes speed of recovery so that this failover is as seamless as possible. Plus, technology teams can offload some of the more burdensome aspects of maintaining and testing their disaster recovery.

When it comes to disaster recovery testing, allow for extra time to let your IT staff learn the ins and outs of the cloud environment.

DRaaS is a perfect candidate for a first step into the cloud for five main reasons:

  • Using DRaaS helps leaders get accustomed to the ins and outs of cloud before conducting a full production shift.
  • Testing cycles of the DRaaS solution allow IT teams to see firsthand how their applications will operate in a cloud environment, enabling them to identify the applications that will need a full or partial refactor before migrating to the cloud.
  • With DRaaS, technology leaders can demonstrate an early win in the cloud without risking full production.
  • DRaaS success helps gain full buy-in from stakeholders, board members and executives.
  • The replication tools that DRaaS uses are sometimes the same tools used to migrate workloads for production environments — this helps the technology team practice their cloud migration strategy.

Steps to start your DRaaS journey to the cloud

Define your strategy

Do your research to determine if DRaaS is right for you given your long-term organizational goals. You don’t want to start down a path to one cloud environment if that cloud isn’t aligned with your company’s objectives, both for the short and long term. Having cross-functional conversations among business units and with company executives will assist in defining and iterating your strategy.


Platform-as-a-service startup Porter aims to become go-to platform for deploying, managing cloud-based apps

By the time Porter co-founders Trevor Shim and Justin Rhee decided to build a company around DevOps, the pair were well versed in doing remote development on Kubernetes. And, like other users, they were consistently getting burnt by the technology.

Rhee told TechCrunch that, for all of its benefits, the technology was there, but users were having to manage the complexity of hosting solutions as well as incur the costs associated with a big DevOps team.

They decided to build out a solution externally and went through Y Combinator’s Summer 2020 batch, where they found other startup companies trying to do the same.

Today, Porter announced $1.5 million in seed funding from Venrock, Translink Capital, Soma Capital and several angel investors. Its goal is to build a Platform-as-a-Service that any team can use to manage applications in its own cloud, essentially delivering the full flexibility of Kubernetes through a Heroku-like experience.

Why Heroku? It is the hosting platform that developers are used to, not just at small companies but also at later-stage ones. When they want to move to Amazon Web Services, Google Cloud or DigitalOcean, Porter will be that bridge, Shim added.

However, while Heroku is still popular, the pair say companies increasingly view the platform as outdated because it has stood still technologically. Each year, companies move on from the platform due to technical limitations and cost, Rhee said.

A big part of the bet Porter is taking is not charging users for hosting; its pricing is that of a pure SaaS product, he said. The founders aren’t looking to be resellers, so companies can use their own cloud, but Porter will provide the automation, and users can pay with their AWS and GCP credits, which gives them flexibility.

A common pattern is a move onto Kubernetes, but “the zinger we talk about” is that if Heroku were built in 2021, it would have been built on Kubernetes, Shim added.

“So we see ourselves as a successor’s successor,” he said.

To be that bridge, the company will use the new funding to increase its engineering bandwidth with the goal of “becoming the de facto standard for all startups,” Shim said.

Porter’s platform went live in February, and in six months became the sixth-fastest growing open source platform download on GitHub, said Ethan Batraski, partner at Venrock. He met the company through YC and was “super impressed with Rhee’s and Shim’s vision.”

“Heroku has 100,000 developers, but I believe it has stagnated,” Batraski added. “Porter already has 100 startups on its platform. The growth they’ve seen — four or five times — is what you want to see at this stage.”

His firm has long focused on data infrastructure and is seeing the stack get more complex. “At the same time, more developers are wanting to build out an app over a week, and scale it to millions of users, but that takes people resources. With Kubernetes it can turn everyone into an expert developer without them knowing it,” he added.



4 key areas SaaS startups must address to scale infrastructure for the enterprise

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a threat vector for attacks on your customers. While security is important for all customers, the stakes certainly get higher the larger they grow.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure uptime on a per customer basis so you know if you’re meeting your contractual obligations.
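
As a rough illustration of what such a commitment translates to in practice, here is a minimal Python sketch (the SLA threshold and incident durations are made-up examples) that converts an uptime percentage into an allowed-downtime budget and computes uptime for a single customer from the outage windows that affected them:

    from datetime import timedelta

    def downtime_budget(sla_pct, period=timedelta(days=30)):
        """How much downtime an SLA percentage allows per period."""
        return period * (1 - sla_pct / 100)

    def customer_uptime_pct(outages, period=timedelta(days=30)):
        """Uptime for one customer, based on the outage windows that affected them."""
        downtime = sum(outages, timedelta())
        return 100 * (1 - downtime / period)

    # A 99.9% SLA allows about 43 minutes of downtime in a 30-day month.
    print(downtime_budget(99.9))  # 0:43:12
    # Two outages hit this customer: 12 minutes and 20 minutes.
    print(customer_uptime_pct([timedelta(minutes=12), timedelta(minutes=20)]))  # ~99.93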

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in, as it provides insights into how customers are experiencing a particular degradation in your service and allows you to communicate back what you’ve found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.
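
To make those expectations concrete, it helps to treat the backup policy as explicit, reviewable configuration rather than tribal knowledge. The sketch below is purely hypothetical (the field names and values are illustrative and not tied to any particular backup tool), but it captures the knobs large customers tend to ask about: periodicity, location and retention.

    # Hypothetical backup policy expressed as plain data, so it can be reviewed and versioned.
    BACKUP_POLICY = {
        "frequency": "hourly",               # how often snapshots are taken (periodicity)
        "regions": ["eu-west", "us-east"],   # where backup copies are stored (location)
        "retention_days": 35,                # how long backups are kept before deletion
        "encrypted_at_rest": True,
        "restore_target_hours": 4,           # how quickly the whole application should be recoverable
    }

    def is_expired(age_days, policy=BACKUP_POLICY):
        """Backups older than the retention window should be deleted, not kept indefinitely."""
        return age_days > policy["retention_days"]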


Evernote quietly disappeared from an anti-surveillance lobbying group’s website

In 2013, eight tech companies were accused of funneling their users’ data to the U.S. National Security Agency under the so-called PRISM program, according to highly classified government documents leaked by NSA whistleblower Edward Snowden. Six months later, the tech companies formed a coalition under the name Reform Government Surveillance, which as the name would suggest was to lobby lawmakers for reforms to government surveillance laws.

The idea was simple enough: to call on lawmakers to limit surveillance to targeted threats rather than conduct a dragnet collection of Americans’ private data, provide greater oversight and allow companies to be more transparent about the kinds of secret orders for user data that they receive.

Apple, Facebook, Google, LinkedIn, Microsoft, Twitter, Yahoo and AOL (to later become Verizon Media, which owns TechCrunch — for now) were the founding members of Reform Government Surveillance, or RGS, and over the years added Amazon, Dropbox, Evernote, Snap and Zoom as members.

But then sometime in June 2019, Evernote quietly disappeared from the RGS website without warning. What’s even more strange is that nobody noticed for two years, not even Evernote.

“We hadn’t realized our logo had been removed from the Reform Government Surveillance website,” said an Evernote spokesperson, when reached for comment by TechCrunch. “We are still members.”

Evernote joined the coalition in October 2014, a year and a half after PRISM first came to public light, even though the company was never named in the leaked Snowden documents. Still, Evernote was a powerful ally to have onboard, and showed RGS that its support for reforming government surveillance laws was gaining traction outside of the companies named in the leaked NSA files. Evernote cites its membership of RGS in its most recent transparency report and that it supports efforts to “reform practices and laws regulating government surveillance of individuals and access to their information” — which makes its disappearance from the RGS website all the more bizarre.

TechCrunch also asked the other companies in the RGS coalition if they knew why Evernote was removed and all either didn’t respond, wouldn’t comment or had no idea. A spokesperson for one of the RGS companies said they weren’t all that surprised since companies “drop in and out of trade associations.”

The website of the Reform Government Surveillance coalition, which features Amazon, Apple, Dropbox, Facebook, Google, Microsoft, Snap, Twitter, Verizon Media and Zoom, but not Evernote, which is also a member. Image Credits: TechCrunch

While that may be true (companies often sign on to lobbying efforts that ultimately help their businesses), government surveillance is one of those rare thorny issues that got some of the biggest names in Silicon Valley rallying behind the same cause. After all, few tech companies have openly and actively advocated for an increase in government surveillance of their users, since it’s the users themselves who are asking for more privacy baked into the services they use.

In the end, the reason for Evernote’s removal seems remarkably benign.

“Evernote has been a longtime member — but they were less active over the last couple of years, so we removed them from the website,” said an email from Monument Advocacy, a Washington, D.C. lobbying firm that represents RGS. “Your inquiry has helped to prompt new conversations between our organizations and we’re looking forward to working together more in the future.”

Monument has been involved with RGS since near the beginning, after it was hired by the coalition of companies to lobby for changes to surveillance laws in Congress. Monument has spent $2.2 million on lobbying since it began work with RGS in 2014, according to OpenSecrets, specifically on pushing lawmakers for changes to bills under congressional consideration, such as the Patriot Act and the Foreign Intelligence Surveillance Act, or FISA, albeit with mixed success. RGS supported the USA Freedom Act, a bill designed to curtail some of the NSA’s collection under the Patriot Act, but was unsuccessful in its opposition to the reauthorization of Section 702 of FISA, the powers that allow the NSA to collect intelligence on foreigners living outside the United States, which was reauthorized for six years in 2018.

RGS has been largely quiet for the past year — issuing just one statement on the importance of transatlantic data flows, the most recent hot-button issue to concern tech companies, fearing that anything other than the legal status quo could see vast swaths of their users in Europe cut off from their services.

“RGS companies are committed to protecting the privacy of those who use our services, and to safeguard personal data,” said the statement, which included the logos of Amazon, Apple, Dropbox, Facebook, Google, Microsoft, Snap, Twitter, Verizon Media and Zoom, but not Evernote.

In a coalition that’s only as strong as its members, the decision to remove Evernote from the website while it’s still a member hardly sends a resounding message of collective corporate unity — which these days isn’t something Big Tech can find much of.


Swiss Post acquires e2e encrypted cloud services provider, Tresorit

Swiss Post, the former state-owned mail delivery firm that became a private limited company in 2013, has acquired a majority stake in Swiss-Hungarian startup Tresorit, an early European pioneer in end-to-end-encrypted cloud services. The company has since diversified into logistics, finance, transport and more (including dabbling in drone delivery) while retaining its role as Switzerland’s national postal service.

Terms of the acquisition are not being disclosed. But Swiss Post’s income has been falling in recent years, as (snailmail) letter volumes continue to decline. And a 2019 missive warned its business needed to find new sources of income.

Tresorit, meanwhile, last raised back in 2018 — when it announced an €11.5M Series B round, with investors including 3TS Capital Partners and PortfoLion. Other backers of the startup include business angels and serial entrepreneurs like Márton Szőke, Balázs Fejes and Andreas Kemi. According to Crunchbase Tresorit had raised less than $18M over its decade+ run.

It looks like a measure of the rising store being put on data security that a veteran ‘household’ brand like Swiss Post sees strategic value in extending its suite of digital services with the help of a trusted startup in the e2e encryption space.

‘Zero access’ encryption was still pretty niche back when Tresorit got going over a decade ago but it’s essentially become the gold standard for trusted information security, with a variety of players now offering e2e encrypted services — to businesses and consumers.

Announcing the acquisition in a press release today, the pair said they will “collaborate to further develop privacy-friendly and secure digital services that enable people and businesses to easily exchange information while keeping their data secure and private”.

Tresorit will remain an independent company within Swiss Post Group, continuing to serve its global target regions of EU countries, the UK and the US, with the current management (founders), brand and service also slated to remain unchanged, per the announcement.

The 2011-founded startup sells what it brands as “ultra secure” cloud services — such as storage, file syncing and collaboration — targeted at business users (it has 10,000+ customers globally); all zipped up with a ‘zero access’ promise courtesy of a technical architecture that means Tresorit literally can’t decrypt customer data because it does not hold the encryption keys.

It said today that the acquisition will strengthen its business by supporting further expansion in core markets — including Germany, Austria and Switzerland. (The Swiss Post brand should obviously be a help there.)

The pair also said they see potential for Tresorit’s tech to expand Swiss Post’s existing digital product portfolio — which includes services like a “digital letter box” app (ePost) and an encrypted email offering. So it’s not starting from scratch here.

Commenting on the acquisition in a statement, Istvan Lam, co-founder and CEO of Tresorit, said: “From the very beginning, our mission has been to empower everyone to stay in control of their digital valuables. We are proud to have found a partner in Swiss Post who shares our values on security and privacy and makes us even stronger. We are convinced that this collaboration strengthens both companies and opens up new opportunities for us and our customers.”

Asked why the startup decided to sell at this point in its business development — rather than taking another path, such as an IPO and going public — Lam flagged Swiss Post’s ‘trusted’ brand and what he dubbed a “100% fit” on values and mission.

“Tresorit’s latest investment, our biggest funding round, happened in 2018. As usual with venture capital-backed companies, the lifecycle of this investment round is now beginning to come to an end,” he told TechCrunch.

“Going public via an IPO has also been on our roadmap and could have been a realistic scenario within the next 3-4 years. The reason we have decided to partner now with a strategic investor and collaborate with Swiss Post is that their core values and vision on data privacy is a 100% fit with our values and mission of protecting privacy. With the acquisition, we entered a long-term strategic partnership and are convinced that with Tresorit’s end-to-end encryption technology and the trusted brand of Swiss Post we will further develop services that help individuals and businesses exchange information securely and privately.”

“Tresorit has paved the way for true end-to-end encryption across the software industry over the past decade. With the acquisition of Tresorit, we are strategically expanding our competencies in digital data security and digital privacy, allowing us to further develop existing offers,” added Nicole Burth, a member of the Swiss Post Group executive board and head of communication services, in a supporting statement.

Switzerland remains a bit of a hub for pro-privacy startups and services, owing to a historical reputation for strong privacy laws.

However, as Republik reported earlier this year, state surveillance activity in the country has been stepping up — following a 2018 amendment to legislative powers that expanded intercept capabilities to cover digital comms.

Such encroachments are worrying but may arguably make e2e encryption even more important — as it can offer a technical barrier against state-sanctioned privacy intrusions.

At the same time, there is a risk that legislators perceive rising use of robust encryption as a threat to national security interests and their associated surveillance powers — meaning they could seek to counter the trend by passing even more expansive legislation that directly targets, or even outlaws, the use of e2e encryption. (Australia has passed an anti-encryption law, for instance, while the UK cemented its mass surveillance capabilities back in 2016 — passing legislation which includes powers to compel companies to limit the use of encryption.)

At the European Union level, lawmakers have also recently been pushing an agenda of ‘lawful access’ to encrypted data — while simultaneously claiming to support the use of encryption on data security and privacy grounds. Quite how the EU will square that circle in legislative terms remains to be seen.

But there are also some legal tailwinds for European encryption startups like Tresorit: A ruling last summer by Europe’s top court dialled up the complexity of taking users’ personal data out of the region — certainly when people’s information is flowing to third countries like the US where it’s at risk from state agencies’ mass surveillance.

Asked if Tresorit has seen a rise in interest in the wake of the ‘Schrems II’ ruling, Lam told us: “We see the demand for European-based SaaS cloud services growing in the future. Being a European-based company has already been an important competitive advantage for us, especially among our business and enterprise customers.”

EU law in this area contains a quirk whereby the national security powers of Member States are not so clearly factored in vs third countries. And while Switzerland is not an EU Member it remains a closely associated country, being part of the bloc’s single market.

Nevertheless, questions over the sustainability of Switzerland’s EU data adequacy decision persist, given concerns that its growing domestic surveillance regime does not provide individuals with adequate redress remedies — and may therefore be violating their fundamental rights.

If Switzerland loses EU data adequacy it could impact the compliance requirements of digital services based in the country — albeit, again, e2e encryption could offer Swiss companies a technical solution to circumvent such legal uncertainty. So that still looks like good news for companies like Tresorit.

 


Internxt gets $1M to be ‘the Coinbase of decentralized storage’

Valencia-based startup Internxt has been quietly working on an ambitious plan to make decentralized cloud storage massively accessible to anyone with an Internet connection.

It’s just bagged $1M in seed funding led by Angels Capital, a European VC fund owned by Juan Roig (aka Spain’s richest grocer and second wealthiest billionaire), and Miami-based The Venture City. It had previously raised around half a million dollars via a token sale to help fund early development.

The seed funds will be put towards its next phase of growth — its month-to-month growth rate is 30% and it tells us it’s confident it can at least sustain that — including planning a big boost to headcount so it can accelerate product development.

The Spanish startup has spent most of its short life to date developing a decentralized infrastructure that it argues is both inherently more secure and more private than mainstream cloud-based apps (such as those offered by tech giants like Google).

This is because files are not only encrypted in a way that means Internxt itself cannot access your data, but the information is also stored in a highly decentralized way: split into tiny shards which are then distributed across multiple storage locations, with users of the network contributing storage space (and being recompensed for providing that capacity with — you guessed it — crypto).

“It’s a distributed architecture, we’ve got servers all over the world,” explains founder and CEO Fran Villalba Segarra. “We leverage and use the space provided by professionals and individuals. So they connect to our infrastructure and start hosting data shards and we pay them for the data they host — which is also more affordable because we are not going through the traditional route of just renting out a data center and paying them for a fixed amount of space.

“It’s like the Airbnb model or Uber model. We’ve kind of democratized storage.”

Internxt clocked up three years of R&D, beginning in 2017, before launching its first cloud-based apps: Drive (file storage), a year ago — and now Photos (a Google Photos rival).

So far it’s attracting around a million active users without paying any attention to marketing, per Villalba Segarra.

Internxt Mail is the next product in its pipeline — to compete with Gmail and also ProtonMail, a pro-privacy alternative to Google’s freemium webmail client (and for more on why it believes it can offer an edge there read on).

Internxt Send (file transfer) is another product billed as coming soon.

“We’re working on a G-Suite alternative to make sure we’re at the level of Google when it comes to competing with them,” he adds.

The issue Internxt’s architecture is designed to solve is that files which are stored in just one place are vulnerable to being accessed by others. Whether that’s the storage provider itself (who may, like Google, have a privacy-hostile business model based on mining users’ data); or hackers/third parties who manage to break the provider’s security — and can thus grab and/or otherwise interfere with your files.

Security risks when networks are compromised can include ransomware attacks — which have been on an uptick in recent years — whereby attackers that have penetrated a network and gained access to stored files then hold the information to ransom by walling off the rightful owner’s access (typically by applying their own layer of encryption and demanding payment to unlock the data).

The core conviction driving Internxt’s decentralization push is that files sitting whole on a server or hard drive are sitting ducks.

Its answer to that problem is an alternative file storage infrastructure that combines zero access encryption and decentralization — meaning files are sharded, distributed and mirrored across multiple storage locations, making them highly resilient against storage failures or indeed hack attacks and snooping.

The approach ameliorates cloud service provider-based privacy concerns because Internxt itself cannot access user data.

To make money its business model is simple, tiered subscriptions: With (currently) one plan covering all its existing and planned services — based on how much data you need. (It is also freemium, with the first 10GB being free.)

Internxt is by no means the first to see key user value in rethinking core Internet architecture.

Scotland’s MaidSafe has been trying to build an alternative decentralized Internet for well over a decade at this point — only starting alpha testing of its alt network (aka, the Safe Network) back in 2016, after ten years of development. Its long-term mission to reinvent the Internet continues.

Another (slightly less veteran) competitor in the decentralized cloud storage space is Storj, which is targeting enterprise users. There’s also Filecoin and Sia — both also part of the newer wave of blockchain startups that sprung up after Bitcoin sparked entrepreneurial interest in cryptocurrencies and blockchain/decentralization.

How, then, is what Internxt’s doing different to these rival decentralized storage plays — all of which have been at this complex coal face for longer?

“We’re the only European based startup that’s doing this [except for MaidSafe, although it’s UK not EU based],” says Villalba Segarra, arguing that the European Union’s legal regime around data protection and privacy lends it an advantage vs U.S. competitors. “All the others, Storj, plus Sia, Filecoin… they’re all US-based companies as far as I’m aware.”

The other major differentiating factor he highlights is usability — arguing that the aforementioned competitors have been “built by developers for developers”. Whereas he says Internxt’s goal is to be the equivalent of ‘Coinbase for decentralized storage’; aka, it wants to make a very complex technology highly accessible to non-technical Internet users.

“It’s a huge technology but in the blockchain space we see this all the time — where there’s huge potential but it’s very hard to use,” he tells TechCrunch. “That’s essentially what Coinbase is also trying to do — bringing blockchain to users, making it easier to use, easier to invest in cryptocurrency etc. So that’s what we’re trying to do at Internxt as well, bringing blockchain for cloud storage to the people. Making it easy to use with a very easy to use interface and so forth.

“It’s the only service in the distributed cloud space that’s actually usable — that’s kind of our main differentiating factor from Storj and all these other companies.”

“In terms of infrastructure it’s actually pretty similar to that of Sia or Storj,” he goes on — further likening Internxt’s ‘zero access’ encryption to Proton Drive’s architecture (aka, the file storage product from the makers of end-to-end encrypted email service ProtonMail) — which also relies on client side encryption to give users a robust technical guarantee that the service provider can’t snoop on your stuff. (So you don’t have to just trust the company not to violate your privacy.)

But while it’s also touting zero access encryption (it seems to be using off-the-shelf AES-256 encryption; it says it uses “military grade”, client-side, open source encryption that’s been audited by Spain’s S2 Grupo, a major local cybersecurity firm), Internxt takes the further step of decentralizing the encrypted bits of data too. And that means it can tout added security benefits, per Villalba Segarra.

“On top of that what we do is we fragment data and then distribute it around the world. So essentially what servers host are encrypted data shards — which is much more secure because if a hacker was ever to access one of these servers what they would find is encrypted data shards which are essentially useless. Not even we can access that data.

“So that adds a huge layer of security against hackers or third party [access] in terms of data. And then on top of that we build very nice interfaces with which the user is very used to using — pretty much similar to those of Google… and that also makes us very different from Storj and Sia.”
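
As a rough sketch of the general pattern being described (client-side encryption first, then splitting the ciphertext into shards for distribution), and explicitly not Internxt’s actual implementation, the flow looks something like this in Python using the cryptography package; real systems would add redundancy and mirroring, which is omitted here:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_and_shard(plaintext, num_shards=4):
        """Encrypt on the client, then split the ciphertext into shards for distribution."""
        key = AESGCM.generate_key(bit_length=256)   # stays with the user, never with the provider
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        shard_size = -(-len(ciphertext) // num_shards)  # ceiling division
        shards = [ciphertext[i:i + shard_size] for i in range(0, len(ciphertext), shard_size)]
        return key, nonce, shards  # any single shard is useless without the key and the others

    def reassemble_and_decrypt(key, nonce, shards):
        return AESGCM(key).decrypt(nonce, b"".join(shards), None)

    key, nonce, shards = encrypt_and_shard(b"quarterly financials")
    assert reassemble_and_decrypt(key, nonce, shards) == b"quarterly financials"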

Storage space for Internxt users’ files is provided by users who are incentivized to offer up their unused capacity to host data shards with micropayments of crypto for doing so. This means capacity could be coming from an individual user connecting to Internxt with just their laptop — or a datacenter company with large amounts of unused storage capacity. (And Villalba Segarra notes that a number of data center companies, such as OVH, are connected to its network.)

“We don’t have any direct contracts [for storage provision]… Anyone can connect to our network — so datacenters with available storage space, if they want to make some money on that they can connect to our network. We don’t pay them as much as we would pay them if we went to them through the traditional route,” he says, likening this portion of the approach to how Airbnb has both hosts and guests (or Uber needs drivers and riders).

“We are the platform that connects both parties but we don’t host any data ourselves.”

Internxt uses a reputation system to manage storage providers — to ensure network uptime and quality of service — and also applies blockchain ‘proof of work’ challenges to node operators to make sure they’re actually storing the data they claim.

“Because of the decentralized nature of our architecture we really need to make sure that it hits a certain level of reliability,” he says. “So for that we use blockchain technology… When you’re storing data in your own data center it’s easier in terms of making sure it’s reliable but when you’re storing it in a decentralized architecture it brings a lot of benefits — such as more privacy or it’s also more affordable — but the downside is you need to make sure that for example they’re actually storing data.”
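
One common way such storage audits are implemented, sketched below purely as an illustration rather than as Internxt’s actual scheme, is to precompute the answers to a set of random challenges at upload time and later ask a node to recompute one of them. A node that no longer holds the shard cannot produce the right answer, and the auditor never needs to keep the data itself:

    import hashlib, os, random

    def make_challenges(shard, count=10):
        """At upload time, precompute (nonce, expected hash) pairs for later audits."""
        challenges = []
        for _ in range(count):
            nonce = os.urandom(16)
            challenges.append((nonce, hashlib.sha256(nonce + shard).hexdigest()))
        return challenges

    def node_response(shard_held, nonce):
        """What an honest node computes when challenged; without the shard this can't be faked."""
        return hashlib.sha256(nonce + shard_held).hexdigest()

    shard = os.urandom(1024)
    challenges = make_challenges(shard)       # only these pairs are retained by the auditor
    nonce, expected = random.choice(challenges)
    assert node_response(shard, nonce) == expected   # node still stores the shard
    assert node_response(b"", nonce) != expected     # a node that dropped it fails the audit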

Payments to storage capacity providers are also made via blockchain tech — which Villalba Segarra says is the only way to scale and automate so many micropayments to ~10,000 node operators all over the world.

Discussing the issue of energy costs — given that ‘proof of work’ blockchain-based technologies are facing increased scrutiny over the energy consumption involved in carrying out the calculations — he suggests that Internxt’s decentralized architecture can be more energy efficient than traditional data centers because data shards are more likely to be located nearer to the requesting user — shrinking the energy required to retrieve packets vs always having to do so from a few centralized global locations.

“What we’ve seen in terms of energy consumption is that we’re actually much more energy efficient than a traditional cloud storage service. Why? Think about it, we mirror files and we store them all over the world… It’s actually impossible to access a file from Dropbox that is sent out from [a specific location]. Essentially when you access Dropbox or Google Drive and you download a file they’re going to be sending it out from their data center in Texas or wherever. So there’s a huge data transfer energy consumption there — and people don’t think about it,” he argues.

“Data center energy consumption is already 2%* of the whole world’s energy consumption if I’m not mistaken. So being able to use latency and being able to send your files from [somewhere near the user] — which is also going to be faster, which is all factored into our reputation system — so our algorithms are going to be sending you the files that are closer to you so that we save a lot of energy from that. So if you multiply that by millions of users and millions of terabytes that actually saves a lot of energy consumption and also costs for us.”

What about latency from the user’s point of view? Is there a noticeable lag when they try to upload or retrieve and access files stored on Internxt vs — for example — Google Drive?

Villalba Segarra says being able to store file fragments closer to the user also helps compensate for any lag. But he also confirms there is a bit of a speed difference vs mainstream cloud storage services.

“In terms of upload and download speed we’re pretty close to Google Drive and Dropbox,” he suggests. “Again these companies have been around for over ten years and their services are very well optimized and they’ve got a traditional cloud architecture which is also relatively simpler, easier to build and they’ve got thousands of [employees] so their services are obviously much better than our service in terms of speed and all that. But we’re getting really close to them and we’re working really fast towards bringing our speed [to that level] and also as many features as possible to our architecture and to our services.”

“Essentially how we see it is we’re at the level of Proton Drive or Tresorit in terms of usability,” he adds on the latency point. “And we’re getting really close to Google Drive. But an average user shouldn’t really see much of a difference and, as I said, we’re literally working as hard as possible to make our services as useable as those of Google. But we’re ages ahead of Storj, Sia, MaidSafe and so forth — that’s for sure.”

Internxt is doing all this complex networking with a team of just 20 people currently. But with the new seed funding tucked in its back pocket the plan now is to ramp up hiring over the next few months — so that it can accelerate product development, sustain its growth and keep pushing its competitive edge.

“By the time we do a Series A we should be around 100 people at Internxt,” says Villalba Segarra. “We are already preparing our Series A. We just closed our seed round but because of how fast we’re growing we are already being reached out to by a few other lead VC funds from the US and London.

“It will be a pretty big Series A. Potentially the biggest in Spain… We plan on growing until the Series A at at least a 30% month-to-month rate which is what we’ve been growing up until now.”

He also tells TechCrunch that the intention for the Series A is to do the funding at a $50M valuation.

“We were planning on doing it a year from now because we literally just closed our [seed] round but because of how many VCs are reaching out to us we may actually do it by the end of this year,” he says, adding: “But timeframe isn’t an issue for us. What matters most is being able to reach that minimum valuation.”

*Per the IEA, data centres and data transmission networks each accounted for around 1% of global electricity use in 2019


Google will let enterprises store their Google Workspace encryption keys

As ubiquitous as Google Docs has become in the last year alone, a major criticism often overlooked by the countless workplaces who use it is that it isn’t end-to-end encrypted, allowing Google — or any requesting government agency — access to a company’s files. But Google is finally addressing that key complaint with a round of updates that will let customers shield their data by storing their own encryption keys.

Google Workspace, the company’s enterprise offering that includes Google Docs, Slides and Sheets, is adding client-side encryption so that a company’s data will be indecipherable to Google.

Companies using Google Workspace can store their encryption keys with one of four partners for now: Flowcrypt, Futurex, Thales, or Virtru, which are compatible with Google’s specifications. The move is largely aimed at regulated industries — like finance, healthcare, and defense — where intellectual property and sensitive data are subject to intense privacy and compliance rules.

(Image: Google / supplied)

The real magic lands later in the year when Google will publish details of an API that will let enterprise customers build their own in-house key service, allowing workplaces to retain direct control of their encryption keys. That means if the government wants a company’s data, it has to knock on that company’s front door — and not sneak around the back by serving the key holder with a legal demand.

Google published technical details of how the client-side encryption feature works, and will roll out as a beta in the coming weeks.
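
The general pattern behind this kind of client-side encryption is envelope encryption with an externally held key. The sketch below is a generic illustration of that pattern, not Google’s published design or API: each document is encrypted locally with a fresh data key, and that data key is in turn wrapped by a key service the customer controls, so the storage provider only ever sees ciphertext.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class CustomerKeyService:
        """Stands in for an external, customer-controlled key service (hypothetical)."""
        def __init__(self):
            self._master = AESGCM.generate_key(bit_length=256)  # never leaves the customer
        def wrap(self, data_key):
            nonce = os.urandom(12)
            return nonce, AESGCM(self._master).encrypt(nonce, data_key, None)
        def unwrap(self, nonce, wrapped):
            return AESGCM(self._master).decrypt(nonce, wrapped, None)

    def encrypt_document(doc, kms):
        data_key = AESGCM.generate_key(bit_length=256)   # fresh key per document
        nonce = os.urandom(12)
        ciphertext = AESGCM(data_key).encrypt(nonce, doc, None)
        return ciphertext, nonce, kms.wrap(data_key)     # provider stores these, can't read the doc

    kms = CustomerKeyService()
    ciphertext, nonce, wrapped = encrypt_document(b"board minutes", kms)
    assert AESGCM(kms.unwrap(*wrapped)).decrypt(nonce, ciphertext, None) == b"board minutes"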

Tech companies giving their corporate customers control of their own encryption keys has been a growing trend in recent years. Slack and cloud vendor Egnyte got ahead of that trend by allowing their enterprise users to store their own encryption keys, effectively cutting themselves out of the surveillance loop. But Google has dragged its feet on encryption for so long that startups are working to build alternatives that bake in encryption from the ground up.

Google said it’s also pushing out new trust rules for how files are shared in Google Drive to give administrators more granularity on how different levels of sensitive files can be shared, and new data classification labels to mark documents with a level of sensitivity such as “secret” or “internal”.

The company said it’s improving its malware protection efforts by now blocking phishing and malware shared from within organizations. The aim is to help cut down on employees mistakenly sharing malicious documents.


Gatheround raises millions from Homebrew, Bloomberg and Stripe’s COO to help remote workers connect

Remote work is no longer a new topic, as much of the world has now been doing it for a year or more because of the COVID-19 pandemic.

Companies — big and small — have had to react in myriad ways. Many of the initial challenges have focused on workflow, productivity and the like. But one aspect of the whole remote work shift that is not getting as much attention is the culture angle.

A 100% remote startup that was tackling the issue way before COVID-19 was even around is now seeing a big surge in demand for its offering that aims to help companies address the “people” challenge of remote work. It started its life with the name Icebreaker to reflect the aim of “breaking the ice” with people with whom you work.

“We designed the initial version of our product as a way to connect people who’d never met, kind of virtual speed dating,” says co-founder and CEO Perry Rosenstein. “But we realized that people were using it for far more than that.” 

So over time, its offering has evolved to include a bigger goal of helping people get together beyond an initial encounter — hence its new name: Gatheround.

“For remote companies, a big challenge or problem that is now bordering on a crisis is how to build connection, trust and empathy between people that aren’t sharing a physical space,” says co-founder and COO Lisa Conn. “There’s no five-minute conversations after meetings, no shared meals, no cafeterias — this is where connection organically builds.”

Organizations should be concerned, Gatheround maintains, that as work moves more remote, it will become more transactional and people will become more isolated. They can’t ignore that humans are largely social creatures, Conn said.

The startup aims to bring people together online through real-time events such as a range of chats, videos and one-on-one and group conversations. The startup also provides templates to facilitate cultural rituals and learning & development (L&D) activities, such as all-hands meetings and workshops on diversity, equity and inclusion. 

Gatheround’s video conversations aim to be a refreshing complement to Slack conversations, which despite serving the function of communication, still don’t bring users face-to-face.

Image Credits: Gatheround

Since its inception, Gatheround has quietly built up an impressive customer base, including 28 Fortune 500s, 11 of the 15 biggest U.S. tech companies, 26 of the top 30 universities and more than 700 educational institutions. Specifically, those users include Asana, Coinbase, Fiverr, Westfield and DigitalOcean. Universities, academic centers and nonprofits, including Georgetown’s Institute of Politics and Public Service and Chan Zuckerberg Initiative, are also customers. To date, Gatheround has had about 260,000 users hold 570,000 conversations on its SaaS-based, video platform.

All its growth so far has been organic, mostly referrals and word of mouth. Now, armed with $3.5 million in seed funding that builds upon a previous $500,000 raised, Gatheround is ready to aggressively go to market and build upon the momentum it’s seeing.

Venture firms Homebrew and Bloomberg Beta co-led the company’s latest raise, which included participation from angel investors such as Stripe COO Claire Hughes Johnson, Meetup co-founder Scott Heiferman, Li Jin and Lenny Rachitsky. 

Co-founders Rosenstein, Conn and Alexander McCormmach describe themselves as “experienced community builders,” having previously worked on President Obama’s campaigns as well as at companies like Facebook, Change.org and Hustle. 

The trio emphasize that Gatheround is also very different from Zoom and video conferencing apps in that its platform gives people prompts and organized ways to get to know and learn about each other as well as the flexibility to customize events.

“We’re fundamentally a connection platform, here to help organizations connect their people via real-time events that are not just really fun, but meaningful,” Conn said.

Homebrew Partner Hunter Walk says his firm was attracted to the company’s founder-market fit.

“They’re a really interesting combination of founders with all this experience community building on the political activism side, combined with really great product, design and operational skills,” he told TechCrunch. “It was kind of unique that they didn’t come out of an enterprise product background or pure social background.”

He was also drawn to the personalized nature of Gatheround’s platform, considering that it has become clear over the past year that the software powering the future of work “needs emotional intelligence.”

“Many companies in 2020 have focused on making remote work more productive. But what people desire more than ever is a way to deeply and meaningfully connect with their colleagues,” Walk said. “Gatheround does that better than any platform out there. I’ve never seen people come together virtually like they do on Gatheround, asking questions, sharing stories and learning as a group.” 

James Cham, partner at Bloomberg Beta, agrees with Walk that the founding team’s knowledge of behavioral psychology, group dynamics and community building gives them an edge.

“More than anything, though, they care about helping the world unite and feel connected, and have spent their entire careers building organizations to make that happen,” he said in a written statement. “So it was a no-brainer to back Gatheround, and I can’t wait to see the impact they have on society.”

The 14-person team will likely expand with the new capital, which will also go toward adding more functionality and detail to the Gatheround product.

“Even before the pandemic, remote work was accelerating faster than other forms of work,” Conn said. “Now that’s intensified even more.”

Gatheround is not the only company attempting to tackle this space. Ireland-based Workvivo last year raised $16 million and earlier this year, Microsoft launched Viva, its new “employee experience platform.”


Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. The company reports that it has now raised $219 million in equity so far, along with additional debt financing; it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data like financial and healthcare be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte for the first 50 terabytes, or about $23.00 a terabyte, considerably more than Wasabi’s offering.
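
At those list prices the gap is easy to put in concrete terms. The back-of-the-envelope comparison below uses only the storage rates quoted above, treats the first-tier S3 rate as flat, and ignores S3 request and egress fees:

    WASABI_PER_TB = 5.99   # $ per TB per month, as quoted above
    S3_PER_GB = 0.023      # $ per GB per month, first-tier S3 Standard rate

    tb_stored = 100
    wasabi_monthly = tb_stored * WASABI_PER_TB        # $599.00
    s3_monthly = tb_stored * 1000 * S3_PER_GB         # $2,300.00, storage only
    print(f"Wasabi: ${wasabi_monthly:,.2f} / month, S3: ${s3_monthly:,.2f} / month")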

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers without the same demands of venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market and the hyperscalers are out there not paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes it’s possible eventually.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.


DigitalOcean says customer billing data ‘exposed’ by a security flaw

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 


Solving the security challenges of public cloud

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many in the ONUG community see on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.

Why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the type of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF will prove beneficial for cloud providers and IT consumers alike. Security platforms often require lengthy integration work to pull in data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications. These integrations can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and context is improved across the entire ecosystem, reducing spend and freeing SecOps and DevSecOps teams to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.
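
To make the idea concrete, here is a minimal, hypothetical sketch of what that kind of normalization could look like in practice. The field names, provider formats and severity mappings below are illustrative assumptions, not the actual CSNF schema.

```python
# Hypothetical sketch: translate provider-specific security alerts into one
# common shape, in the spirit of CSNF. Field names are illustrative only.
from datetime import datetime, timezone


def normalize_aws_finding(raw: dict) -> dict:
    # Assumed input format; real AWS findings differ in structure and detail.
    return {
        "provider": "aws",
        "severity": raw.get("severity", 0),
        "resource": raw.get("resource", {}).get("instanceId", "unknown"),
        "event_type": raw.get("type", "unknown"),
        "timestamp": raw.get("updatedAt", datetime.now(timezone.utc).isoformat()),
    }


def normalize_azure_alert(raw: dict) -> dict:
    # Assumed input format; severity strings are mapped to a numeric scale.
    severity_map = {"Low": 2, "Medium": 5, "High": 8}
    return {
        "provider": "azure",
        "severity": severity_map.get(raw.get("severity"), 0),
        "resource": raw.get("resourceId", "unknown"),
        "event_type": raw.get("alertType", "unknown"),
        "timestamp": raw.get("timeGenerated", datetime.now(timezone.utc).isoformat()),
    }
```

Once every provider's alerts land in one schema, a SIEM, dashboard or ticketing system only has to be integrated once rather than once per cloud, which is where the cost and time savings come from.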

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, such as IT teams, giving them improved visibility into and greater control over the security posture of their data. The stronger protection that comes with improved cloud governance benefits everyone whose data those teams hold.
  • Cloud providers: By freeing up added security resources, CSNF can remove a barrier that currently keeps enterprise consumers from adopting additional services from a given cloud provider. Improved end-user cloud governance also encourages businesses to consume more cloud services, increasing provider revenue while giving customers confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. With a standardized framework in place, those additional resources would no longer be necessary; instead of spending money and labor on that narrow need, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements and continue to provide guidance as a prototype is established. The cloud providers are now building the key component of CSNF, its Decorator, which provides an open-source, multicloud security reporting translation service.

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #column, #computer-security, #cybersecurity, #opinion, #security, #tc

Grocery startup Mercato spilled years of data, but didn’t tell its customers

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that the exposure happened in January after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened since storage buckets on Amazon’s cloud are private by default, or when the company learned of the exposure.
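
Because buckets start out locked down, exposures like this usually come down to a configuration change somewhere along the way, which is why teams routinely audit their account's public-access settings. Below is a minimal sketch of that kind of check using boto3; it assumes AWS credentials with read-only S3 permissions are already configured, and it is not a description of Mercato's setup.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Flag any bucket that does not have every public-access block enabled.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        # No public-access-block configuration at all is itself worth flagging.
        fully_blocked = False
    print(f"{name}: public access fully blocked = {fully_blocked}")
```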

Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published where they are required by law, such as in California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.



#amazon, #boston, #california, #chicago, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computer-security, #computing, #data-breach, #data-security, #ecommerce, #food, #instacart, #los-angeles, #mercato, #new-york, #security, #technology, #united-states, #velvet-sea-ventures

Risk startup LogicGate confirms data breach

Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.

An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for its flagship platform, Risk Cloud. Risk Cloud helps companies identify and manage their risk and compliance with data protection and security standards; LogicGate says it can also help find security vulnerabilities before they are exploited by malicious hackers.

The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.

“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.
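
Identifying “decrypt events” like these typically means combing through audit logs. On AWS, for example, KMS Decrypt calls show up in CloudTrail, and a team could search for them with a few lines of boto3. The sketch below is purely illustrative of that general approach; it assumes CloudTrail is enabled and read-only credentials are configured, and it is not a description of LogicGate's actual investigation.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

# Look for KMS Decrypt calls over the last 30 days of CloudTrail history.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "Decrypt"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=30),
    EndTime=datetime.now(timezone.utc),
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```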

LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.

But LogicGate has not made any public statement about the breach. It’s also not clear if the company contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.

We sent a list of questions, including how many customers were affected and if the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.

Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.

Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.

In December, LogicGate secured $8.75 million in fresh funding, totaling more than $40 million since it launched in 2015.



#amazon, #amazon-web-services, #blue-cross-blue-shield, #capco, #cloud, #cloud-computing, #cloud-storage, #computer-security, #computing, #data-breach, #data-security, #europe, #health-insurance, #securedrop, #security, #security-breaches, #sofi, #united-states

Aqua Security raises $135M at a $1B valuation for its cloud native security service

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.

Image Credits: Aqua Security

What set Aqua apart, Davidoff argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by CheckPoint and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for GitLab’s Harbor Registry and the CNCF’s Artifact Hub, for example.
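
For readers who haven't tried it, Trivy is run against a container image and can emit machine-readable output. A minimal sketch of wrapping it from Python might look like the following; it assumes the trivy binary is installed and on PATH, and the flags and JSON layout shown match Trivy's documented CLI at the time of writing, so treat them as assumptions if your version differs.

```python
import json
import subprocess

# Scan an image and parse the JSON report (assumed layout: top-level "Results").
result = subprocess.run(
    ["trivy", "image", "--format", "json", "alpine:3.12"],
    capture_output=True,
    text=True,
    check=True,
)
report = json.loads(result.stdout)

for target in report.get("Results", []):
    vulns = target.get("Vulnerabilities") or []
    print(f"{target.get('Target')}: {len(vulns)} known vulnerabilities")
```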

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 

#acrew-capital, #aqua, #aqua-security, #boston, #checkpoint, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #docker, #enterprise, #greenspring-associates, #insight-partners, #ion-crossover-partners, #kubernetes, #lightspeed-venture-partners, #palo-alto, #recent-funding, #security, #serverless-computing, #software, #startups, #tc, #tel-aviv, #tlv-partners

Project management service ZenHub raises $4.7M

ZenHub, the GitHub-centric project management service for development teams, today announced that it has raised a $4.7 million seed funding round from Canada’s BDC Capital and Ripple Ventures. This marks the first fundraise for the Vancouver, Canada-based startup after the team bootstrapped the service, which first launched back in 2014. Additional angel investors in this round include Adam Gross (former CEO of Heroku), Jiaona Zhang (VP Product at Webflow) and Oji Udezue (VP Product at Calendly).

In addition to announcing this funding round, the team today launched its newest automation feature, which makes it easier for teams to plan their development sprints. Sprint planning is core to the Agile development process but often takes a lot of time and energy that teams are better off spending on actual development.

“This is a really exciting kind of pivot point for us as a business and gives us a lot of ammunition, I think, to really go after our vision and mission a little bit more aggressively than we have even in the past,” ZenHub co-founder and CEO Aaron Upright told me. The team, he explained, used the beginning of the pandemic to spend a lot of time with customers to better understand how they were reacting to what was happening. In the process, customers repeatedly noted that development resources were getting increasingly expensive and that teams were being stretched ever thinner and put under a lot of pressure.

ZenHub’s answer to this was to look into how it could automate more of the processes that constitute the most complex parts of Agile. Earlier this year, the company launched its first efforts in this area with new tools for improving developer handoffs in GitHub. Now, with the help of this new funding, it is putting the next pieces in place by helping teams automate their sprint planning.

Image Credits: ZenHub

“We thought about automation as an answer to [the problems development teams were facing] and that we could take an approach to automation and to help guide teams through some of the most complex and time-consuming parts of the Agile process,” Upright said. “We raised money so that we can really accelerate toward that vision. As a self-funded company, we could have gone down that path, albeit a little bit slower. But the opportunity that we saw in the market — really brought about by the pandemic, and teams working more remotely and this pressure to produce — we wanted to provide a solution much, much faster.”

The sprint planning feature itself is actually pretty straightforward and allows project managers to allocate a certain number of story points (a core Agile metric to estimate the complexity of a given action item) to each sprint. ZenHub’s tool can then use that to automatically generate a list of the most highly prioritized items for the next sprint. Optionally, teams can also decide to roll over items that they didn’t finish during a given sprint into the next one.
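
In effect, this is a capacity-constrained prioritization problem. As a rough illustration of the idea (not ZenHub's actual implementation), a greedy pass over a prioritized backlog might look like the sketch below, where the item names, priorities and point values are all made up.

```python
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    priority: int  # lower number = higher priority
    points: int    # story-point estimate


def plan_sprint(backlog: list[Item], capacity: int) -> tuple[list[Item], list[Item]]:
    """Fill the sprint with the highest-priority items that fit the point budget."""
    planned, rollover, used = [], [], 0
    for item in sorted(backlog, key=lambda i: i.priority):
        if used + item.points <= capacity:
            planned.append(item)
            used += item.points
        else:
            rollover.append(item)  # carried into a future sprint
    return planned, rollover


backlog = [Item("Fix login bug", 1, 3), Item("New dashboard", 2, 8), Item("Refactor API", 3, 5)]
sprint, leftover = plan_sprint(backlog, capacity=10)
print([i.title for i in sprint], [i.title for i in leftover])
```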

Image Credits: ZenHub

With that, ZenHub Sprints can automate a lot of the standard sprint meetings and lets teams focus on thinking about the overall process. Of course, teams can always overrule the automated systems.

“There’s nothing more that developers hate than sitting around the table for eight hours, planning sprints, when really they all just want to be working on stuff,” Upright said.

With this new feature, sprints become a core feature of the ZenHub experience. Typically, project managers worked around this by assigning milestones in GitHub, but having a dedicated tool and these new automation features will make this quite a bit easier.

ZenHub will also soon automate parts of the software estimation process, with a new tool that will help teams more easily allocate story points to routine action items so that their discussions can focus on the more contentious ones.

#agile-software-development, #canada, #ceo, #cloud-infrastructure, #cloud-storage, #computing, #energy, #github, #heroku, #salesforce-com, #serverless-computing, #tc, #technology, #vancouver, #webflow

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves reinventing the wheel to make applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools they need to build event-driven microservices. Among other things, Dapr provides building blocks for service-to-service communication, state management, pub/sub and secrets management.
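
As a concrete example, applications reach these building blocks through a local Dapr sidecar over HTTP or gRPC. The minimal sketch below saves and reads state through the sidecar's HTTP API; it assumes a Dapr sidecar is running on the default port 3500 with a state store component named "statestore", the name used in Dapr's getting-started samples.

```python
import requests

DAPR_URL = "http://localhost:3500/v1.0"

# Save state: the app talks only to its local sidecar, which forwards the call
# to whatever store (Redis, Cosmos DB, etc.) is configured as a Dapr component.
requests.post(
    f"{DAPR_URL}/state/statestore",
    json=[{"key": "order-42", "value": {"status": "shipped"}}],
).raise_for_status()

# Read the value back by key.
response = requests.get(f"{DAPR_URL}/state/statestore/order-42")
print(response.json())  # {'status': 'shipped'}
```

Because the application only ever talks to the sidecar, swapping one backing store for another becomes a configuration change rather than a code change, which is the portability point Russinovich describes.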

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

#alibaba, #alibaba-cloud, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #computing, #developer, #enterprise, #google, #hashicorp, #mark-russinovich, #microservices, #microsoft, #microsoft-azure, #new-relic, #serverless-computing, #tc

An argument against cloud-based applications

In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that has to be meticulously explained with an impertinent scoff at our own age. We live within our smartphones, within our apps.

While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.

This line we straddle has been drawn with recklessness and calculation by big tech companies over the years as we’ve come to terms with what app manufacturers, large technology companies, and app stores demand of us.

Our private data into the cloud

According to Symantec, 89% of our Android apps and 39% of our iOS apps require access to private information. This risky use sends our data to cloud servers, to both amplify the performance of the application (think about the data needed for fitness apps) and store data for advertising demographics.

While large data companies would argue that data is not held for long, or not used in a nefarious manner, when we use the apps on our phones, we create an undeniable data trail. Companies generally keep data on the move, and servers around the world are constantly keeping data flowing, further away from its source.

Once we accept the terms and conditions we rarely read, our private data is no longer such. It is in the cloud, a term which has eluded concrete understanding throughout the years.

A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while argued against ad nauseam over the years, is generally considered to be a secure and cost-effective option for many businesses.

Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.

Cloudy with a chance of confusion

To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smartphone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.

Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.

The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.

App developers have to find a method of service delivery that does not require storage of personal data. There are two sides to this. The first is creating algorithms that can function on a local basis, rather than centralized and mixed with other data sets. The second is a shift in the general attitude of the industry, one in which free services are provided for the cost of your personal data (which ultimately is used to foster marketing opportunities).

Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.

Clearing the clouds of future privacy

What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, our camera and so on. Everything within our phone is connected, which is why Facebook seems to know everything about us, down to what’s in our bank account.

This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.

The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.

The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.

#big-data, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #column, #opinion, #privacy, #security

Vantage makes managing AWS easier

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS and Fargate, and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and before that, he worked on Crunchbase, too). Yet while DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that the underlying services and hardware simply weren’t as robust as those of the hyperscalers. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

Image Credits: Vantage

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers, AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.
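
That profiling pass only needs read-level API calls. As a rough sketch of the kind of inventory such a tool might run (assuming read-only AWS credentials; this is not Vantage's actual code), something like the following walks every region and counts EC2 instances, then lists S3 buckets:

```python
import boto3

session = boto3.Session()

# Enumerate the regions visible to this account, then count instances per region.
ec2 = session.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = session.client("ec2", region_name=region)
    reservations = client.describe_instances()["Reservations"]
    count = sum(len(r["Instances"]) for r in reservations)
    if count:
        print(f"{region}: {count} EC2 instances")

# S3 bucket names are global, so a single call covers the whole account.
print([b["Name"] for b in session.client("s3").list_buckets()["Buckets"]])
```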

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which allows you to see which resources you are using. What is interesting here is that this is quite a flexible system and allows you to build custom views to see which resources you are using for a given application across regions, for example. Those may include Lambda, storage buckets, your subnet, code pipeline and more.

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.

Schaechter and his co-founder bootstrapped the company and he noted that before he wants to raise any money for the service, he wants to see people paying for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Image Credits: Vantage 

#amazon-web-services, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #developer, #digitalocean, #gcp, #tc, #web-hosting, #world-wide-web

AWS launches Glue Elastic Views to make it easier to move data from one purpose-built data store to another

AWS has launched Glue Elastic Views, a new tool that lets developers move data from one store to another.

At the AWS re:Invent keynote, CEO Andy Jassy announced Glue Elastic Views, a service that lets programmers move data across multiple data stores more seamlessly.

The new service can take data from disparate silos and bring it together. The ETL service lets programmers write a little bit of SQL code to create a materialized view that can move data from one source data store to another.

For instance, Jassy said, a programmer can move data from DynamoDB to Elasticsearch by setting up a materialized view to copy that data, with the service managing the dependencies. That means if data changes in the source data store, it will automatically be updated in the other data stores where the data has been relocated.

“When you have the ability to move data… and move that data easily from data store to data store… that’s incredibly powerful,” said Jassy.

#amazon-web-services, #andy-jassy, #cloud-infrastructure, #cloud-storage, #computing, #data-lake, #data-management, #elasticsearch, #programmer, #sql, #tc, #web-hosting

Thousands of U.S. lab results and medical records spilled online after a security lapse

NTreatment, a technology company that manages electronic health and patient records for doctors and psychiatrists, left thousands of sensitive health records exposed to the internet because one of its cloud servers wasn’t protected with a password.

The cloud storage server was hosted on Microsoft Azure and contained 109,000 files, a large portion of which contained lab test results from third-party providers like LabCorp, medical records, doctor’s notes, insurance claims, and other sensitive health data for patients across the U.S., a class of data considered protected health information under the Health Insurance Portability and Accountability Act (HIPAA). Running afoul of HIPAA can result in steep fines.

None of the data was encrypted, and nearly all of the sensitive files were viewable in the browser. Some of the medical records belonged to children.

TechCrunch found the exposed data as part of a separate investigation. It wasn’t initially clear who owned the storage server, but many of the electronic health records that TechCrunch reviewed in an effort to trace the source of the data spillage were tied to doctors, psychiatrists and other healthcare workers at hospitals or networks known to use nTreatment. The storage server also contained some internal company documents, including a non-disclosure agreement with a major prescriptions provider.

The data was secured on Monday after TechCrunch contacted the company. In an email, NTreatment co-founder Gregory Katz said the server was “used as a general purpose storage,” but did not say how long the server was exposed.

Katz said the company would notify affected providers and regulators of the incident.

It’s the latest in a series of incidents involving the exposure of medical data. Earlier this year we found a bug in LabCorp’s website that exposed thousands of lab results, and reported on the vast amounts of medical imaging floating around the web.

#articles, #cloud-storage, #co-founder, #data-breach, #electronic-health-records, #health, #medical-imaging, #security, #technology, #united-states

Google Photos is the latest “Unlimited” plan to impose hard limits

Google is no longer offering unlimited photo storage—except to Pixel users, that is. (Image credit: Google)

Today, Google Photos VP Shimrit Ben-Yair announced the end of Google Photos’ unlimited photo storage policy. The plan already came with significant caveats—unlimited storage was for the tier Google deems “High Quality,” which includes compressed media only, capped at 16 megapixels for photos and 1080p for videos. Uncompressed or higher-resolution photos and videos saved in original quality count against the 15GiB cap for the user’s Google Drive account.

As of June 2021, High Quality photos and videos will also begin counting against a user’s Google Drive storage capacity. That said, if you’ve already got a terabyte of High Quality photos and videos stored in Photos, don’t panic—the policy change affects new photos and videos created or stored after June 2021 only. Media that’s already saved to Google Photos is grandfathered in and will not be affected by the new policy change.

Original Quality—again, meaning either uncompressed or resolution over 16mp still / 1080p video—is also unaffected, since those files were already subject to the user’s Google Drive quota. Any additional capacity purchased through Google One membership also applies to media storage—if you lease 100GiB of capacity at Google One’s $2/month or $20/year plans, that capacity applies to your Google Photos data as well.

#cloud-storage, #gmail, #google, #google-one, #google-photos, #google-drive, #tech, #unlimited

Come June 1, 2021, all of your new photos will count against your free Google storage

Come June 1, 2021, Google will change its storage policies for free accounts — and not for the better. Basically, if you’re on a free account and a semi-regular Google Photos user, get ready to pay up next year and subscribe to Google One.

Currently, every free Google Account comes with 15 GB of online storage for all your Gmail, Drive and Photos needs. Email and the files you store in Drive already count against those 15 GB, but come June 1, all Docs, Sheets, Slides, Drawings, Forms and Jamboard files will count against the free storage as well. Those tend to be small files, but what’s maybe most important here is that virtually all of your Photos uploads will now count against those 15 GB as well.

That’s a big deal because today, Google Photos lets you store unlimited images (and unlimited video, if it’s in HD) for free as long as they are under 16MP in resolution or you opt to have Google degrade the quality. Come June of 2021, any new photo or video uploaded in high quality, which currently wouldn’t count against your allocation, will count against those free 15 GB.

Image Credits: Google

As people take more photos every year, that free allotment won’t last very long. Google argues that 80 percent of its users will have at least three years to reach those 15 GB. Given that you’re reading TechCrunch, though, chances are you’re in those 20 percent that will run out of space much faster (or you’re already on a Google One plan).
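
For a back-of-the-envelope sense of how fast 15 GB fills up once photos count against it, a quick calculation helps; the photos-per-day figure and the average compressed file size below are assumptions for illustration, not Google's numbers.

```python
# Rough estimate of how long the free 15 GB lasts for new photo uploads.
free_quota_gb = 15
avg_photo_mb = 2.0    # assumed size of a compressed "High Quality" photo
photos_per_day = 10   # assumed usage; heavy users will be well above this

daily_usage_gb = photos_per_day * avg_photo_mb / 1024
years_until_full = free_quota_gb / daily_usage_gb / 365
print(f"~{years_until_full:.1f} years until the free quota is exhausted")
# With these assumptions the quota lasts roughly two years, broadly consistent
# with Google's claim that most users have at least three years.
```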

Some good news: to make this transition a bit easier, photos and videos uploaded in high quality before June 1, 2021 will not count toward the 15 GB of free storage. As usual, original quality images will continue to count against it, though. And if you own a Pixel device, even after June 1, you can still upload an unlimited number of high-quality images from those.

To let you see how long your current storage will last, Google will now show you personalized estimates, too, and come next June, the company will release a new free tool for Photos that lets you more easily manage your storage. It’ll also show you dark and blurry photos you may want to delete — but then, for a long time Google’s promise was you didn’t have to worry about storage (remember Google’s old Gmail motto? ‘Archive, don’t delete!’)

In addition to these storage updates, there are a few additional changes worth knowing about. If your account is inactive in Gmail, Drive or Photos for more than two years, Google ‘may’ delete the content in that product. So if you use Gmail but don’t use Photos for two years because you use another service, Google may delete any old photos you had stored there. And if you stay over your storage limit for two years, Google “may delete your content across Gmail, Drive and Photos.”

Cutting back a free and (in some cases) unlimited service is never a great move. Google argues that it needs to make these changes to “continue to provide everyone with a great storage experience and to keep pace with the growing demand.”

People now upload more than 4.3 million GB to Gmail, Drive and Photos every day. That’s not cheap, I’m sure, but Google also controls every aspect of this and must have had some internal projections of how this would evolve when it first set those policies.

To some degree, though, this was maybe to be expected. This isn’t the freewheeling Google of 2010 anymore, after all. We’ve already seen some indications that Google may reserve some advanced features for Google One subscribers in Photos, for example. This new move will obviously push more people to pay for Google One and more money from Google One means a little bit less dependence on advertising for the company.

#cloud-applications, #cloud-computing, #cloud-storage, #computing, #gmail, #google, #google-one, #google-photos, #online-storage, #storage, #tc, #web-applications, #world-wide-web

Microsoft announces its first Azure data center region in Taiwan

After announcing its latest data center region in Austria earlier this month and an expansion of its footprint in Brazil, Microsoft today unveiled its plans to open a new region in Taiwan. This new region will augment its existing presence in East Asia, where the company already runs data centers in China (operated by 21Vianet), Hong Kong, Japan and Korea. This new region will bring Microsoft’s total presence around the world to 66 cloud regions.

Similar to its recent expansion in Brazil, Microsoft also pledged to provide digital skilling for over 200,000 people in Taiwan by 2024 and it is growing its Taiwan Azure Hardware Systems and Infrastructure engineering group, too. That’s in addition to investments in its IoT and AI research efforts in Taiwan and the startup accelerator it runs there.

“Our new investment in Taiwan reflects our faith in its strong heritage of hardware and software integration,” said Jean-Phillippe Courtois, Executive Vice President and President, Microsoft Global Sales, Marketing and Operations. “With Taiwan’s expertise in hardware manufacturing and the new datacenter region, we look forward to greater transformation, advancing what is possible with 5G, AI and IoT capabilities spanning the intelligent cloud and intelligent edge.”

Image Credits: Microsoft

The new region will offer access to the core Microsoft Azure services, as well as support for Microsoft 365, Dynamics 365 and Power Platform. That’s pretty much Microsoft’s playbook for launching new regions these days. Like virtually all of Microsoft’s new data center regions, this one will also offer multiple availability zones.

#artificial-intelligence, #austria, #brazil, #china, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #internet-of-things, #iot, #japan, #microsoft, #microsoft-365, #microsoft-azure, #taiwan