Mirantis launches cloud-native data center-as-a-service software

Mirantis has been around the block, starting way back as an OpenStack startup, but a few years ago the company began to embrace cloud-native development technologies like containers, microservices and Kubernetes. Today, it announced Mirantis Flow, a fully managed set of open source services designed to help companies manage a cloud-native data center environment, whether that infrastructure lives on-premises or in a public cloud.

“We’re about delivering to customers an open source-based cloud-to-cloud experience in the data center, on the edge, and interoperable with public clouds,” Adrian Ionel, CEO and co-founder at Mirantis, explained.

He points out that the biggest companies in the world, hyperscalers like Facebook, Netflix and Apple, have all figured out how to operate in a hybrid cloud-native world, but most companies lack the resources of these large organizations. Mirantis Flow aims to put the same types of capabilities the big companies have within reach of these more modest organizations.

While the large infrastructure cloud vendors like Amazon, Microsoft and Google are designed to help with this very problem, Ionel says their platforms tend to be less open and more proprietary. That can lead to lock-in, which today’s large organizations are desperately trying to avoid.

“[The large infrastructure vendors] will lock you into their stack and their APIs. They’re not based on open source standards or technology, so you are locked in your single source, and most large enterprises today are pursuing a multi-cloud strategy. They want infrastructure flexibility,” he said. He added, “The idea here is to provide a completely open and flexible zero lock-in alternative to the [big infrastructure providers, but with the] same cloud experience and same pace of innovation.”

They do this by putting together a stack of open source solutions in a single service. “We provide virtualization on top as part of the same fabric. We also provide software-defined networking, software-defined storage and CI/CD technology with DevOps as a service on top of it, which enables companies to automate the entire software development pipeline,” he said.

As the company describes the service in a blog post published today, with “Mirantis Container Cloud, Mirantis OpenStack and Mirantis Kubernetes Engine, all workloads are available for migration to cloud native infrastructure, whether they are traditional virtual machine workloads or containerized workloads.”

For companies worried about migrating their VMware virtual machines to this solution, Ionel says early customers have already been able to move those VMs to the Mirantis platform. “This is a very, very simple conversion of the virtual machine from VMware standard to an open standard, and there is no reason why any application and any workload should not run on this infrastructure — and we’ve seen it over and over again in many many customers. So we don’t see any bottlenecks whatsoever for people to move right away,” he said.

It’s important to note that this solution does not include hardware. Customers bring their own hardware infrastructure, either physical or as a service, or use a Mirantis partner like Equinix. The service is available now for $15,000 per month or $180,000 annually, which includes 1,000 core/vCPU licenses for access to all products in the Mirantis software suite, support for 20 virtual machine (VM) migrations or application onboardings, and unlimited 24×7 support. The company does not charge any additional fees for control plane and management software licenses.

#cloud, #developer, #enterprise, #kubernetes, #mirantis, #open-source, #openstack, #tc

Solo.io integrates a cloud native API gateway and service mesh into its enterprise platform

Connecting to all the services and microservices that a modern cloud native enterprise application requires can be a complicated task. It’s an area that startup Solo.io is trying to disrupt with the new release of its Gloo Mesh Enterprise platform.

Based in Cambridge, Massachusetts, Solo has focused since its founding on a concept known as a service mesh. A service mesh provides an optimized, automated way to connect the different components of an application, often inside a Kubernetes cloud native environment.

Idit Levine, founder and CEO at Solo, explained to TechCrunch that she knew from the outset when she started the company in 2017 that it might take a few years until the market understood the concept of the service mesh and why it is needed. That’s why her company also built out an API gateway technology that helps developers connect APIs, which can be different data sources or services.

Until this week, the API and service mesh components of Solo’s Gloo Mesh Enterprise offering were separate technologies, with different configurations and control planes. That is now changing with the integration of both API and service mesh capabilities into a unified service. The integrated capabilities should make it easier to set up and configure all manner of services in the cloud that are running on Kubernetes.

Solo’s service mesh, known as Gloo Mesh, is based on the open source Istio project, which was created by Google. The API product is called Gloo Edge, which uses the open source Envoy project, originally created by ride-sharing company Lyft. Levine explained that her team has now used Istio’s plugin architecture to connect with Envoy in an optimized way.
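To make the service mesh idea concrete: in Istio, traffic policy is expressed declaratively and enforced by Envoy sidecar proxies. The manifest below is a hypothetical sketch (the service name, subsets and weights are invented, and it assumes a matching DestinationRule defines the v1/v2 subsets), showing a weighted canary split between two versions of a service:

```yaml
# Hypothetical Istio VirtualService: Envoy sidecars route 90% of
# traffic to the stable "v1" subset and 10% to a "v2" canary.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews          # in-mesh service name (invented for this sketch)
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting the weights rolls traffic between versions without touching application code, which is the kind of connectivity automation a service mesh provides.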

Levine noted that many users start off with an API gateway and then extend to using the service mesh. With the new Gloo Mesh Enterprise update, she expects customer adoption to accelerate further as Solo will be able to differentiate against rivals in both the service mesh and API management markets.

While the service mesh space is still emerging, with rivals such as Tetrate, API gateways are a more mature technology. There are a number of established vendors in the API management space, including Kong, which has raised $71 million in funding. Back in 2016, Google acquired API vendor Apigee for $625 million and has been expanding the technology in the years since, including the Apigee X platform announced in February of this year.

With the integration of Gloo Edge for API management into Gloo Mesh Enterprise, Solo isn’t quite covering all the bases for API technology yet. Gloo Edge supports REST-based APIs, which are by far the most common today, though it doesn’t support the emerging GraphQL API standard, which is becoming increasingly popular. Levine told us to “stay tuned” for a future GraphQL announcement from Solo.

Solo has raised a total of $36.5 million across two rounds, with an $11 million Series A in 2018 and a $23 million Series B announced in October 2020. The company’s investors include Redpoint and True Ventures.

#api, #cloud, #cloud-computing, #developer, #envoy, #kubernetes, #microservices, #service-mesh, #true-ventures

Elastic acquisition spree continues as it acquires security startup CMD

Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.

CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV. 

Elastic CEO and co-founder Shay Banon told TechCrunch that Elastic will be welcoming CMD’s employees, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both take executive roles within Elastic.

Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.

Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
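CMD’s own code isn’t public, so as a rough sketch of the visibility eBPF enables, the hypothetical bpftrace program below attaches to the `execve` tracepoint and prints every new process started on a Linux host (bpftrace compiles the script to eBPF and loads it into the kernel; running it requires root):

```bpftrace
// Print the parent process name and the binary path for every
// execve() call on the host. Attaches an eBPF program to a
// kernel tracepoint; no kernel module or reboot is required.
tracepoint:syscalls:sys_enter_execve
{
  printf("%s -> %s\n", comm, str(args->filename));
}
```

This is observation only; a runtime security product like CMD’s layers policy on top, for example alerting on or blocking specific commands.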

CMD isn’t the only startup that has been building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.

Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.

“We have a saying at Elastic — while you observe, why not protect?” Banon said. “With CMD, if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”

It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.

“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.

That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.

“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.

#canada, #cloud, #cloud-computing, #cloud-infrastructure, #cmd, #elasticsearch, #facebook, #kubernetes, #linux, #open-source-technology, #security, #shay-banon, #vancouver

Cisco beefing up app monitoring portfolio with acquisition of Epsagon for $500M

Cisco announced on Friday that it’s acquiring Israeli applications monitoring startup Epsagon at a price pegged at $500 million. The purchase gives Cisco a more modern microservices-focused component for its growing applications monitoring portfolio.

The Israeli business publication Globes reported it had gotten confirmation from Cisco that the deal was for $500 million, but Cisco would not confirm that price with TechCrunch.

The acquisition comes on top of a couple of other high-profile app monitoring deals, including AppDynamics, which the company bought in 2017 for $3.7 billion, and ThousandEyes, which it nabbed last year for $1 billion.

With Epsagon, the company is getting a way to monitor more modern applications built with containers and Kubernetes. Epsagon’s value proposition is a solution built from the ground up to monitor these kinds of workloads, giving users tracing and metrics, something that’s not always easy to do given the ephemeral nature of containers.

As Cisco’s Liz Centoni wrote in a blog post announcing the deal, Epsagon adds to the company’s concept of a full-stack offering in its applications monitoring portfolio. Instead of a bunch of different monitoring tools for different tasks, the company envisions a single suite that works together.

“Cisco’s approach to full-stack observability gives our customers the ability to move beyond just monitoring to a paradigm that delivers shared context across teams and enables our customers to deliver exceptional digital experiences, optimize for cost, security and performance and maximize digital business revenue,” Centoni wrote.

That experience point is particularly important because when an application isn’t working, it isn’t happening in a vacuum. It has a cascading impact across the company, possibly affecting the core business itself and certainly causing customer distress, which could put pressure on customer service to field complaints, and the site reliability team to fix it. In the worst case, it could result in customer loss and an injured reputation.

If the application monitoring system can act as an early warning system, it could help prevent the site or application from going down in the first place, and when it does go down, help track the root cause to get it up and running more quickly.

The challenge here for Cisco is incorporating Epsagon into the existing components of the application monitoring portfolio and delivering that unified monitoring experience without making it feel like a Frankenstein’s monster of a solution globbed together from the various pieces.

Epsagon launched in 2018 and has raised $30 million. According to a report in the Israeli publication Calcalist, the company was on the verge of a big Series B round with a valuation in the range of $200 million when it accepted this offer. It certainly seems to have given its early investors a good return. The deal is expected to close later this year.

#applications-performance-monitoring, #cisco, #containers, #enterprise, #epsagon, #exit, #fundings-exits, #israeli-startups, #kubernetes, #ma, #mergers-and-acquisitions, #startups, #tc

Extra Crunch roundup: 3 lies VCs tell, betting big on Kubernetes, NYC’s enterprise boom

Although older adults are one of the fastest-growing demographics, they’re quite underserved when it comes to consumer tech.

The global population of people older than 65 will reach 1.5 billion by 2050, and members of this cohort — who are leading longer, active lives — have plenty of money to spend.

Still, most startups persist in releasing products aimed at serving younger users, says Lawrence Kosick, co-founder of GetSetUp, an edtech company that targets 50+ learners.

“If you can provide a valuable, scalable service for the older adult market, there’s a lot of opportunity to drive growth through partnerships,” he notes.


Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.



Image Credits: Sukhinder Singh Cassidy

On Thursday, August 19, Managing Editor Danny Crichton will interview Sukhinder Singh Cassidy, author of “Choose Possibility,” on Twitter Spaces at 2 p.m. PDT/5 p.m. EDT/9 p.m. UTC.

Singh Cassidy, founder of premium talent marketplace theBoardlist, will discuss making the leap into entrepreneurship after leaving Google, her time as CEO-in-Residence at venture capital firm Accel Partners and the framework she’s developed for taking career risks.

They’ll take questions from the audience, so please add a reminder to your calendar to join the conversation.

Thanks very much for reading Extra Crunch this week! Have a great weekend.

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist

Dear Sophie: Can I hire an engineer whose green card is being sponsored by another company?


Image Credits: Bryce Durbin/TechCrunch

Dear Sophie,

I want to extend an offer to an engineer who has been working in the U.S. on an H-1B for almost five years. Her current employer is sponsoring her for an EB-2 green card, and our startup wants to hire her as a senior engineer.

What happens to her green card process? Can we take it over?

— Recruiting in Richmond

3 lies VCs tell ourselves about startup valuations


Image Credits: Dmitrii_Guzhanin / Getty Images

In a candid guest post, Scott Lenet, president of Touchdown Ventures, writes about the cognitive dissonance currently plaguing venture capital.

Yes, there’s an incredible amount of competition for deals, but there’s also a path to bringing soaring startup valuations back to earth.

For example, early investors have an inherent conflict of interest with later participants and many VCs are thirsty “logo hunters” who just want bragging rights.

At some point, “venture capitalists need to stop engaging in self-delusion about why a valuation that is too high might be OK,” writes Lenet.

‘The tortoise and the hare’ story is playing out right now in VC


Image Credits: Getty Images under a GK Hart/Vikki Hart license.

Aesop’s fable about the determined tortoise who defeated an arrogant hare has many interpretations, e.g., the value of perseverance, the virtue of taking on bullies, how an outsized ego can undermine natural talent.

In the case of venture capital, the allegory is relevant because a slow, steady and more personal approach generates better outcomes, says Marc Schröder, managing partner of MGV.

“We simply must take the time to get to know founders.”

What’s driving the global surge in retail media spending?


Image Credits: Getty Images under a jayk7 license.

As the pandemic changed consumer behavior and regulations began to reshape digital marketing tools, advertisers are turning to retail media.

Using the reams of data collected at the individual and aggregate level, retail media produce high-margin revenue streams. “And like most things, there is a bad, a good and a much better way of doing things,” advises Cynthia Luo, head of marketing at e-commerce marketing stack Epsilo.

New York City’s enterprise tech startups could be heading for a superheated exit wave

“We lied when we said that The Exchange was done covering 2021 venture capital performance,” Anna Heim and Alex Wilhelm admit.

Yesterday, they reviewed a detailed report from NYC-based VC group Work-Bench on the city’s enterprise tech startups.

“New York City’s enterprise footprint is now large enough that it must be considered a leading market for the startup varietal,” Anna and Alex conclude, “making its results a bellwether to some degree.”

“And if New York City is laying the groundwork for a huge wave of unicorn exits in the coming four to eight quarters, we should expect to see something similar in other enterprise markets around the world.”

Disaster recovery can be an effective way to ease into the cloud


Image Credits: PM Images / Getty Images

Given the rapid pace of digital transformation, nearly every business will eventually migrate some — or most — aspects of their operations to the cloud.

Before making the wholesale shift to digital, companies can start getting comfortable by using disaster recovery as a service (DRaaS). Even a partially managed DRaaS can make an organization more resilient and lighten the load for its IT team.

It’s also a savvy way for tech leaders to get shot-callers inside their companies on board with the cloud.

Regulations can define the best places to build and invest


Image Credits: PeopleImages / Getty Images

“The decisions of government, the broader legal system and its combined level of scrutiny toward a particular subject” can affect market timing and the durability of an idea, Noorjit Sidhu, an early-stage investor at Plug & Play Ventures, writes in a guest column.

There are three areas currently facing regulatory scrutiny that have the potential to “provide outsized returns,” Sidhu writes: taxes, telemedicine and climate.

VCs unfazed by Chinese regulatory shakeups (so far)

“China’s technology scene has been in the news for all the wrong reasons in recent months,” Anna Heim and Alex Wilhelm write about the Chinese government’s crackdown on a host of technology companies.

“The result of the government fusillade against some of the best-known companies in China was falling share prices,” they write.

But has it affected the venture capital market? SoftBank this week said it would pause investments in China, but the numbers through Q2 indicate China is steadier than Alex and Anna expected.

Perform a quality of earnings analysis to make the most of M&A


Image Credits: Westend61 / Getty Images

If you’re a startup founder, odds are, at some point, you’ll raise a Series A (and B and C and D, hopefully), perform a strategic acquisition, and maybe even sell your company.

When those things occur, you’ll need to know how to perform a quality of earnings (QofE) analysis to maximize value, Pierre-Alexandre Heurtebize, investment and M&A director at HoriZen Capital, writes in a guest column.

He walks through a framework for thinking about and organizing a QofE for “every M&A and private equity transaction you may be part of.”

VCs are betting big on Kubernetes: Here are 5 reasons why


Image Credits: Getty Images under an akinbostanci license.

“What was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations,” Ben Ofiri, the co-founder and CEO of the Kubernetes troubleshooting platform Komodor, writes of Kubernetes, which he calls “the new Linux.”

“This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.”

#draas, #ec-roundup, #entrepreneurship, #extra-crunch-roundup, #green-card, #kubernetes, #startups, #tc, #venture-capital

VCs are betting big on Kubernetes: Here are 5 reasons why

I worked at Google for six years. Internally, you have no choice — you must use Kubernetes if you are deploying microservices and containers (it’s actually not called Kubernetes inside of Google; it’s called Borg). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.
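“Orchestrated” here means declarative: you tell Kubernetes the desired state and it continuously reconciles reality to match. A minimal Deployment manifest (the image and names are illustrative, not from the article) looks like this:

```yaml
# Ask Kubernetes for three replicas of a containerized web server.
# The cluster schedules them across nodes and restarts or replaces
# any replica that crashes, with no operator intervention.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```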

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending to become the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.


Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides as much resource isolation as a traditional hypervisor, but with considerable opportunities to improve agility, efficiency and speed.
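The packaging step Docker popularized is itself just a short declarative file. This hypothetical Dockerfile (the app layout and filenames are invented for illustration) bundles a Node.js service and its dependencies into one portable image that runs the same way on any cloud:

```dockerfile
# Build a self-contained image for a hypothetical Node.js app.
FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev     # install only production dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```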

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #column, #databricks, #ec-cloud-and-enterprise-infrastructure, #ec-column, #ec-enterprise-applications, #enterprise, #google, #kubernetes, #linux, #microservices, #new-relic, #openshift, #rapid7, #red-hat, #startups, #ubuntu, #web-services

Platform-as-a-service startup Porter aims to become go-to platform for deploying, managing cloud-based apps

By the time Porter co-founders Trevor Shim and Justin Rhee decided to build a company around DevOps, the pair were well versed in doing remote development on Kubernetes. And, like other users, they were consistently getting burned by the technology.

Rhee told TechCrunch that for all of its benefits, the technology was there, but users were having to manage the complexity of hosting solutions as well as incur the costs associated with a big DevOps team.

They decided to build out a solution externally and went through Y Combinator’s Summer 2020 batch, where they found other startup companies trying to do the same.

Today, Porter announced $1.5 million in seed funding from Venrock, Translink Capital, Soma Capital and several angel investors. Its goal is to build a platform-as-a-service that any team can use to manage applications in its own cloud, essentially delivering the full flexibility of Kubernetes through a Heroku-like experience.

Why Heroku? It is the hosting platform that developers are used to, and not just at small companies but at later-stage companies as well. When those companies want to move to Amazon Web Services, Google Cloud or DigitalOcean, Porter will be that bridge, Shim added.

However, while Heroku is still popular, the pair say companies feel the platform is getting outdated because it is standing still technology-wise. Each year, companies move on from the platform due to technical limitations and cost, Rhee said.

A big part of the bet Porter is taking is not charging users for hosting; its pricing is that of a pure SaaS product, he said. The company isn’t looking to be a reseller, so customers use their own cloud. Porter provides the automation, and users can pay with their AWS and GCP credits, which gives them flexibility.

A common pattern is a move into Kubernetes, but “the zinger we talk about” is that if Heroku were built in 2021, it would have been built on Kubernetes, Shim added.

“So we see ourselves as a successor’s successor,” he said.

To be that bridge, the company will use the new funding to increase its engineering bandwidth with the goal of “becoming the de facto standard for all startups,” Shim said.

Porter’s platform went live in February, and in six months became the sixth-fastest-growing open source platform download on GitHub, said Ethan Batraski, partner at Venrock. He met the company through YC and was “super impressed with Rhee’s and Shim’s vision.”

“Heroku has 100,000 developers, but I believe it has stagnated,” Batraski added. “Porter already has 100 startups on its platform. The growth they’ve seen — four or five times — is what you want to see at this stage.”

His firm has long focused on data infrastructure and is seeing the stack get more complex. “At the same time, more developers are wanting to build out an app over a week, and scale it to millions of users, but that takes people resources. With Kubernetes it can turn everyone into an expert developer without them knowing it,” he added.



#apps, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #developer, #ethan-batraski, #funding, #heroku, #justin-rhee, #kubernetes, #recent-funding, #saas, #soma-capital, #startups, #tc, #translink-capital, #trevor-shim, #venrock, #y-combinator

The end of open source?

Several weeks ago, the Linux community was rocked by the disturbing news that University of Minnesota researchers had developed (but, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel — the idea being to distribute hard-to-detect behaviors, meaningless in themselves, that could later be aligned by attackers to manifest vulnerabilities.

This was quickly followed by the — in some senses, equally disturbing — announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.

Though exploit development and disclosure is often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.

Equally certain, maintainers and project governance are duty bound to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least some of the point — that this was research rather than pure malice, and that it casts light on a kind of software (and organizational) vulnerability that begs for technical and systemic mitigation.

Projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models.

I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale, complexity and free and open-source software’s (FOSS) increasingly critical importance to every kind of human undertaking. Let’s look at that complex of problems:

  • The biggest open-source projects now present big targets.
  • Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
  • They are evolving to commodify each other. For example, it’s becoming increasingly hard to state, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note of this and have begun reorganizing around “full-stack” portfolios and narratives.
  • In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS and other metrics seem in decline.
  • OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see benefit from participation.

Meanwhile, the threat landscape keeps evolving:

  • Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
  • Attacks are more financially, economically and politically profitable than ever.
  • Users are more vulnerable, exposed to more vectors than ever before.
  • The increasing use of public clouds creates new layers of technical and organizational monocultures that may enable and justify attacks.
  • Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
  • Software componentization enables new kinds of supply-chain attacks.
  • Meanwhile, all this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses and evolve to depend on cloud vendors and other entities to do the hard work of security.
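The supply-chain point, in particular, has a well-known (if partial) technical mitigation: pin a cryptographic digest for every third-party artifact and verify it before the artifact enters the build. A minimal sketch in Python (the artifact name and pinned digest here are hypothetical):

```python
import hashlib

# Hypothetical lockfile: artifact name -> pinned SHA-256 digest.
PINNED = {
    "libfoo-1.2.3.tar.gz": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's digest matches the pinned value."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown artifacts are rejected outright
    return hashlib.sha256(data).hexdigest() == expected
```

Anything whose digest doesn’t match, or that isn’t pinned at all, is rejected; the weakness, of course, is that the pins themselves must come from a trusted source.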

The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.

This was a serious betrayal, effectively by “insiders” of a trust system that’s historically worked very well to produce robust and secure kernel releases. The abuse of trust itself changes the game, and the implied follow-on requirement — to bolster mutual human trust with systematic mitigations — looms large.

But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix, after all). And the threat is asymmetrical: As the classic line goes — blue team needs to protect against everything, red team only needs to succeed once.

I see a few opportunities for remediation:

  • Limit the spread of monocultures. Efforts like AlmaLinux and AWS’s Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
  • Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness, and not despite it, but within many communities, this may require a culture change for existing contributors.
  • Accelerate commodification by simplifying the stack and verifying the components. Push appropriate responsibility for security up into the application layers.

Basically, what I’m advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels.

Regardless, we need to ensure that both companies and individuals provide the resources open source needs to continue.


Microsoft brings more of its Azure services to any Kubernetes cluster

At its Build developer conference today, Microsoft announced a new set of Azure services (in preview) that businesses can now run on virtually any CNCF-conformant Kubernetes cluster with the help of its Azure Arc multi-cloud service.

Azure Arc, similar to tools like Google’s Anthos or AWS’s upcoming EKS Anywhere, provides businesses with a single tool to manage their container clusters across clouds and on-premises data centers. Since its launch back in late 2019, Arc has also enabled some of the core Azure services to run directly in these clusters, though the early focus was on a small set of data services, with the team later adding some machine learning tools as well. With today’s update, the company is greatly expanding the set of containerized Azure services that work with Arc.

These new services include Azure App Service for building and managing web apps and APIs, Azure Functions for event-driven programming, Azure Logic Apps for building automated workflows, Azure Event Grid for event routing, and Azure API Management for… you guessed it… managing internal and external APIs.

“The app services are now Azure Arc-enabled, which means customers can deploy Web Apps, Functions, API gateways, Logic Apps and Event Grid services on pre-provisioned Kubernetes clusters,” Microsoft explained in its annual “Book of News” for this year’s Build. “This takes advantage of features including deployment slots for A/B testing, storage queue triggers and out-of-box connectors from the app services, regardless of run location. With these portable turnkey services, customers can save time building apps, then manage them consistently across hybrid and multicloud environments using Azure Arc.”



Styra, the startup behind Open Policy Agent, nabs $40M to expand its cloud-native authorization tools

As cloud-native apps continue to become increasingly central to how organizations operate, a startup founded by the creators of a popular open-source tool to manage authorization for cloud-native application environments is announcing some funding to expand its efforts at commercializing the opportunity.

Styra, the startup behind Open Policy Agent, has picked up $40 million in a Series B round of funding led by Battery Ventures. Also participating are previous backers A. Capital, Unusual Ventures and Accel, and new backers Capital One Ventures, Citi Ventures and Cisco Investments. Styra has disclosed that Capital One is also a customer, along with e-commerce site Zalando and the European Patent Office.

Styra is sitting on the classic opportunity of open source technology: scale and demand.

OPA — which can be used across Kubernetes, containerized and other environments — now has racked up some 75 million downloads and is adding some 1 million downloads weekly, with Netflix, Capital One, Atlassian and Pinterest among those that are using OPA for internal authorization purposes. The fact that OPA is open source is also important:

“Developers are at the top of the food chain right now,” CEO Bill Mann said in an interview, “They choose which technology on which to build the framework, and they want what satisfies their requirements, and that is open source. It’s a foundational change: if it isn’t open source it won’t pass the test.”

But while some of those adopting OPA have hefty engineering teams of their own to customize how it is used, the sheer number of downloads (and the potential active users behind them) speaks to the opportunity for a company to build tools that manage and customize OPA for specific use cases, for those who lack the resources (or appetite) to build and scale custom implementations themselves.

As with many of the enterprise startups getting funded at the moment, Styra has proven itself in particular over the last year, with the switch to remote work, workloads being managed across a number of environments, and the ever-persistent need for better security around what people can and should not be using. Authorization is a particularly acute issue when considering the many access points that need to be monitored: as networks continue to grow across multiple hubs and applications, having a single authorization tool for the whole stack becomes even more important.

Styra said that some of the funding will be used to continue evolving its product, specifically by creating better and more efficient ways to apply authorization policies by way of code; and by bringing in more partners to expand the scope of what can be covered by its technology.

“We are extremely impressed with the Styra team and the progress they’ve made in this dynamic market to date,” said Dharmesh Thakker, a general partner at Battery Ventures. “Everyone who is moving to cloud, and adopting containerized applications, needs Styra for authorization—and in the light of today’s new, remote-first work environment, every enterprise is now moving to the cloud.” Thakker is joining the board with this round.


Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed as VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 


Esri brings its flagship ArcGIS platform to Kubernetes

Esri, the geographic information system (GIS), mapping and spatial analytics company, is hosting its (virtual) developer summit today. Unsurprisingly, it is making a couple of major announcements at the event that range from a new design system and improved JavaScript APIs to support for running ArcGIS Enterprise in containers on Kubernetes.

The Kubernetes project was a major undertaking for the company, Esri Product Managers Trevor Seaton and Philip Heede told me. Traditionally, like so many similar products, ArcGIS was architected to be installed on physical boxes, virtual machines or cloud-hosted VMs. And while it doesn’t really matter to end-users where the software runs, containerizing the application means that it is far easier for businesses to scale their systems up or down as needed.

Esri ArcGIS Enterprise on Kubernetes deployment


“We have a lot of customers — especially some of the larger customers — that run very complex questions,” Seaton explained. “And sometimes it’s unpredictable. They might be responding to seasonal events or business events or economic events, and they need to understand not only what’s going on in the world, but also respond to their many users from outside the organization coming in and asking questions of the systems that they put in place using ArcGIS. And that unpredictable demand is one of the key benefits of Kubernetes.”

Deploying Esri ArcGIS Enterprise on Kubernetes


The team could have chosen to go the easy route and put a wrapper around its existing tools to containerize them and call it a day, but as Seaton noted, Esri used this opportunity to re-architect its tools and break them down into microservices.

“It’s taken us a while because we took three or four big applications that together make up [ArcGIS] Enterprise,” he said. “And we broke those apart into a much larger set of microservices. That allows us to containerize specific services and add a lot of high availability and resilience to the system without adding a lot of complexity for the administrators — in fact, we’re reducing the complexity as we do that and all of that gets installed in one single deployment script.”

While Kubernetes simplifies a lot of the management experience, a lot of companies that use ArcGIS aren’t yet familiar with it. And as Seaton and Heede noted, the company isn’t forcing anyone onto this platform. It will continue to support Windows and Linux just like before. Heede also stressed that it’s still unusual — especially in this industry — to see a complex, fully integrated system like ArcGIS being delivered in the form of microservices and multiple containers that its customers then run on their own infrastructure.


In addition to the Kubernetes announcement, Esri also today announced new JavaScript APIs that make it easier for developers to create applications that bring together Esri’s server-side technology and the scalability of doing much of the analysis on the client-side. Back in the day, Esri would support tools like Microsoft’s Silverlight and Adobe/Apache Flex for building rich web-based applications. “Now, we’re really focusing on a single web development technology and the toolset around that,” Esri product manager Julie Powell told me.

A bit later this month, Esri also plans to launch its new design system to make it easier and faster for developers to create clean and consistent user interfaces. This design system will launch April 22, but the company already provided a bit of a teaser today. As Powell noted, the challenge for Esri is that its design system has to help the company’s partners to put their own style and branding on top of the maps and data they get from the ArcGIS ecosystem.

 


Aqua Security raises $135M at a $1B valuation for its cloud native security service

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.


What set Aqua apart, Davidoff argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by CheckPoint and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for GitLab’s container scanning and for the CNCF’s Harbor registry and Artifact Hub, for example.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 


Microsoft’s Azure Arc multi-cloud platform now supports machine learning workloads

With Azure Arc, Microsoft offers a service that allows its customers to run Azure in any Kubernetes environment, no matter where that container cluster is hosted. From Day One, Arc supported a wide range of use cases, but one feature that was sorely missing when it first launched was support for machine learning (ML). But one of the advantages of a tool like Arc is that it allows enterprises to run their workloads close to their data and today, that often means using that data to train ML models.

At its Ignite conference, Microsoft today announced that it is bringing exactly this capability to Azure Arc with the addition of Azure Machine Learning to the set of Arc-enabled data services.

“By extending machine learning capabilities to hybrid and multicloud environments, customers can run training models where the data lives while leveraging existing infrastructure investments. This reduces data movement and network latency, while meeting security and compliance requirements,” Azure GM Arpan Shah writes in today’s announcement.

This new capability is now available to Arc customers.

In addition to bringing this new machine learning capability to Arc, Microsoft also today announced that Azure Arc-enabled Kubernetes, which allows users to deploy standard Kubernetes configurations to their clusters anywhere, is now generally available.

Also new in this world of hybrid Azure services is support for Azure Kubernetes Service on Azure Stack HCI. That’s a mouthful, but Azure Stack HCI is Microsoft’s platform for running Azure on a set of standardized, hyperconverged hardware inside a customer’s datacenter. The idea pre-dates Azure Arc, but it remains a plausible alternative for enterprises who want to run Azure in their own data center and has continued support from vendors like Dell, Lenovo, HPE, Fujitsu and DataOn.

On the open-source side of Arc, Microsoft also today stressed that Arc is built to work with any Kubernetes distribution that is conformant to the standard of the Cloud Native Computing Foundation (CNCF) and that it has worked with Red Hat, Canonical, Rancher and now Nutanix to test and validate their Kubernetes implementations on Azure Arc.


Why F5 spent $2.2B on 3 companies to focus on cloud native applications

It’s essential for older companies to recognize changes in the marketplace or face the brutal reality of being left in the dust. F5 is an old-school company that launched back in the 90s, yet has been able to transform a number of times in its history to avoid major disruption. Over the last two years, the company has continued that process of redefining itself, this time using a trio of acquisitions — NGINX, Shape Security and Volterra — totaling $2.2 billion to push in a new direction.

While F5 has been associated with applications management for some time, it recognized that the way companies developed and managed applications was changing in a big way with the shift to Kubernetes, microservices and containerization. At the same time, applications have been increasingly moving to the edge, closer to the user. The company understood that it needed to up its game in these areas if it was going to keep up with customers.

Taken separately, it would be easy to miss that there was a game plan behind the three acquisitions, but together they show a company with a clear opinion of where it wants to go next. We spoke to F5 president and CEO François Locoh-Donou to learn why he bought these companies and to find the method in the madness of his company’s acquisition spree.

Looking back, looking forward

F5, which was founded in 1996, has found itself at a number of crossroads in its long history, times where it needed to reassess its position in the market. A few years ago it found itself at one such juncture. The company had successfully navigated the shift from physical appliance to virtual, and from data center to cloud. But it also saw the shift to cloud native on the horizon and it knew it had to be there to survive and thrive long term.

“We moved from just keeping applications performing to actually keeping them performing and secure. Over the years, we have become an application delivery and security company. And that’s really how F5 grew over the last 15 years,” said Locoh-Donou.

Today the company has over 18,000 customers centered in enterprise verticals like financial services, healthcare, government, technology and telecom. He says that the focus of the company has always been on applications and how to deliver and secure them, but as they looked ahead, they wanted to be able to do that in a modern context, and that’s where the acquisitions came into play.

As F5 saw it, applications were becoming central to their customers’ success and their IT departments were expending too many resources connecting applications to the cloud and keeping them secure. So part of the goal for these three acquisitions was to bring a level of automation to this whole process of managing modern applications.

“Our view is you fast forward five or 10 years, we are going to move to a world where applications will become adaptive, which essentially means that we are going to bring automation to the security and delivery and performance of applications, so that a lot of that stuff gets done in a more native and automated way,” Locoh-Donou said.

As part of this shift, the company saw customers increasingly using microservices architecture in their applications. This means instead of delivering a large monolithic application, developers were delivering them in smaller pieces inside containers, making it easier to manage, deploy and update.

At the same time, it saw companies needing a new way to secure these applications as they shifted from data center to cloud to the edge. And finally, that shift to the edge would require a new way to manage applications.


Google Cloud puts its Kubernetes Engine on autopilot

Google Cloud today announced a new operating mode for its Kubernetes Engine (GKE) that turns over the management of much of the day-to-day operations of a container cluster to Google’s own engineers and automated tools. With Autopilot, as the new mode is called, Google manages all of the Day 2 operations of managing these clusters and their nodes, all while implementing best practices for operating and securing them.

This new mode augments the existing GKE experience, which already managed most of the infrastructure of standing up a cluster. This ‘standard’ experience, as Google Cloud now calls it, is still available and allows users to customize their configurations to their heart’s content and manually provision and manage their node infrastructure.

Drew Bradstock, the Group Product Manager for GKE, told me that the idea behind Autopilot was to combine the tools Google already had for GKE with the expertise of its SRE teams, who know how to run these clusters in production — and have long done so inside the company.

“Autopilot stitches together auto-scaling, auto-upgrades, maintenance, Day 2 operations and — just as importantly — does it in a hardened fashion,” Bradstock noted. “[…] What this has allowed our initial customers to do is very quickly offer a better environment for developers or dev and test, as well as production, because they can go from Day Zero and the end of that five-minute cluster creation time, and actually have Day 2 done as well.”


From a developer’s perspective, nothing really changes here, but this new mode does free up teams to focus on the actual workloads and less on managing Kubernetes clusters. With Autopilot, businesses still get the benefits of Kubernetes, but without all of the routine management and maintenance work that comes with that. And that’s definitely a trend we’ve been seeing as the Kubernetes ecosystem has evolved. Few companies, after all, see their ability to effectively manage Kubernetes as their real competitive differentiator.

All of that comes at a price, of course: a flat fee of $0.10 per cluster per hour (there’s also a free GKE tier that provides $74.40 in monthly billing credits), plus the usual fees for the resources your clusters consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods in multiple zones.
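The arithmetic behind that free tier is worth spelling out. A back-of-the-envelope sketch (the function and its assumptions, a 31-day month and continuously running clusters, are mine, not Google’s):

```python
# GKE Autopilot cluster-management fee: $0.10 (10 cents) per cluster per hour.
FEE_CENTS_PER_HOUR = 10
HOURS_PER_31_DAY_MONTH = 31 * 24  # 744 hours

def monthly_fee_cents(clusters: int = 1) -> int:
    """Flat management fee for a 31-day month, in cents (resource costs excluded)."""
    return FEE_CENTS_PER_HOUR * HOURS_PER_31_DAY_MONTH * clusters
```

For one cluster this comes to 7,440 cents, i.e. $74.40, which matches the free-tier credit: the credit effectively waives the management fee for a single Autopilot cluster.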

Autopilot for GKE joins a set of container-centric products in the Google Cloud portfolio that also include Anthos for running in multi-cloud environments and Cloud Run, Google’s serverless offering. “[Autopilot] is really [about] bringing the automation aspects in GKE we have for running on Google Cloud, and bringing it all together in an easy-to-use package, so that if you’re newer to Kubernetes, or you’ve got a very large fleet, it drastically reduces the amount of time, operations and even compute you need to use,” Bradstock explained.

And while GKE is a key part of Anthos, that service is more about bringing Google’s config management, service mesh and other tools to an enterprise’s own data center. Autopilot for GKE is, at least for now, only available on Google Cloud.

“On the serverless side, Cloud Run is really, really great for an opinionated development experience,” Bradstock added. “So you can get going really fast if you want an app to be able to go from zero to 1000 and back to zero — and not worry about anything at all and have it managed entirely by Google. That’s highly valuable and ideal for a lot of development. Autopilot is more about simplifying the entire platform people work on when they want to leverage the Kubernetes ecosystem, be a lot more in control and have a whole bunch of apps running within one environment.”

 


Container security acquisitions increase as companies accelerate shift to cloud

Last week, another container security startup came off the board when Rapid7 bought Alcide for $50 million. The purchase is part of a broader trend in which larger companies are buying up cloud-native security startups at a rapid clip. But why is there so much M&A action in this space now?

Palo Alto Networks was first to the punch, grabbing Twistlock for $410 million in May 2019. VMware struck a year later, snaring Octarine. Cisco followed with PortShift in October and Red Hat snagged StackRox last month before the Rapid7 response last week.

This is partly because many companies chose to become cloud-native more quickly during the pandemic. This has created a sharper focus on security, but it would be a mistake to attribute the acquisition wave strictly to COVID-19, as companies were shifting in this direction pre-pandemic.

It’s also important to note that security startups that cover a niche like container security often reach market saturation faster than companies with broader coverage because customers often want to consolidate on a single platform, rather than dealing with a fragmented set of vendors and figuring out how to make them all work together.

Containers provide a way to deliver software by breaking down a large application into discrete pieces known as microservices. These are packaged and delivered in containers. Kubernetes provides the orchestration layer, determining when to deliver the container and when to shut it down.

This level of automation presents a security challenge: making sure the containers are configured correctly and aren’t vulnerable to hackers. With myriad configuration switches, this isn’t easy, and it’s made even more challenging by the ephemeral nature of the containers themselves.

Yoav Leitersdorf, managing partner at YL Ventures, an Israeli investment firm specializing in security startups, says these challenges are driving interest in container startups from large companies. “The acquisitions we are seeing now are filling gaps in the portfolio of security capabilities offered by the larger companies,” he said.

#cloud, #cloud-native, #containers, #ec-cloud-and-enterprise-infrastructure, #ec-news-analysis, #enterprise, #kubernetes, #ma, #mergers-and-acquisitions, #security, #tc

Rapid7 acquires Kubernetes security startup Alcide for $50M

Rapid7, the Boston-based security operations company, has been making moves into the cloud recently and this morning it announced that it has acquired Kubernetes security startup Alcide for $50 million.

As the world shifts to cloud native, using Kubernetes to manage containerized workloads, it’s tricky to ensure that those containers are configured correctly to keep them safe. What’s more, Kubernetes is designed to automate the management of containers, taking humans out of the loop and making it even more imperative that security protocols are applied in an automated fashion as well.
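To make the configuration point concrete, here is a toy sketch (not Alcide’s actual product, just an illustration of the category) of the kind of automated check such tools run against a pod spec:

```python
# Toy illustration of automated container-configuration checks: scan a pod
# spec (a plain dict mirroring the Kubernetes YAML shape) for two common
# misconfigurations that container-security tools typically flag.
def audit_pod(spec):
    findings = []
    for container in spec.get("containers", []):
        ctx = container.get("securityContext", {})
        if ctx.get("privileged"):
            findings.append(f"{container['name']}: runs privileged")
        if ctx.get("runAsUser") == 0:
            findings.append(f"{container['name']}: runs as root (uid 0)")
    return findings

pod = {"containers": [
    {"name": "web", "securityContext": {"privileged": True}},
    {"name": "sidecar", "securityContext": {"runAsUser": 0}},
]}
print(audit_pod(pod))
```

In practice these checks run continuously and automatically, because a pod flagged today may be gone, and replaced by an identically misconfigured one, within minutes.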

Brian Johnson, SVP of Cloud Security at Rapid7, says that this requires a specialized kind of security product, and that’s why his company is buying Alcide. “Companies operating in the cloud need to be able to identify and respond to risk in real time, and looking at cloud infrastructure or containers independently simply doesn’t provide enough context to truly understand where you are vulnerable,” he explained.

“With the addition of Alcide, we can help organizations obtain comprehensive, unified visibility across their entire cloud infrastructure and cloud native applications so that they can continue to rapidly innovate while still remaining secure,” he added.

Today’s purchase builds on the company’s acquisition of DivvyCloud last April for $145 million. That’s almost $200 million for two companies that allow Rapid7 to help protect cloud workloads in a fairly broad way.

It’s also part of an industry trend with a number of Kubernetes security startups coming off the board in the last year as bigger companies look to enhance their container security chops by buying the talent and technology. This includes VMware nabbing Octarine last May, Cisco getting PortShift in October and Red Hat buying StackRox last month.

Alcide was founded in 2016 in Tel Aviv, part of the active Israeli security startup scene. It raised about $12 million along the way, according to Crunchbase data.

#cloud, #cloud-native, #container-security, #enterprise, #exit, #fundings-exits, #kubernetes, #ma, #rapid7, #security, #startups, #tc

Run:AI raises $30M Series B for its AI compute platform

Run:AI, a Tel Aviv-based company that helps businesses orchestrate and optimize their AI compute infrastructure, today announced that it has raised a $30 million Series B round. The new round was led by Insight Partners, with participation from existing investors TLV Partners and S Capital. This brings the company’s total funding to date to $43 million.

At the core of Run:AI’s platform is the ability to effectively virtualize and orchestrate AI workloads on top of its Kubernetes-based scheduler. GPUs have traditionally been hard to virtualize, so even as demand for training AI models has increased, a lot of physical GPUs often sat idle for long periods because it was hard to dynamically allocate them between projects.

Image Credits: Run.AI

The promise behind Run:AI’s platform is that it allows its users to abstract away all of the AI infrastructure and pool all of their GPU resources — no matter whether in the cloud or on-premises. This also makes it easier for businesses to share these resources between users and teams. In the process, IT teams also get better insights into how their compute resources are being used.

“Every enterprise is either already rearchitecting themselves to be built around learning systems powered by AI, or they should be,” said Lonne Jaffe, managing director at Insight Partners and now a board member at Run:AI. “Just as virtualization and then container technology transformed CPU-based workloads over the last decades, Run:AI is bringing orchestration and virtualization technology to AI chipsets such as GPUs, dramatically accelerating both AI training and inference. The system also future-proofs deep learning workloads, allowing them to inherit the power of the latest hardware with less rework. In Run:AI, we’ve found disruptive technology, an experienced team and a SaaS-based market strategy that will help enterprises deploy the AI they’ll need to stay competitive.”

Run:AI says that it is currently working with customers in a wide variety of industries, including automotive, finance, defense, manufacturing and healthcare. These customers, the company says, are seeing their GPU utilization increase from 25% to 75% on average.

“The new funds enable Run:AI to grow the company in two important areas: first, to triple the size of our development team this year,” the company’s CEO Omri Geller told me. “We have an aggressive roadmap for building out the truly innovative parts of our product vision — particularly around virtualizing AI workloads — a bigger team will help speed up development in this area. Second, a round this size enables us to quickly expand sales and marketing to additional industries and markets.”

#artificial-intelligence, #cloud, #computing, #developer, #enterprise, #finance, #gpu, #hardware-acceleration, #insight-partners, #kubernetes, #lonne-jaffe, #recent-funding, #run-ai, #s-capital, #startups, #tc, #technology, #tel-aviv, #tlv-partners

Red Hat is acquiring container security company StackRox

Red Hat today announced that it’s acquiring container security startup StackRox. The companies did not share the purchase price.

Red Hat, which is perhaps best known for its enterprise Linux products, has been making the shift to the cloud in recent years. IBM purchased the company in 2018 for a hefty $34 billion and has been leveraging that acquisition as part of a shift to a hybrid cloud strategy under CEO Arvind Krishna.

The acquisition fits nicely with Red Hat OpenShift, its container platform, but the company says it will continue to support StackRox usage on other platforms, including AWS, Azure and Google Cloud Platform. This approach is consistent with IBM’s strategy of supporting multi-cloud, hybrid environments.

In fact, Red Hat president and CEO Paul Cormier sees the two companies working together well. “Red Hat adds StackRox’s Kubernetes-native capabilities to OpenShift’s layered security approach, furthering our mission to bring product-ready open innovation to every organization across the open hybrid cloud across IT footprints,” he said in a statement.

StackRox CEO Kamal Shah, writing in a company blog post announcing the acquisition, explained that the company made a bet a couple of years ago on Kubernetes and it has paid off. “Over two and a half years ago, we made a strategic decision to focus exclusively on Kubernetes and pivoted our entire product to be Kubernetes-native. While this seems obvious today, it wasn’t so then. Fast forward to 2020 and Kubernetes has emerged as the de facto operating system for cloud-native applications and hybrid cloud environments,” Shah wrote.

Shah sees the purchase as a way to expand the company and the road map more quickly using the resources of Red Hat (and IBM), a typical argument from CEOs of smaller acquired companies. But the trick is always finding a way to stay relevant inside such a large organization.

StackRox’s acquisition is part of some consolidation we have been seeing in the Kubernetes space in general and the security space more specifically. That includes Palo Alto Networks acquiring competitor Twistlock for $410 million in 2019. Another competitor, Aqua Security, which has raised $130 million, remains independent.

StackRox was founded in 2014 and raised over $65 million, according to Crunchbase data. Investors included Menlo Ventures, Redpoint and Sequoia Capital. The deal is expected to close this quarter subject to normal regulatory scrutiny.

#cloud, #container-security, #enterprise, #exit, #fundings-exits, #ibm, #kubernetes, #ma, #mergers-and-acquisitions, #red-hat, #security, #stackrox, #startups

Google grants $3 million to the CNCF to help it run the Kubernetes infrastructure

Back in 2018, Google announced that it would provide $9 million in Google Cloud Platform credits — divided over three years — to the Cloud Native Computing Foundation (CNCF) to help it run the development and distribution infrastructure for the Kubernetes project. Previously, Google owned and managed those resources for the community. Today, the two organizations announced that Google is adding on to this grant with another $3 million annual donation to the CNCF to “help ensure the long-term health, quality and stability of Kubernetes and its ecosystem.”

As Google notes, the funds will go to the testing and infrastructure of the Kubernetes project, which currently sees over 2,300 monthly pull requests that trigger about 400,000 integration test runs, all of which use about 300,000 core hours on GCP.
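A quick division over those round numbers (so the results are only rough averages) puts the scale of that CI load in perspective:

```python
# Rough averages derived from the article's figures on the
# Kubernetes project's monthly CI load on GCP.
monthly_prs = 2_300
test_runs = 400_000   # integration test runs per month
core_hours = 300_000  # GCP core hours per month

print(f"~{test_runs / monthly_prs:.0f} integration runs per pull request")
print(f"{core_hours / test_runs:.2f} core hours per test run")
print(f"~{core_hours / (24 * 30):.0f} cores kept busy around the clock")
```

That last figure, the equivalent of roughly 400-plus cores running continuously, is what the donated credits have to cover.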

“I’m really happy that we’re able to continue to make this investment,” Aparna Sinha, a director of product management at Google and the chairperson of the CNCF governing board, told me. “We know that it is extremely important for the long-term health, quality and stability of Kubernetes and its ecosystem and we’re delighted to be partnering with the Cloud Native Computing Foundation on an ongoing basis. At the end of the day, the real goal of this is to make sure that developers can develop freely and that Kubernetes, which is of course so important to everyone, continues to be an excellent, solid, stable standard for doing that.”

Sinha also noted that Google contributes a lot of code to the project, with 128,000 code contributions in the last twelve months alone. Beyond those technical contributions, the team is also making in-kind contributions through community engagement and mentoring, for example, in addition to the financial contribution the company is announcing today.

“The Kubernetes project has been growing so fast — the releases are just one after the other,” said Priyanka Sharma, the General Manager of the CNCF. “And there are big changes, all of this has to run somewhere. […] This specific contribution of the $3 million, that’s where that comes in. So the Kubernetes project can be stress-free, [knowing] they have enough credits to actually run for a full year. And that security is critical because you don’t want Kubernetes to be wondering where will this run next month. This gives the developers and the contributors to the project the confidence to focus on feature sets, to build better, to make Kubernetes ever-evolving.”

It’s worth noting that while both Google and the CNCF are putting their best foot forward here, there have been some questions about Google’s stewardship of the Istio service mesh project, which was incubated by Google and IBM a few years ago. At some point in 2017, there was a proposal to bring it under the CNCF umbrella, but that never happened. This year, Istio became one of the founding projects of Open Usage Commons, though that group is mostly concerned with trademarks, not with project governance. And while all of this may seem like a lot of inside baseball — and it is — it has led some members of the open-source community to question Google’s commitment to organizations like the CNCF.

“Google contributes to a lot of open-source projects. […] There’s a lot of them, many are with open-source foundations under the Linux Foundation, many of them are otherwise,” Sinha said when I asked her about this. “There’s nothing new, or anything to report about anything else. In particular, this discussion — and our focus very much with the CNCF here is on Kubernetes, which I think — out of everything that we do — is by far the biggest contribution or biggest amount of time and biggest amount of commitment relative to anything else.”

#aparna-sinha, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #cncf, #computing, #developer, #free-software, #google, #google-cloud-platform, #kubernetes, #priyanka-sharma, #product-management, #tc, #web-services

New Relic acquires Kubernetes observability platform Pixie Labs

Two months ago, Kubernetes observability platform Pixie Labs launched into general availability and announced a $9.15 million Series A funding round led by Benchmark, with participation from GV. Today, the company is announcing its acquisition by New Relic, the publicly traded monitoring and observability platform.

The Pixie Labs brand and product will remain in place and allow New Relic to extend its platform to the edge. From the outset, the Pixie Labs team designed the service to focus on providing observability for cloud-native workloads running on Kubernetes clusters. And while most similar tools focus on operators and IT teams, Pixie set out to build a tool that developers would want to use. Using eBPF, a relatively new way to extend the Linux kernel, the Pixie platform can collect data right at the source and without the need for an agent.

At the core of the Pixie developer experience are what the company calls “Pixie scripts.” These allow developers to write their debugging workflows, though the company also provides its own set of these and anybody in the community can contribute and share them as well. The idea here is to capture a lot of the informal knowledge around how to best debug a given service.

“We’re super excited to bring these companies together because we share a mission to make observability ubiquitous through simplicity,” Bill Staples, New Relic’s Chief Product Officer, told me. “[…] According to IDC, there are 28 million developers in the world. And yet only a fraction of them really practice observability today. We believe it should be easier for every developer to take a data-driven approach to building software and Kubernetes is really the heart of where developers are going to build software.”

It’s worth noting that New Relic already had a solution for monitoring Kubernetes clusters. Pixie, however, will allow it to go significantly deeper into this space. “Pixie goes much, much further in terms of offering on-the-edge, live debugging use cases, the ability to run those Pixie scripts. So it’s an extension on top of the cloud-based monitoring solution we offer today,” Staples said.

The plan is to build New Relic integrations into Pixie’s platform and to integrate Pixie use cases with New Relic One as well.

Currently, about 300 teams use the Pixie platform. These range from small startups to large enterprises and as Staples and Asgar noted, there was already a substantial overlap between the two customer bases.

As for why he decided to sell, Pixie co-founder (and former Google AI CEO) Zain Asgar told me that it was all about accelerating Pixie’s vision.

“We started Pixie to create this magical developer experience that really allows us to redefine how application developers monitor, secure and manage their applications,” Asgar said. “One of the cool things is when we actually met the team at New Relic and we got together with Bill and [New Relic founder and CEO] Lew [Cirne], we realized that there was almost a complete alignment around this vision […], and by joining forces with New Relic, we can actually accelerate this entire process.”

New Relic has recently done a lot of work on open-sourcing various parts of its platform, including its agents, data exporters and some of its tooling. Pixie, too, will now open-source its core tools. Open-sourcing the service was always on the company’s roadmap, but the acquisition now allows it to push this timeline forward.

“We’ll be taking Pixie and making it available to the community through open source, as well as continuing to build out the commercial enterprise-grade offering for it that extends the New Relic one platform,” Staples explained. Asgar added that it’ll take the company a little while to release the code, though.

“The same fundamental quality that got us so excited about Lew as an EIR in 2007, got us excited about Zain and Ishan in 2017 — absolutely brilliant engineers, who know how to build products developers love,” Benchmark General Partner Eric Vishria told me. “New Relic has always captured developer delight. For all its power, Kubernetes completely upends the monitoring paradigm we’ve lived with for decades. Pixie brings the same easy-to-use, quick-time-to-value, no-nonsense approach to the Kubernetes world as New Relic brought to APM. It is a match made in heaven.”

#acquisition, #benchmark, #containers, #enterprise, #exit, #google, #kubernetes, #monitoring, #new-relic, #observability, #performance-management, #startups, #tc

The cloud can’t solve all your problems

The way a team functions and communicates dictates the operational efficiency of a startup and sets the scene for its culture. It’s way more important than what social events and perks are offered, so it’s the responsibility of a founder and/or CEO to provide their team with a technology approach that will empower them to achieve and succeed — now and in the future.

With that in mind, moving to the cloud might seem like a no-brainer because of its huge benefits around flexibility, accessibility and the potential to rapidly scale, while keeping budgets in check.

But there’s an important consideration here: Cloud providers won’t magically give you efficient teams.
It will get you going in the right direction, but you need to think even farther ahead. Designing a startup for scale means investing in the right technology today to underpin growth for tomorrow and beyond. Let’s look at how the way you approach and manage your cloud infrastructure will impact the effectiveness of your teams and your ability to scale.

Hindsight is 20/20

Adopting cloud is easy, but adopting it properly with best practices and in a secure way? Not so much. You might think that when you move to cloud, the cloud providers will give you everything you need to succeed. But even though they’re there to provide a wide breadth of services, these services won’t necessarily have the depth that you will need to run efficiently and effectively.

Yes, your cloud infrastructure is working now, but think beyond the first prototype or alpha and toward production. Considering where you want to get to, and not just where you are, will help you avoid costly mistakes. You definitely don’t want to struggle through redefining processes and ways of working when you’re also managing time sensitivities and multiple teams.

If you don’t think ahead, you’ll have to put all-new processes in place later. It will take a whole lot longer, cost more money and cause a lot more disruption to teams than if you do it earlier.

For any founder, making strategic technology decisions right now should be a primary concern. It feels more natural to put off those decisions until you come face to face with the problem, but you’ll just end up needing to redo everything as you scale and cause your teams a world of hurt. If you don’t give this problem attention at the beginning, you’re just scaling the problems with the team. Flaws are then embedded within your infrastructure, and they’ll continue to scale with the teams. When these things are rushed, corners are cut and you will end up spending even more time and money on your infrastructure.

Build effective teams and reduce bottlenecks

When you’re making strategic decisions on how to approach your technology stack and cloud infrastructure, the biggest consideration should be what makes an effective team. Given that, keep these things top of mind:

  • Speed of delivery: Having developers able to self-serve cloud infrastructure, with best practices built in, enables speed. Development tools that factor in visibility and communication integrations give teams transparency into how they are iterating and surface problems, bugs and integration failures.
  • Speed of testing: This is all about ensuring fast feedback loops as your team works on critical new iterations and features. Developers should be able to test as much as possible locally and through continuous integration systems before they are ready for code review.
  • Troubleshooting problems: Good logging, monitoring and observability services give teams awareness of issues and the ability to resolve problems quickly or reproduce customer complaints in order to develop fixes.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #column, #computing, #developer, #kubernetes, #saas, #startups