VCs are betting big on Kubernetes: Here are 5 reasons why

I worked at Google for six years. Internally, you have no choice — if you are deploying microservices and containers, you must use the company's cluster manager (it isn't called Kubernetes inside Google; it's called Borg, the system from which Kubernetes descends). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending toward becoming the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents to CNCF’s annual survey using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.

 

Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides much of the resource isolation of a traditional hypervisor at a fraction of the overhead, with considerable opportunities to improve agility, efficiency and speed.


Edge Delta raises $15M Series A to take on Splunk

Seattle-based Edge Delta, a startup building a modern distributed monitoring stack that competes directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.

“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”

Image Credits: Edge Delta

He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real time, and it involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to analyze their logs, metrics, traces and other telemetry right at the source. This, in turn, gives them visibility into all of the data generated there, unlike many of today’s systems, which only provide insights into a small slice of this information.

Competing services tend to have agents that run on a customer’s machine, but those agents typically only compress the data, encrypt it and send it on to its final destination. Edge Delta’s agent instead starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you don’t have to gather all of this data and send it off to your data warehouse, where it has to be indexed before it can be analyzed and graphed.

With Edge Delta, you could instead have every single node draw its own graph, which Edge Delta can then combine later on. With this approach, Edge Delta argues, its agent is able to offer significant performance benefits, often by orders of magnitude. It also allows businesses to run their machine learning models at the edge.
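To make that pattern concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative rather than Edge Delta’s actual API: the log format, bucket size and function names are assumptions. Each node reduces its own raw logs to a small error-count histogram, and only those summaries travel over the network to be merged into one graph.

```python
from collections import Counter

def node_error_histogram(log_lines, bucket_seconds=60):
    """Runs on each node: reduce raw logs to per-minute error counts.
    Only this small histogram leaves the node; the raw lines stay local."""
    counts = Counter()
    for ts, level, _msg in log_lines:  # records of (unix_ts, level, message)
        if level == "ERROR":
            counts[ts // bucket_seconds * bucket_seconds] += 1
    return counts

def merge_histograms(per_node_histograms):
    """Runs centrally: sum the tiny per-node summaries into one series."""
    total = Counter()
    for hist in per_node_histograms:
        total.update(hist)
    return dict(sorted(total.items()))

# Two nodes' worth of parsed log records, merged into a single graph.
node_a = [(1000, "ERROR", "timeout"), (1010, "INFO", "ok"), (1070, "ERROR", "oom")]
node_b = [(1005, "ERROR", "refused"), (1075, "ERROR", "timeout")]
print(merge_histograms([node_error_histogram(node_a), node_error_histogram(node_b)]))
# {960: 2, 1020: 2}
```

The bandwidth win falls out of the shape of the data: a histogram is a few bytes per time bucket, no matter how many raw log lines produced it.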

Image Credits: Edge Delta

“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”

Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.

Image Credits: Edge Delta

Edge Delta charges based on volume, and it isn’t shy about comparing its prices with Splunk’s; it does so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody’s mind.

“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually, there’s a lot of real-time operational use cases, and this model is actually better for those massive stream-processing datasets that you’re required to analyze in real time.”

Despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS’s S3 and other solutions.

Image Credits: Edge Delta

“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”

The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.

 


3 lessons we learned after raising $6.3M from 50 investors

It was August 2019, and the fundraising process was not going well.

My co-founder and I had left our product management jobs at New Relic several months prior, deciding to finally plunge into building Reclaim after nearly a year of late nights and weekends spent prototyping and iterating on ideas. We had bits and pieces of a product, but the majority of it was what we might call “slideware.”

When you can’t raise big on the vision, you need to raise big on the proof. And the proof comes from building, learning, iterating and getting traction with your first few hundred users.

When we spoke to many other founders, they all told us the same thing: Go raise, raise big, and raise now. So we did that, even though we were puzzled as to why anyone would give us money with little more than a slide deck to our names. We spent nearly three months pitching dozens of VCs, hoping to raise $3 million to $4 million in a seed round to hire our founding team and build the product out.

Initially, we were excited. There was lots of inbound interest, and we were starting to hear a lot of crazy numbers getting thrown around by a lot of Important People. We thought for sure we were maybe a week away from term sheets. We celebrated preemptively. How could it possibly be this easy?

Then in July, almost in an instant, everything started to dry up. The verbal offers for term sheets didn’t materialize into real offers. We had term sheets, but they were from investors that didn’t seem to care much about what we were building or what problems we wanted to solve. We quickly realized that we hadn’t really built momentum around the product or the vision, but were instead caught up in what we later learned to be “deal flow.”

Basically, investors were interested because other investors were interested. And once enough of them weren’t, nobody was.

Fortunately, as I write this today, Reclaim has raised a total of $6.3 million on great terms across a group of incredible investors and partners. But it wasn’t easy, and it required us to embrace our failure and learn three important lessons that I believe every founder should consider before they decide to go out and pitch investors.

Lesson 1: Build big before you raise big

In 2019, we were hunting for what some referred to as a “mango seed” — that is, a seed round that was large enough that it was perceptibly closer to a light Series A financing. Being pre-product at the time, we had to lean on our experience and our vision to drive conviction and urgency among investors. Unfortunately, it just wasn’t enough. Investors either felt that our experience was a bad fit for the space we were entering (productivity/scheduling) or that our vision wasn’t compelling enough to merit investment on the terms we wanted.

When we did get offers, they involved swallowing some pretty bitter pills: We would be forced to take bad terms that were overly dilutive (at least from our perspective), work with an investor who we didn’t think had high conviction in our product strategy, or relinquish control in the company from an extremely early stage. None of these seemed like good options.


New Relic’s business remodel will leave new CEO with work to do

For Bill Staples, the freshly appointed CEO at New Relic, who takes over on July 1, yesterday was a good day. After more than 20 years in the industry, he was given his own company to run. It’s quite an accomplishment, but now the hard work begins.

Lew Cirne, New Relic’s founder and CEO, who is stepping into the executive chairman role, spent the last several years rebuilding the company’s platform and changing its revenue model, aiming for what he hopes is long-term success.

“All the work we did in re-platforming our data tier and our user interface and the migration to consumption business model, that’s not so we can be a $1 billion New Relic — it’s so we can be a multibillion-dollar New Relic. And we are willing to forgo some short-term opportunity and take some short-term pain in order to set us up for long-term success,” Cirne told TechCrunch after yesterday’s announcement.

On the positive side of the equation, New Relic is one of the market leaders in the application performance monitoring space. Gartner has the company in third place behind Dynatrace and Cisco AppDynamics, and ahead of Datadog. While the Magic Quadrant might not be gospel, it does give you a sense of the relative market positions of each company in a given space.

New Relic competes in the application performance monitoring business, or APM for short. APM enables companies to keep tabs on the health of their applications. That allows them to cut off problems before they happen, or at least figure out why something is broken more quickly. In a world where users can grow frustrated quickly, APM is an important part of the customer experience infrastructure. If your application isn’t working well, customers won’t be happy with the experience and quickly find a rival service to use.

In addition to yesterday’s CEO announcement, New Relic reported earnings. TechCrunch decided to dig into the company’s financials to see just what challenges Staples may face as he moves into the corner office. The resulting picture is one that shows a company doing hard work for a more future-aligned product map and business model, albeit one that may not generate the sort of near-term growth that gives Staples ample breathing room with public investors.

Near-term growth, long-term hopes

Making long-term bets on a company’s product and business model can be difficult for Wall Street to swallow in the near term. But such work can garner an incredibly lucrative result; Adobe is a good example of a company that went from license sales to subscription income. Others are in the midst of similar transitions, and they often take growth penalties as older revenues are recycled in favor of a new top line.

So when we look at New Relic’s recent results and guidance for the rest of the year, we’re looking more for signs of future life than for quick gains.

Starting with the basics, New Relic had a better-than-anticipated quarter. An analysis showed the company’s profit and adjusted profit per share both beat expectations. And the company announced $173 million in total revenue, around $6 million more than the market expected.

So, did its shares rise? Yes, but just 5%, leaving them far under their 52-week high. Why such a modest bump after so strong a report? The company’s guidance, we reckon. Per New Relic, it expects its current quarter to bring 6% to 7% growth compared to the year-ago period. And it anticipates roughly 6% growth for its current fiscal year (its fiscal 2022, which will conclude at the end of calendar Q1 2022).


New Relic is bringing in a new CEO as founder Lew Cirne moves to executive chairman role

At the market close this afternoon, ahead of its earnings report, New Relic, an application performance monitoring company, announced that founder Lew Cirne would be stepping down as CEO and moving into the executive chairman role.

At the same time, the company announced that Bill Staples, a software industry vet, would be taking over as CEO. Staples joined the company last year as chief product officer before being quickly promoted to president and chief product officer in January. Today’s promotion marks a rapid rise through the ranks to lead the company.

Cirne said when he began thinking about stepping into that executive chairman role, he was looking for a trusted partner to take his place as CEO, and he found that in Staples. “Every founder’s dream is for the company to have a long-lasting impact, and then when the time is right for them to step into a different role. To do that, you need a trusted partner that will lead with the right core values and bring to the table what the company needs as an active partner. And so I’m really excited to move to the executive chairman role [and to have Bill be that person],” Cirne told me.

For Staples, who has worked at large organizations throughout his career, this opportunity to lead the company as CEO is the pinnacle of his long career arc. He called the promotion humbling, but one he believes he is ready to take on.

“This is a new chapter for me, a new experience to be a CEO of a public company with a billion-dollar-plus valuation, but I think the experience I have in the seat of our customers, as well as the experience I’ve had at Microsoft and Adobe, very large companies with very large stakes, running large organizations, has really prepared me well for this next phase,” Staples said.

Cirne says he plans to take some time off this summer to give Staples space to grow into leading the company without the founder and longtime CEO looking over his shoulder, and he plans to return in the fall to work alongside him as executive chairman.

Staples, for his part, steps into the role with confidence. “Certainly I have a lot to learn about what it takes to be a great CEO, but I also come in with a lot of confidence that I’ve managed organizations at scale. You know, I’ve been part of P&Ls that were many times larger than New Relic, and I have confidence that I can help New Relic grow as a company.”

Hope Cochran, managing director at Madrona Ventures and chairman of the New Relic board, said the board fully backs the decision to pass the CEO torch from Cirne to Staples. “With the foundation that Lew built and Bill’s leadership, New Relic has a very bright future ahead and a clear path to accelerate growth as the leader in observability,” she said in a statement.

The official transition is scheduled to take place on July 1st.


New Relic expands its AIOps services

In recent years, the publicly traded observability service New Relic started adding more machine learning-based tools to its platform for AI-assisted incident response when things don’t quite go as planned. Today, it is expanding this feature set with the launch of a number of new capabilities for what it calls its “New Relic Applied Intelligence Service.”

This expansion includes an anomaly detection service that is available even to free users, the ability to group alerts from multiple tools when the models think a single issue is triggering all of them, and new ML-based root cause analysis to help eliminate some of the guesswork when problems occur. Also new (and in public beta) is New Relic’s ability to detect patterns and outliers in log data stored in the company’s data platform.

The main idea here, New Relic’s director of product marketing Michael Olson told me, is to make it easier for companies of all sizes to reap the benefits of AI-enhanced ops.

Image Credits: New Relic

“It’s been about a year since we introduced our first set of AIops capabilities with New Relic Applied Intelligence to the market,” he said. “During that time, we’ve seen significant growth in adoption of AIops capabilities through New Relic. But one of the things that we’ve heard from organizations that have yet to foray into adopting AIops capabilities as part of their incident response practice is that they often find that things like steep learning curves and long implementation and training times — and sometimes lack of confidence, or knowledge of AI and machine learning — often stand in the way.”

The new platform should be able to detect emerging problems in real time — without the team having to pre-configure alerts. And when it does so, it’ll smartly group all of the alerts from New Relic and other tools together to cut down on the alert noise and let engineers focus on the incident.

“Instead of an alert storm when a problem occurs across multiple tools, engineers get one actionable issue with alerts automatically grouped based on things like time and frequency, based on the context that they can read in the alert messages. And then now with this launch, we’re also able to look at relationship data across your systems to intelligently group and correlate alerts,” Olson explained.
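As a rough illustration of that kind of grouping, here is a generic time-window heuristic in Python. It is not New Relic’s actual logic or models; the Alert fields and the related() hook merely stand in for the time, frequency and topology signals described above.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    ts: float     # epoch seconds when the alert fired
    source: str   # which tool raised it
    entity: str   # the service or host it concerns

def group_alerts(alerts, window=300, related=None):
    """Greedy grouping: alerts that fire within `window` seconds of each
    other on related entities collapse into a single issue."""
    related = related or (lambda a, b: a.entity == b.entity)
    issues = []
    for alert in sorted(alerts, key=lambda a: a.ts):
        for issue in issues:
            if alert.ts - issue[-1].ts <= window and related(alert, issue[-1]):
                issue.append(alert)
                break
        else:
            issues.append([alert])
    return issues

# Three alerts from two tools in ninety seconds on one service become a
# single issue; an unrelated alert an hour later becomes a second one.
storm = [Alert(0, "new-relic", "checkout"), Alert(40, "pagerduty", "checkout"),
         Alert(90, "new-relic", "checkout"), Alert(3690, "new-relic", "billing")]
assert len(group_alerts(storm)) == 2
```

A production system would replace the related() placeholder with learned correlations and service topology, which is where the machine learning described above comes in.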

Image Credits: New Relic

Maybe the highlight for the ops teams that will use these new features, though, is New Relic’s ability to pinpoint the probable root cause of a problem. As Guy Fighel, the general manager of applied intelligence and vice president of product engineering at New Relic, told me, the idea here is not to replace humans but to augment teams.

“We provide a non-black-box experience for teams to craft the decisions and correlation and logic based on their own knowledge and infuse the system with their own knowledge,” Fighel noted. “So you can get very specific based on your environment and needs. And so because of that and because we see a lot of data coming from different tools — all going into New Relic One as the data platform — our probable root cause is very accurate. Having said that, it is still a probable root cause. So although we are opinionated about it, we will never tell you, ‘hey, go fix that, because we’re 100% sure that’s the case.’ You’re the human, you’re in control.”

The AI system also asks users for feedback, so that the model gets refined with every new incident, too.

Fighel tells me that New Relic’s tools rely on a variety of statistical analysis methods and machine learning models. Some of those are unique to individual users while others are used across the company’s user base. He also stressed that all of the engineers who worked on this project have a background in site reliability engineering — so they are intimately familiar with the problems in this space.

With today’s launch, New Relic is also adding a new integration with PagerDuty and other incident management tools so that the state of a given issue can be synchronized bi-directionally between them.

“We want to meet our customers where they are and really be data source agnostic and enable customers to pull in data from any source, where we can then enrich that data, reduce noise and ultimately help our customers solve problems faster,” said Olson.


Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates, and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves re-inventing the wheel to make their applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools that developers need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communications, state management, pub/sub and secrets management.
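To give a flavor of what those building blocks look like from application code, here is a hedged sketch in Python against the Dapr sidecar’s documented v1.0 HTTP API. The component names (“statestore”, “pubsub”) and the “checkout” app-id are sample configuration for this sketch, not something every Dapr install provides.

```python
import requests

DAPR = "http://localhost:3500/v1.0"  # the sidecar's default HTTP endpoint

# State management: the app talks to its local sidecar; the sidecar talks
# to whichever store the "statestore" component is configured as (Redis
# locally, a managed cloud store in production).
requests.post(f"{DAPR}/state/statestore",
              json=[{"key": "order-42", "value": {"status": "paid"}}])
order = requests.get(f"{DAPR}/state/statestore/order-42").json()

# Pub/sub: publish an event to a topic; subscribers receive it through
# their own sidecars, with no broker-specific code in either app.
requests.post(f"{DAPR}/publish/pubsub/orders",
              json={"orderId": "order-42", "status": "paid"})

# Service-to-service invocation: address another app by its Dapr app-id
# instead of a hard-coded URL; the sidecars handle discovery and retries.
resp = requests.post(f"{DAPR}/invoke/checkout/method/confirm",
                     json={"orderId": "order-42"})
```

Because the app only ever talks to its local sidecar, swapping the backing store or message broker is a configuration change rather than a code change, which is the portability Russinovich describes.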

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft), and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 


New Relic acquires Kubernetes observability platform Pixie Labs

Two months ago, Kubernetes observability platform Pixie Labs launched into general availability and announced a $9.15 million Series A funding round led by Benchmark, with participation from GV. Today, the company is announcing its acquisition by New Relic, the publicly traded monitoring and observability platform.

The Pixie Labs brand and product will remain in place and allow New Relic to extend its platform to the edge. From the outset, the Pixie Labs team designed the service to focus on providing observability for cloud-native workloads running on Kubernetes clusters. And while most similar tools focus on operators and IT teams, Pixie set out to build a tool that developers would want to use. Using eBPF, a relatively new way to extend the Linux kernel, the Pixie platform can collect data right at the source and without the need for an agent.

At the core of the Pixie developer experience are what the company calls “Pixie scripts.” These allow developers to write their debugging workflows, though the company also provides its own set of these and anybody in the community can contribute and share them as well. The idea here is to capture a lot of the informal knowledge around how to best debug a given service.
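Pixie scripts are written in PxL, a Python-dialect DSL. The sketch below is modeled on Pixie’s documented examples, but treat the table and column names as illustrative; the exact schema may differ from what ships.

```python
# A PxL script: compute per-service HTTP error rates for the last 5 minutes.
import px

# http_events is populated automatically by Pixie's eBPF probes; no agent
# or code change is needed in the monitored services.
df = px.DataFrame(table='http_events', start_time='-5m')
df.service = df.ctx['service']        # enrich rows with Kubernetes metadata
df.failure = df.resp_status >= 400

per_service = df.groupby(['service']).agg(
    requests=('resp_status', px.count),
    error_rate=('failure', px.mean),
)
px.display(per_service)
```

A script like this is exactly the kind of informal debugging knowledge the article describes: once written, it can be shared and re-run by anyone against their own cluster.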

“We’re super excited to bring these companies together because we share a mission to make observability ubiquitous through simplicity,” Bill Staples, New Relic’s Chief Product Officer, told me. “[…] According to IDC, there are 28 million developers in the world. And yet only a fraction of them really practice observability today. We believe it should be easier for every developer to take a data-driven approach to building software and Kubernetes is really the heart of where developers are going to build software.”

It’s worth noting that New Relic already had a solution for monitoring Kubernetes clusters. Pixie, however, will allow it to go significantly deeper into this space. “Pixie goes much, much further in terms of offering on-the-edge, live debugging use cases, the ability to run those Pixie scripts. So it’s an extension on top of the cloud-based monitoring solution we offer today,” Staples said.

The plan is to build New Relic integrations into Pixie’s platform and to integrate Pixie use cases with New Relic One as well.

Currently, about 300 teams use the Pixie platform. These range from small startups to large enterprises and as Staples and Asgar noted, there was already a substantial overlap between the two customer bases.

As for why he decided to sell, Pixie co-founder and CEO Zain Asgar (formerly an engineer at Google) told me that it was all about accelerating Pixie’s vision.

“We started Pixie to create this magical developer experience that really allows us to redefine how application developers monitor, secure and manage their applications,” Asgar said. “One of the cool things is when we actually met the team at New Relic and we got together with Bill and [New Relic founder and CEO] Lew [Cirne], we realized that there was almost a complete alignment around this vision […], and by joining forces with New Relic, we can actually accelerate this entire process.”

New Relic has recently done a lot of work on open-sourcing various parts of its platform, including its agents, data exporters and some of its tooling. Pixie, too, will now open-source its core tools. Open-sourcing the service was always on the company’s roadmap, but the acquisition now allows it to push this timeline forward.

“We’ll be taking Pixie and making it available to the community through open source, as well as continuing to build out the commercial enterprise-grade offering for it that extends the New Relic one platform,” Staples explained. Asgar added that it’ll take the company a little while to release the code, though.

“The same fundamental quality that got us so excited about Lew as an EIR in 2007 got us excited about Zain and Ishan in 2017 — absolutely brilliant engineers who know how to build products developers love,” Benchmark General Partner Eric Vishria told me. “New Relic has always captured developer delight. For all its power, Kubernetes completely upends the monitoring paradigm we’ve lived with for decades. Pixie brings the same easy-to-use, quick-time-to-value, no-nonsense approach to the Kubernetes world as New Relic brought to APM. It is a match made in heaven.”


New Relic is changing its pricing model to encourage broader monitoring

In the monitoring world, you typically pay a fee for each new instance you spin up and monitor. If you are particularly active in any given month, that can result in a hefty bill, which leads teams to limit what they monitor to control costs. New Relic wants to change that, and today it announced that it’s moving to a model where customers pay per user instead, with a smaller, less costly data component.

The company is also simplifying its product set with the goal of encouraging customers to instrument everything instead of deciding what to monitor and what to leave out to control cost. “What we’re announcing is a completely reimagined platform. We’re simplifying our products from 11 to three, and we eliminate those barriers to standardizing on a single source of truth,” New Relic founder and CEO Lew Cirne told TechCrunch.

The way the company can afford to make this switch is by exposing the underlying telemetry database it created to run its own products. By using this database to track all of your APM, tracing and metric data in one place, Cirne says, New Relic can control costs much better and pass those savings on to customers, whose bills should be much smaller under the new pricing model.

“Prior to this, there has not been any technology that’s good at gathering all of those data types into a single database, what we would call a telemetry database. And we actually created one ourselves and it’s the backbone of all of our products. [Up until now], we haven’t really exposed it to our customers, so that they can put all their data into it,” he said.

New Relic Telemetry Data. Image: New Relic

The company is distilling the product set into three main categories. The first is the Telemetry Data Platform, which offers a single way to gather any events, logs or traces, whether from their agents or someone else’s or even open source monitoring tools like Prometheus.

The second product is called Full-Stack Observability. This includes all of their previous products, which were sold separately, such as APM, mobile, infrastructure and logging. Finally, they are offering an intelligence layer called New Relic AI.

Cirne says that by simplifying the product set and changing the way the company bills, New Relic will save customers money through the efficiencies it has uncovered. In practice, he says, pricing will consist of a combination of users and data, but he believes this approach will result in much lower bills and more cost certainty for customers.

“It’ll vary by customer, so this is just a rough estimate, but imagine that the typical New Relic bill under this model will be a 70% per-user charge and 30% data charge, roughly. If that’s the case, and if you look at our competitors, 100% of the bill is data,” he said.
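As a back-of-the-envelope illustration of that bill shape, here is a tiny Python sketch. The rates are invented placeholders, not New Relic’s actual prices; only the 70/30 structure comes from Cirne’s estimate.

```python
def estimated_bill(users, gb_ingested, user_rate=99.0, gb_rate=0.25):
    """Illustrative only: a dominant per-user charge plus a smaller data
    charge. Both rates are made-up placeholders, not real prices."""
    user_charge = users * user_rate
    data_charge = gb_ingested * gb_rate
    total = user_charge + data_charge
    return total, user_charge / total, data_charge / total

total, user_share, data_share = estimated_bill(users=20, gb_ingested=3400)
print(f"${total:,.0f}: {user_share:.0%} user / {data_share:.0%} data")
# -> $2,830: 70% user / 30% data, matching the rough split quoted above
```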

The new approach is available starting today. Companies can try it with a 100 GB, single-user account.
