Chinese crackdown on tech giants threatens its cloud market growth

As Chinese tech companies come under regulatory scrutiny at home, concerns and pressures are escalating among investors and domestic tech companies, including China’s four big cloud providers, known as BATH (Baidu AI Cloud, Alibaba Cloud, Tencent Cloud and Huawei Cloud), according to an analyst report.

Despite a series of antitrust and internet-regulation crackdowns, the four leading cloud companies have kept growing steadily. The current scrutiny is not particularly focused on the cloud sector, and demand for digital transformation, artificial intelligence and smart industries remains firm: China’s cloud infrastructure market reached $6.6 billion in the second quarter of 2021, up 54% from a year earlier.

Nonetheless, the share prices of three of the four (Baidu, Alibaba and Tencent) have fallen between 18% and 30% over the last six months, which could make investors cautious about betting on Chinese tech companies.

“Chinese tech companies could always rely on their local market, especially when access to lucrative Western markets was blocked. But increasing domestic regulatory pressures over the past nine months have been a frustrating headwind for those companies that have seen their cloud businesses grow significantly over the past years,” said Canalys Vice President Alex Smith.

The four cloud titans dominate the Chinese cloud market, accounting for 80% of total cloud spending. Alibaba Cloud maintained its frontrunner status with a 33.8% market share in the second quarter of this year. Huawei, which held 19.3% of the market in the same quarter, is the only one of the four to have avoided regulatory measures so far.

“Huawei is an infrastructure and device company that also happens to have developed a strong cloud business. When it comes to cloud infrastructure, we focus on the BATH companies, not just BAT. Huawei is in a strong position to drive growth, particularly in the public sector where it has a good standing and long-term relationship with the government,” Canalys Chief Analyst Matthew Ball said.

As Chinese regulators intensify scrutiny of the country’s technology companies, the crackdowns are wreaking havoc on its markets and on the shares of China-based companies.

Beijing passed the Data Security Law in June, which took effect in early September to protect critical data related to national security, and in late August issued draft guidelines on regulating companies’ algorithms, targeting ByteDance, Alibaba Group, Tencent, DiDi and others.

#alibaba-cloud, #artificial-intelligence, #asia, #china, #cloud, #cloud-infrastructure, #huawei-cloud, #policy, #tc

Real-time database platform SingleStore raises $80M more, now at a $940M valuation

Organizations are swimming in data these days, and so solutions to help manage and use that data in more efficient ways will continue to see a lot of attention and business. In the latest development, SingleStore — which provides a platform to enterprises to help them integrate, monitor and query their data as a single entity, regardless of whether that data is stored in multiple repositories — is announcing another $80 million in funding, money that it will be using to continue investing in its platform, hiring more talent and overall business expansion. Sources close to the company tell us that the company’s valuation has grown to $940 million.

The round, a Series F, is being led by Insight Partners, with new investor Hewlett Packard Enterprise and previous backers Khosla Ventures, Dell Capital, Rev IV, Glynn Capital, and GV (formerly Google Ventures) also participating. The startup has raised $264 million to date, including an $80 million Series E last December, just on the heels of its rebranding from MemSQL.

The fact that there are three major strategic investors in this Series F — HPE, Dell and Google — may say something about the traction SingleStore is seeing, but so do its numbers: a 300%+ increase in new customer acquisition for its cloud service and 150%+ year-over-year growth in cloud revenue.

Raj Verma, SingleStore’s CEO, said in an interview that its cloud revenues have grown by 150% year over year and now account for some 40% of all revenues (up from 10% a year ago). New customer numbers, meanwhile, have grown by over 300%.

“The flywheel is now turning around,” Verma said. “We didn’t need this money. We’ve barely touched our Series E. But I think there has been a general sentiment among our board and management that we are now ready for the prime time. We think SingleStore is one of the best kept secrets in the database market. Now we want to aggressively be an option for people looking for a platform for intensive data applications or if they want to consolidate databases to 1 from 3, 5 or 7 repositories. We are where the world is going: real-time insights.”

Database management, and the need for more efficient and cost-effective tools to handle it, has become an ever-growing priority, one that definitely got a fillip in the last 18 months as Covid-19 pushed people into more remote working environments. That means SingleStore is not without competitors; others in the same space include Amazon, Microsoft, Snowflake, PostgreSQL, MySQL, Redis and more. Others, like Firebolt, are tackling the challenges of handling large, disparate data repositories from another angle. (Some of these, I should point out, are also partners: SingleStore works with data stored on AWS, Microsoft Azure, Google Cloud Platform and Red Hat, and Verma describes those who do compute work as “not database companies; they are using their database capabilities for consumption for cloud compute.”)

But the company has carved a place for itself with enterprises and has thousands now on its books, including GE, IEX Cloud, Go Guardian, Palo Alto Networks, EOG Resources, and SiriusXM + Pandora.

“SingleStore’s first-of-a-kind cloud database is unmatched in speed, scale, and simplicity by anything in the market,” said Lonne Jaffe, managing director at Insight Partners, in a statement. “SingleStore’s differentiated technology allows customers to unify real-time transactions and analytics in a single database.” Vinod Khosla from Khosla Ventures added that “SingleStore is able to reduce data sprawl, run anywhere, and run faster with a single database, replacing legacy databases with the modern cloud.”

#amazon, #aws, #ceo, #cloud-computing, #cloud-infrastructure, #computing, #database, #database-management, #enterprise, #funding, #glynn-capital, #google-cloud-platform, #google-ventures, #hewlett-packard-enterprise, #khosla-ventures, #lonne-jaffe, #memsql, #microsoft, #mysql, #palo-alto-networks, #postgresql, #red-hat, #redis, #series-e, #singlestore, #snowflake, #vinod-khosla

Monad emerges from stealth with $17M to solve the cybersecurity big data problem

Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures. 

Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information siloed within disconnected logs and databases. Once an organization has extracted data from its security tools, Monad’s Security Data Platform enables it to centralize that data within a data warehouse of choice, then normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.

“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”
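Monad hasn’t published its schema or code, but the extract-normalize-enrich pattern the company describes can be sketched in a few lines of Python. Everything below (the tool names, field names and target schema) is invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical raw events from two different security tools, each with
# its own field names and timestamp conventions.
raw_events = [
    {"tool": "scanner_a", "host": "web-01", "sev": 3, "ts": "2021-08-01T12:00:00Z"},
    {"tool": "auditlog_b", "asset_name": "db-02", "severity": "HIGH", "time": 1627824000},
]

def normalize(event: dict) -> dict:
    """Map tool-specific fields onto one common (invented) schema."""
    if event["tool"] == "scanner_a":
        return {
            "source": "scanner_a",
            "asset": event["host"],
            "severity": {1: "low", 2: "medium", 3: "high"}[event["sev"]],
            "observed_at": event["ts"],
        }
    if event["tool"] == "auditlog_b":
        return {
            "source": "auditlog_b",
            "asset": event["asset_name"],
            "severity": event["severity"].lower(),
            "observed_at": datetime.fromtimestamp(event["time"], tz=timezone.utc).isoformat(),
        }
    raise ValueError(f"unknown tool: {event['tool']}")

normalized = [normalize(e) for e in raw_events]
for row in normalized:
    print(row)
```

Once events share one schema, loading them into whichever warehouse a team already uses becomes a routine bulk insert, and enrichment (say, joining asset owners onto each row) becomes a plain database operation.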

The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.

Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health, and Palantir.

#big-data, #cloud-computing, #cloud-infrastructure, #computer-security, #computing, #data-management, #data-warehouse, #devops, #funding, #information-technology, #intrinsic, #opendoor, #palantir, #security, #security-tools, #sequoia-capital, #serverless-computing, #technology, #vmware

Elastic acquisition spree continues as it acquires security startup CMD

Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.

CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV. 

Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.

Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.

Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
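To make that concrete, here is the classic hello-world example from BCC, one of the common front ends for writing eBPF programs from Python. It is generic eBPF illustration rather than CMD’s code: it attaches a tiny program to the clone syscall kprobe and prints a trace line whenever a process is created. It needs a Linux host with the BCC toolkit installed and root privileges, and the exact syscall symbol name can vary by kernel version.

```python
from bcc import BPF

# A tiny eBPF program, written in restricted C, compiled and loaded by BCC.
# It runs inside the kernel each time the clone syscall is entered.
program = r"""
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk("process created via clone()\n");
    return 0;
}
"""

b = BPF(text=program)
b.trace_print()  # stream kernel trace output until interrupted
```

Security tools built on this mechanism observe process, file and network activity with kernel-level visibility but without loading a custom kernel module, which is what makes eBPF attractive for runtime protection.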

CMD isn’t the only startup building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.

Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.

“We have a saying at Elastic – while you observe, why not protect?” Banon said. “With CMD, if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”

It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.

“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.

That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.

“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.

#canada, #cloud, #cloud-computing, #cloud-infrastructure, #cmd, #elasticsearch, #facebook, #kubernetes, #linux, #open-source-technology, #security, #shay-banon, #vancouver

NS1 brings open-source service NetBox to the cloud

New York City based startup NS1 got its start providing organizations with managed DNS services to help accelerate application delivery and reliability. With its new NetBox Cloud service that is being announced in preview today, NS1 is expanding its services into a new area beyond DNS. 

It can often be a challenging task for a network administrator in an enterprise to understand where all the networking infrastructure is and how it’s all supposed to be connected. That’s a job for an emerging class of enterprise technology known as Infrastructure Resource Management (IRM) that NS1 is now jumping into. TechCrunch profiled NS1 in a wide-ranging EC-1 series last month. The company provides DNS as a service for some of the biggest sites on the internet. DNS, or the domain name system, connects domain names to IP addresses, and NS1 has technology that helps organizations intelligently optimize application traffic delivery.

With its new NetBox Cloud service, NS1 is providing a managed service for NetBox, a popular open source IRM tool initially built by developer Jeremy Stretch while he was working at cloud provider DigitalOcean. Stretch joined NS1 as a distinguished engineer in April of this year, and NS1 now supports the open source project.

Stretch recounted that at one point during his tenure at DigitalOcean, he was using Microsoft Excel spreadsheets to track IP addresses. Using a spreadsheet for IP address management doesn’t scale, so Stretch coded the initial version of NetBox in 2015 to address that need. Over the last several years, NetBox has expanded with additional capabilities that will now also benefit users of NS1’s NetBox Cloud service.

Stretch explained that NetBox’s role is primarily modeling network infrastructure in an approach that provides what he referred to as a “source of truth” for the network. The basic idea is to enable organizations to model the desired state of their networks and then draw in monitoring to verify that the operational state matches the desired state.

“So the idea of this source of truth is that it is the actual documented authoritative record of what is supposed to be configured on the network,” Stretch said.
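NetBox exposes this source of truth through a REST API, and pynetbox, the project’s official Python client, lets scripts read and update the intended state directly. A minimal sketch, with the URL, token and device name as placeholders:

```python
# pip install pynetbox (the official Python client for NetBox's REST API)
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="YOUR_API_TOKEN")

# Read the documented (intended) state: every IP address assigned to a
# hypothetical device named "core-sw-01".
device = nb.dcim.devices.get(name="core-sw-01")
for ip in nb.ipam.ip_addresses.filter(device_id=device.id):
    print(ip.address, ip.status)

# Record new intended state, replacing the spreadsheet workflow Stretch
# described: register an address in IPAM.
nb.ipam.ip_addresses.create(address="10.0.20.7/24", status="active")
```

Monitoring can then compare what is actually configured on the network against these records, which is exactly the desired-state-versus-operational-state check Stretch describes.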

NetBox has continued to grow over the years as a popular open source tool, but it hasn’t been particularly accessible to enterprises that required commercial support to get started, or that wanted a managed service. The goal with the new service is to make it easier for organizations of any size to get started with NetBox to better manage their networks.

NS1 co-founder and CEO Kris Beevers told TechCrunch that while Stretch has done a solid job of building the NetBox open source community, there hasn’t been a commercial service for NetBox. Beevers said that while NetBox has had broad adoption as an open source effort, in his view there are a lot of enterprises that will want commercial support and a managed service.

One key theme that Beevers reiterated time and again in the Extra Crunch EC-1 series is that NS1 is very experimental as a business, and that same theme holds true for NetBox. The primary objective for the initial beta release of NetBox Cloud is to figure out exactly who is trying to adopt the technology and what challenges commercial users will face. Fundamentally, Beevers said that NS1 will be actively iterating on NetBox Cloud to make sure it addresses the things that enterprises care about.

“From the NS1 point of view, this is just such a compelling open source product and community and we want to drive barriers to adoption as low as we possibly can,” Beevers said.

NS1 was founded in 2013 and has raised $118.4 million in funding, including a $40 million Series D which the company closed in July 2020.

#cloud-computing, #cloud-infrastructure, #digitalocean, #dns, #enterprise, #network-administrator, #new-york-city

Insight Partners leads $30M round into Metabase, developing enterprise business intelligence tools

Open-source business intelligence company Metabase announced Thursday a $30 million Series B round led by Insight Partners.

Existing investors Expa and NEA joined in on the round, which gives the San Francisco-based company a total of $42.5 million in funding since it was founded in 2015. Metabase previously raised $8 million in Series A funding back in 2019, led by NEA.

Metabase was developed within venture studio Expa and spun out as an easy way for people to interact with data sets, co-founder and CEO Sameer Al-Sakran told TechCrunch.

“When someone wants access to data, they may not know what to measure or how to use it, all they know is they have the data,” Al-Sakran said. “We provide a self-service access layer where they can ask a question, Metabase scans the data and they can use the results to build models, create a dashboard and even slice the data in ways they choose without having an analyst build out the database.”

He notes that not much has changed in the business intelligence realm since Tableau came out more than 15 years ago, and that computers can do more for the end user, particularly in anticipating what the user is going to do. Increasingly, open source is the way software and information want to be consumed, especially for the person who just wants to pull the data themselves, he added.

George Mathew, managing director of Insight Partners, believes we are seeing the third generation of business intelligence tools emerging following centralized enterprise architectures like SAP, then self-service tools like Tableau and Looker and now companies like Metabase that can get users to discovery and insights quickly.

“The third generation is here and they are leading the charge to insights and value,” Mathew added. “In addition, the world has moved to the cloud, and BI tools need to move there, too. This generation of open source is a better and greater example of all three of those.”

To date, Metabase has been downloaded 98 million times and used by more than 30,000 companies across 200 countries. The company pursued another round of funding after building out a commercial offering, Metabase Enterprise, that is doing well, Al-Sakran said.

The new funding round enables the company to build out a sales team and continue product development on both Metabase Enterprise and Metabase Cloud. Because Metabase is often someone’s first business intelligence tool, Al-Sakran is also doubling down on resources to help educate customers on how to ask questions and learn from their data.

“Open source has changed from floppy disks to projects on the cloud, and we think end users have the right to see what they are running,” Al-Sakran said. “We are continuing to create new features and improve performance and overall experience in efforts to create the BI system of the future.”

 

#artificial-intelligence, #business-intelligence, #business-software, #cloud, #cloud-computing, #cloud-infrastructure, #data-management, #enterprise, #expa, #funding, #george-mathew, #insight-partners, #metabase, #nea, #recent-funding, #sameer-al-sakran, #startups, #tc

Disaster recovery can be an effective way to ease into the cloud

Operating in the cloud is soon going to be a reality for many businesses whether they like it or not. Points of contention with this shift often arise from unfamiliarity and discomfort with cloud operations. However, cloud migrations don’t have to be a full lift and shift.

Instead, leaders unfamiliar with the cloud should start by moving their disaster recovery program to the cloud, which helps them gain familiarity and understanding before a full migration of production workloads.

What is DRaaS?

Disaster recovery as a service (DRaaS) is cloud-based disaster recovery delivered as a service to organizations in a self-service, partially managed or fully managed model. The agility of DR in the cloud affords businesses a geographically diverse location to which they can fail over operations and run as close to normal as possible following a disruptive event. DRaaS emphasizes speed of recovery so that this failover is as seamless as possible. Plus, technology teams can offload some of the more burdensome aspects of maintaining and testing their disaster recovery.
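Two numbers anchor any disaster recovery plan: the recovery time objective (RTO, how quickly you must be running again) and the recovery point objective (RPO, how much data loss you can tolerate). A toy Python check against invented targets and timestamps from a DR test:

```python
from datetime import datetime

RTO_MINUTES = 60   # assumed target: running again within an hour
RPO_MINUTES = 15   # assumed target: lose at most 15 minutes of data

outage_start  = datetime(2021, 8, 1, 9, 0)
last_replica  = datetime(2021, 8, 1, 8, 52)   # newest data already replicated
failover_done = datetime(2021, 8, 1, 9, 41)   # workloads running in the cloud

rto_actual = (failover_done - outage_start).total_seconds() / 60
rpo_actual = (outage_start - last_replica).total_seconds() / 60

print(f"RTO: {rto_actual:.0f} min ({'met' if rto_actual <= RTO_MINUTES else 'missed'})")
print(f"RPO: {rpo_actual:.0f} min ({'met' if rpo_actual <= RPO_MINUTES else 'missed'})")
```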

When it comes to disaster recovery testing, allow for extra time to let your IT staff learn the ins and outs of the cloud environment.

DRaaS is a perfect candidate for a first step into the cloud for five main reasons:

  • Using DRaaS helps leaders get accustomed to the ins and outs of cloud before conducting a full production shift.
  • Testing cycles of the DRaaS solution allow IT teams to see firsthand how their applications will operate in a cloud environment, enabling them to identify the applications that will need a full or partial refactor before migrating to the cloud.
  • With DRaaS, technology leaders can demonstrate an early win in the cloud without risking full production.
  • DRaaS success helps gain full buy-in from stakeholders, board members and executives.
  • The replication tools that DRaaS uses are sometimes the same tools used to migrate workloads for production environments — this helps the technology team practice their cloud migration strategy.

Steps to start your DRaaS journey to the cloud

Define your strategy

Do your research to determine if DRaaS is right for you given your long-term organizational goals. You don’t want to start down a path to one cloud environment if that cloud isn’t aligned with your company’s objectives, both for the short and long term. Having cross-functional conversations among business units and with company executives will assist in defining and iterating your strategy.

#as-a-service, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-migration, #cloud-storage, #column, #computing, #data-recovery, #disaster-recovery, #ec-cloud-and-enterprise-infrastructure, #ec-column, #saas, #startups

VCs are betting big on Kubernetes: Here are 5 reasons why

I worked at Google for six years. Internally, you have no choice — you must use Kubernetes if you are deploying microservices and containers (it’s actually not called Kubernetes inside of Google; it’s called Borg). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending to become the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.

 

Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides as much resource isolation as a traditional hypervisor, but with considerable opportunities to improve agility, efficiency and speed.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #column, #databricks, #ec-cloud-and-enterprise-infrastructure, #ec-column, #ec-enterprise-applications, #enterprise, #google, #kubernetes, #linux, #microservices, #new-relic, #openshift, #rapid7, #red-hat, #startups, #ubuntu, #web-services

Tech leaders can be the secret weapon for supercharging ESG goals

Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.

What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.

Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.

CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.

Reducing environmental impact

As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.

Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.

Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.

So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infraprovisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
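As a toy version of the weighing Scott describes, the sketch below scores regions on carbon intensity against cost. The region names and numbers are made up; real values would come from a provider’s published per-region carbon data:

```python
# Hypothetical per-region data: carbon intensity (gCO2eq/kWh) and a
# relative cost index. These numbers are invented for illustration.
regions = {
    "region-a": {"carbon_gco2_kwh": 80,  "cost_index": 1.10},
    "region-b": {"carbon_gco2_kwh": 450, "cost_index": 0.95},
    "region-c": {"carbon_gco2_kwh": 200, "cost_index": 1.00},
}

def score(stats: dict, carbon_weight: float = 0.7) -> float:
    """Lower is better: a weighted blend of normalized carbon and cost."""
    carbon_norm = stats["carbon_gco2_kwh"] / 500  # rough scaling to ~[0, 1]
    return carbon_weight * carbon_norm + (1 - carbon_weight) * stats["cost_index"]

best = min(regions, key=lambda r: score(regions[r]))
print(f"Deploy to {best}")  # with these weights and data: region-a
```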

Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information. This is helping them track progress toward their target of reducing their energy intensity by 25% by 2025.
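Under the hood, estimates like these boil down to multiplying usage by an energy coefficient and the grid’s carbon intensity. A simplified sketch; the coefficients here are placeholders, not the published values those tools use:

```python
# Simplified cloud carbon estimate: usage x energy coefficient x grid intensity.
WH_PER_VCPU_HOUR  = 2.0    # assumed energy per vCPU-hour
GRID_GCO2_PER_KWH = 400.0  # assumed carbon intensity of the region's grid

vcpu_hours = 120_000       # e.g., one month of compute usage

energy_kwh = vcpu_hours * WH_PER_VCPU_HOUR / 1000
co2_kg = energy_kwh * GRID_GCO2_PER_KWH / 1000

print(f"~{energy_kwh:.0f} kWh, ~{co2_kg:.0f} kg CO2e this month")
```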

Make social impact

Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.

Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.

When thinking about product design, a product needs to be as useful and effective as it is sustainable. By thinking about sustainability and societal impact as a core element of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual package app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.

Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.

It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.

Impact governance

Promoting governance does not stop with the board and CEO; CTOs play an important role, too.

Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.

It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.

These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.

#artificial-intelligence, #carbon-footprint, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #column, #energy, #environmentalism, #esg, #etsy, #greenhouse-gas-emissions, #greentech, #machine-learning, #open-source-software, #opinion, #sustainability, #tc, #thoughtworks

Platform-as-a-service startup Porter aims to become go-to platform for deploying, managing cloud-based apps

By the time Porter co-founders Trevor Shim and Justin Rhee decided to build a company around DevOps, the pair were well versed in doing remote development on Kubernetes. And like other users, were consistently getting burnt by the technology.

They realized that, for all of the benefits, the technology was there but users were having to manage the complexity of hosting solutions, as well as incur the costs associated with a big DevOps team, Rhee told TechCrunch.

They decided to build out a solution externally and went through Y Combinator’s Summer 2020 batch, where they found other startup companies trying to do the same.

Today, Porter announced $1.5 million in seed funding from Venrock, Translink Capital, Soma Capital and several angel investors. Its goal is to build a platform-as-a-service that any team can use to manage applications in its own cloud, essentially delivering the full flexibility of Kubernetes through a Heroku-like experience.

Why Heroku? It is the hosting platform that developers are used to, not just at small companies but at later-stage ones, too. When those companies want to move to Amazon Web Services, Google Cloud or DigitalOcean, Porter will be that bridge, Shim added.

However, while Heroku is still popular, the pair say companies see the platform as getting outdated because it is standing still technologically. Each year, companies move off the platform due to technical limitations and cost, Rhee said.

A big part of the bet Porter is taking is not charging users for hosting; its cost is a pure SaaS product, he said. The founders aren’t looking to be resellers, so companies can use their own cloud: Porter provides the automation, and users can pay with their AWS and GCP credits, which gives them flexibility.

A common pattern is a move into Kubernetes, but “the zinger we talk about,” Shim added, is that if Heroku were built in 2021, it would have been built on Kubernetes.

“So we see ourselves as a successor’s successor,” he said.

To be that bridge, the company will use the new funding to increase its engineering bandwidth with the goal of “becoming the de facto standard for all startups,” Shim said.

Porter’s platform went live in February and in six months became the sixth-fastest-growing open source platform download on GitHub, said Ethan Batraski, partner at Venrock. He met the company through YC and was “super impressed” with Rhee’s and Shim’s vision.

“Heroku has 100,000 developers, but I believe it has stagnated,” Batraski added. “Porter already has 100 startups on its platform. The growth they’ve seen — four or five times — is what you want to see at this stage.”

His firm has long focused on data infrastructure and is seeing the stack get more complex. “At the same time, more developers are wanting to build out an app over a week, and scale it to millions of users, but that takes people resources. With Kubernetes it can turn everyone into an expert developer without them knowing it,” he added.

#apps, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #developer, #ethan-batraski, #funding, #heroku, #justin-rhee, #kubernetes, #recent-funding, #saas, #soma-capital, #startups, #tc, #translink-capital, #trevor-shim, #venrock, #y-combinator

4 key areas SaaS startups must address to scale infrastructure for the enterprise

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a threat vector for attacks on your customers. While security is important for all customers, the stakes certainly get higher the larger they grow.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure uptime on a per customer basis so you know if you’re meeting your contractual obligations.
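As a back-of-the-envelope sketch (not any particular vendor’s method), per-customer uptime can be computed from downtime minutes aggregated out of incident records and checked against the contractual target:

```python
# Minutes in a 30-day month; 99.9% uptime allows ~43.2 minutes of downtime.
MONTH_MINUTES = 30 * 24 * 60
SLA_TARGET = 99.9

# Hypothetical downtime minutes per customer, e.g. summed from incidents
# scoped to each customer's tenancy.
downtime_minutes = {"acme": 12.0, "globex": 75.5, "initech": 0.0}

for customer, down in downtime_minutes.items():
    uptime_pct = 100 * (MONTH_MINUTES - down) / MONTH_MINUTES
    status = "OK" if uptime_pct >= SLA_TARGET else "SLA BREACH"
    print(f"{customer}: {uptime_pct:.3f}% uptime -> {status}")
```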

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in, as it provides insights into how customers are experiencing a particular degradation in your service and allows for you to communicate back what you found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.

#amazon-web-services, #api, #cloud, #cloud-infrastructure, #cloud-storage, #column, #data-center, #dlp, #ec-cloud-and-enterprise-infrastructure, #ec-column, #ec-enterprise-applications, #ec-how-to, #enterprise, #enterprise-saas, #multitenancy, #saas, #software-as-a-service, #sso, #startups, #web-services

Serverless Stack raises $1M for open-source application framework

Open-source framework startup Serverless Stack announced Friday that it raised $1 million in seed funding from a group of investors that includes Greylock Partners, SV Angel and Y Combinator.

The company was founded in 2017 by Jay V and Frank Wang in San Francisco, and they were part of Y Combinator’s 2021 winter batch.

Serverless Stack’s technology enables engineers to more easily build full-stack serverless apps. CEO V said he and Wang had been working in this space for years with the aim of exposing it to a broader group of people.

While tooling around in the space, they determined that the ability to build serverless apps was not getting better, so they joined Y Combinator to hone their idea on how to make the process easier.

Here’s how the technology works: The open-source framework allows developers to test and make changes to their applications by directly connecting their local machines to the cloud. The problem with what V called an “old-school process” is that developers would upload their apps to the cloud, wait for them to run and only then make changes. Instead, Serverless Stack connects directly to the cloud, making it possible to debug applications locally, he added.

Since its launch six months ago, Serverless Stack has grown to over 2,000 stars on GitHub and has been downloaded more than 60,000 times.

Dalton Caldwell, managing director of YC, met V and Wang during the cohort and said he was “super impressed” because the pair had been working in the space for a long time.

“These folks are experts — there are probably just half a dozen people who know as much as they do, as there aren’t that many people working on this technology,” Caldwell told TechCrunch. “The proof is in the pudding, and if they can get people to adopt it, like they did on GitHub so far, and keep that community engagement, that is my strongest signal of staying power.”

V has earmarked the new funding to expand the team, including hiring engineers to support new use cases.

Serverless Stack initially gravitated toward specific use cases, such as APIs; it is now allowing its community to chime in and using that feedback as a guide, V said. It recently announced more of a full-stack use case for building out APIs with a database, and it is also building out front-end frameworks.

Ultimately, V’s roadmap includes building out more tools with a vision of getting Serverless Stack to the point where a developer can come on with an idea and take it all the way to an IPO using his platform.

“That’s why we want the community to drive the roadmap,” V told TechCrunch. “We are focused on what they are building and when they are in production, how they are managing it. Eventually, we will build out a dashboard to make it easier for them to manage all of their applications.”

 

#apps, #cloud, #cloud-infrastructure, #dalton-caldwell, #developer, #frank-wang, #funding, #github, #greylock-partners, #jay-v, #recent-funding, #serverless-computing, #serverless-stack, #startups, #sv-angel, #tc, #y-combinator

Microsoft’s cyber startup spending spree continues with CloudKnox acquisition

Microsoft has acquired identity and access management (IAM) startup CloudKnox Security, the tech giant’s fourth cybersecurity acquisition this year.

The deal, the terms of which were not disclosed, is the latest cybersecurity acquisition by Microsoft, which just last week announced that it’s buying threat intelligence startup RiskIQ. The firm also recently acquired IoT security startups CyberX and Refirm Labs as it moved to beef up its security portfolio. Security is big business for Microsoft, which made more than $10 billion in security-related revenue in 2020 — a 40% increase from the year prior.

CloudKnox, which was founded in 2015 and emerged from stealth two years later, helps organizations to enforce least-privilege principles to reduce risk and help prevent security breaches. The startup had raised $22.8 million prior to the acquisition, with backing from ClearSky, Sorenson Ventures, Dell Technologies Capital, and Foundation Capital. 

The company’s activity-based authorization service will equip Azure Active Directory customers with “granular visibility, continuous monitoring and automated remediation for hybrid and multi-cloud permissions,” according to a blog post by Joy Chik, corporate vice president of identity at Microsoft. 

Chik said that while organizations were reaping the benefits of cloud adoption, particularly as they embrace flexible working models, they often struggled to assess, prevent and enforce privileged access across hybrid and multi-cloud environments.

“CloudKnox offers complete visibility into privileged access,” Chik said. “It helps organizations right-size permissions and consistently enforce least-privilege principles to reduce risk, and it employs continuous analytics to help prevent security breaches and ensure compliance. This strengthens our comprehensive approach to cloud security.”
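CloudKnox hasn’t disclosed its algorithms, but the core of activity-based right-sizing is straightforward to sketch: diff the permissions an identity has been granted against the permissions its activity logs show it actually using, and treat the unused ones as candidates for revocation. The identities and permission names below are hypothetical:

```python
# Hypothetical identity data: permissions granted vs. permissions actually
# exercised (as would be derived from cloud activity logs).
granted = {
    "svc-reporting": {"storage.read", "storage.write", "vm.admin", "db.read"},
    "svc-backup":    {"storage.read", "storage.write"},
}
used_last_90_days = {
    "svc-reporting": {"storage.read", "db.read"},
    "svc-backup":    {"storage.read", "storage.write"},
}

for identity, perms in granted.items():
    unused = perms - used_last_90_days.get(identity, set())
    if unused:
        # Candidates to revoke to move the identity toward least privilege.
        print(f"{identity}: consider revoking {sorted(unused)}")
    else:
        print(f"{identity}: already right-sized")
```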

In addition to Azure Active Directory, Microsoft also plans to integrate CloudKnox with its other cloud security services, including Microsoft 365 Defender, Azure Defender and Azure Sentinel.

Commenting on the deal, Balaji Parimi, CloudKnox founder and CEO, said: “By joining Microsoft, we can unlock new synergies and make it easier for our mutual customers to protect their multi-cloud and hybrid environments and strengthen their security posture.”

#access-management, #active-directory, #ceo, #cloud-computing, #cloud-infrastructure, #computer-security, #computing, #cyberx, #dell-technologies-capital, #foundation-capital, #microsoft, #palo-alto-networks, #riskiq, #security, #security-startups, #technology

“Developers, as you know, do not like to pay for things”

In the previous part of this EC-1, we looked at the technical details of CockroachDB and how it provides accurate data instantaneously anywhere on the planet. In this installment, we’re going to take a look at the product side of Cockroach, with a particular focus on developer relations.

As a business, Cockroach Labs has many things going for it. The company’s approach to distributed database technology is novel. And, as more companies operate on a global level, CockroachDB has the potential to gain some significant market share internationally. The company is seven years into a typical 10-year maturity model for databases, has raised $355 million, and holds a $2 billion market value. It’s considered a double unicorn. Few database companies can say this.

The company is now aggressively expanding into the database-as-a-service space, offering its own technology in a fully managed package, expanding the spectrum of clients who can take immediate advantage of its products.

But its growth depends upon securing the love of developers while also making its product easier to use for new customers. To that end, I’m going to analyze the company’s pivot to the cloud as well as its extensive outreach to developers as it works to set itself up for long-term, sustainable success.

Cockroach Labs looks to the cloud

These days, just about any company of consequence provides services via the internet, and a growing number of these services are powered by products and services from native cloud providers. Gartner forecasted in 2019 that cloud services are growing at an annual rate of 17.5%, and there’s no sign that the growth has abated at all.

Its founders’ history with Google back in the mid-2000s has meant that Cockroach Labs has always been aware of the impact of cloud services on the commercial web. Unsurprisingly, CockroachDB could run cloud native right from its first release, given that its architecture presupposes the cloud in its operation — as we saw in part 2 of this EC-1.

#cloud, #cloud-computing, #cloud-infrastructure, #cockroach-labs, #cockroachdb, #cockroachdb-ec-1, #database-management, #databases, #distributed-computing, #ec-cloud-and-enterprise-infrastructure, #ec-enterprise-applications, #ec-1, #enterprise, #mysql, #oracle, #relational-database, #saas, #startups, #tc

Microsoft launches Windows 365

Microsoft today launched Windows 365, a service that lets businesses easily give their employees access to a Windows 10 desktop from the cloud (with Windows 11 coming once it’s generally available). Think game streaming, but for your desktop. It’ll be available for business users (and only business users) on August 2, 2021.

Announced through a somewhat inscrutable press release, Windows 365 has been long expected and is really just an evolution of existing remote desktop services.

But hey, you may say, doesn’t Microsoft already offer Azure Virtual Desktop that gives businesses the option to let their employees access a Windows PC in the cloud? Yes, but the difference seems to be that Windows 365 is far easier to use and involves none of the complexity of setting up a full Azure Virtual Desktop environment in the Azure cloud.

But couldn’t Microsoft have made Azure Virtual Desktop easier to use instead of launching yet another virtual desktop service? Yes, but Azure Virtual Desktop is very much an enterprise service and by default, that means it must play nicely with the rest of the complexities of a company’s existing infrastructure. The pandemic pressed it into service in smaller companies because they had few alternatives, but in many ways, today’s launch is Microsoft admitting that it was far too difficult to manage for them. Windows 365, on the other hand, is somewhat of a fresh slate. It’s also available through a basic subscription service.

“Microsoft also continues to innovate in Azure Virtual Desktop for those organizations with deep virtualization experience that want more customization and flexibility options,” the company says. At least we know why the company renamed Windows Virtual Desktop to Azure Virtual Desktop now. That would’ve gotten quite confusing.

This also gives Microsoft the opportunity to talk about “a new hybrid personal computing category” its CEO Satya Nadella calls a ‘Cloud PC.’ It’s a bit unclear what exactly that’s supposed to be, but it’s a new category.

“Just like applications were brought to the cloud with SaaS, we are now bringing the operating system to the cloud, providing organizations with greater flexibility and a secure way to empower their workforce to be more productive and connected, regardless of location,” Nadella explains in today’s press release.

But isn’t that just a thin client? Maybe? But we’re not talking hardware here. It’s really just a virtualized operating system in the cloud that you can access from anywhere — and that’s a category that’s been around for a long time.

“Hybrid work has fundamentally changed the role of technology in organizations today,” said Jared Spataro, corporate vice president, Microsoft 365. “With workforces more disparate than ever before, organizations need a new way to deliver a great productivity experience with increased versatility, simplicity and security. Cloud PC is an exciting new category of hybrid personal computing that turns any device into a personalized, productive and secure digital workspace. Today’s announcement of Windows 365 is just the beginning of what will be possible as we blur the lines between the device and the cloud.”

 

 

#ceo, #cloud, #cloud-infrastructure, #computing, #jared-spataro, #microsoft, #microsoft-365, #microsoft-windows, #operating-system, #satya-nadella, #tc, #technology, #thin-client, #thin-clients, #windows, #windows-10

Why former Alibaba scientist wants to back founders outside the Ivory Tower

Min Wanli had a career path much coveted by those pursuing a career in computer science. A prodigy, Min was accepted to a top research university in China at the age of 14. He subsequently obtained Ph.D. degrees in physics and statistics from the University of Chicago before spending nearly a combined decade at IBM and Google.

Like many young, aspiring Chinese scientists working in the United States, Min returned to China when the country’s internet boom was underway in the early 2010s. He joined Alibaba’s fledgling cloud arm and was at the forefront of applying its tech to industrial scenarios, like using visual identification to mitigate highway traffic and computing power to improve factory efficiency.

Then in July 2019, Min took a leap. He resigned from Alibaba Cloud, which had become a major growth driver for the e-commerce goliath and at the time China’s largest public cloud infrastructure provider (it still is). With no experience in investment, he started a new venture capital firm called North Summit Capital.

“A lot of enterprises were quite skeptical of ‘digital transformation’ around 2016 and 2017. But by 2019, after they had seen success cases [from Alibaba Cloud], they no longer questioned its viability,” said Min in his office overlooking a cluster of urban villages and highrise offices in Shenzhen. Clad in a well-ironed light blue shirt, he talked with a childlike, earnest smile.

“Suddenly, everyone wanted to go digital. But how am I supposed to meet their needs with a team of just 400-500 people?”

Min’s solution was not to serve the old-school factories and corporations himself but to finance and support a raft of companies that would do so. Soon he closed the first fund for North Summit with “several hundreds of millions of dollars” from an undisclosed high-net-worth individual from the United Arab Emirates, whom Min had met when he represented Alibaba at a Dubai tech conference in 2018.

“Venture capital is like a magnifier through which I can connect with a lot of tech companies and share my lessons from the past, so they can quickly and effectively work with their clients from traditional industries,” Min said.

“For example, I’d discuss with my portfolio firms whether they should focus on selling hardware pieces or software first, or give them equal weight.”

Min strives to be deeply involved in the companies he backs. North Summit invests early, with check sizes so far ranging from roughly $5 million to $25 million. Min also started a technology service company called Quadtalent to provide post-investment support to his portfolio.

Photo: North Summit Capital’s office in Shenzhen

The notion of digital transformation is both buzzy and daunting for many investors due to the highly complex and segmented nature of traditional industries. But Min has a list of criteria to help narrow down his targets.

First, an investable area should be data-intensive. Subway tracks, for example, could benefit from deploying large numbers of sensors that monitor the rail system’s status. Second, an area’s manufacturing or business process should be capital-intensive, such as production lines that use exorbitant equipment. And lastly, the industry should be highly dependent on repetitive human experience, like police directing traffic.

Solving industrial problems requires not just founders’ computing ingenuity but, more critically, their experience in a traditional sector. As such, Min goes beyond the “Ivory Tower” of computer science wizards when he looks for entrepreneurs.

“What we need today is a type of inter-disciplinary talent who can do ‘compound algorithms.’ That means understanding sensor signals, business rationales, manufacturing, as well as computer algorithms. Applying neural network through an algorithmic black box without the other factors is simply futile.”

Min faces ample competition as investors hunt down the next ABB, Schneider, or Siemens of China. The country is driving towards technological independence in all facets of the economy and the national mandate takes on new urgency as COVID-19 disrupts global supply chains. The result is skyrocketing valuations for startups touting “industrial upgrade” solutions, Min noted.

But factory bosses don’t care whether their automation solution providers are underdogs or startup unicorns. “At the end of the day, the factory CFO will only ask, ‘how much more money does this piece of software or equipment help us save or make?’”

The investor is cautious about deploying his maiden fund. Two years into operation, North Summit has closed four deals: TopScore, a 17-year-old footwear manufacturer embracing automation; Lingumi, a London-based English learning app targeting Chinese pre-school kids; Aerodyne, a Malaysian drone service provider; and Extreme Vision, a marketplace connecting small-and-medium enterprises to affordable AI vision solutions. 

This year, North Summit aims to invest close to $100 million in companies inside and outside China. Optical storage and robotic process automation (RPA) are just two areas that have been on Min’s radar in recent days.

#abb, #alibaba, #alibaba-cloud, #alibaba-group, #asia, #china, #cloud-computing, #cloud-infrastructure, #computing, #dubai, #funding, #ibm, #manufacturing, #siemens, #tc, #united-arab-emirates, #university-of-chicago, #venture-capital

Want in on the next $100B in cybersecurity?

As a Battery Ventures associate in 1999, I used to spend my nights highlighting actual magazines called Red Herring, InfoWorld and The Industry Standard, plus my personal favorites StorageWorld and Mass High Tech (because the other VC associates rarely scanned these).

As a 23-year-old, I’d circle the names of much older CEOs who worked at companies like IBM, EMC, Alcatel or Nortel to learn more about what they were doing. The companies were building mainframe-to-server replication technologies, IP switches and nascent web/security services on top.

Flash forward 22 years and, in a way, nothing has changed. We have gone from command line to GUI to now API as the interface innovation. But humans still need an interface, one that works for more types of people on more types of devices. We no longer talk about the OSI stack — we talk about the decentralized blockchain stack. We no longer talk about compute, data storage and analysis on a mainframe, but rather on the cloud.

The problems and opportunities have stayed quite similar, but the markets and opportunities have gotten much larger. AWS and Azure cloud businesses alone added $23 billion of run-rate revenue in the last year, growing at 32% and 50%, respectively — high growth on an already massive base.

The size of the cybersecurity market, in particular, has gotten infinitely larger as software eats the world and more people are able to sit and feast at the table from anywhere on Earth (and, soon enough, space).

Over the course of the last few months, my colleague Spencer Calvert and I released a series of pieces about why this market opportunity is growing so rapidly: the rise of multicloud environments, data being generated and stored faster than anyone can keep up with it, SaaS applications powering virtually every function across an organization and CISOs’ rise in political power and strategic responsibility.

This all ladders up to an estimated — and we think conservative — $100 billion of new market value by 2025 alone, putting total market size at close to $280 billion.

In other words, opportunities are ripe for massive business value creation in cybersecurity. We think many unicorns will be built in these spaces, and while we are still in the early innings, there are a few specific areas where we’re looking to make bets (and one big-picture, still-developing area). Specifically, Upfront is actively looking for companies building in:

  1. Data security and data abstraction.
  2. Zero-trust, broadly applied.
  3. Supply chains.

Data security and abstraction

Data is not a new thesis, but I am excited to look at the change in data stacks from an initial cybersecurity lens. What set of opportunities can emerge if we view security at the bottom of the stack — foundational — rather than as an application at the top or to the side?

Image Credits: Upfront Ventures

For example, data is expanding faster than we can secure it. We need to first know where the (structured and unstructured) data is located and what is being stored, confirm proper security posture, and prioritize fixing the most important issues at the right speed.

Doing this at scale requires smart passive mapping, along with heuristics and rules to pull the signal from the noise in an increasingly data-rich (noisy) world. Open Raven, an Upfront portfolio company, is building a solution to discover and protect structured and unstructured data at scale across cloud environments. New large platform companies will be built in the data security space as the point of control moves from the network layer to the data layer.
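
To make the mapping-plus-heuristics step concrete, here is a minimal sketch of pattern-based data classification; it illustrates the general technique, not Open Raven’s implementation, and the patterns and bucket names are hypothetical.

    import re

    # Hypothetical heuristics: regexes that suggest sensitive data. A real
    # scanner combines many more signals (file type, location, access
    # policy) to pull the signal from the noise.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def classify(record: str) -> list[str]:
        """Return the label of every sensitive pattern found in a record."""
        return [label for label, rx in PATTERNS.items() if rx.search(record)]

    def triage(records: dict[str, str]) -> list[tuple[str, list[str]]]:
        """Rank data locations by how many kinds of sensitive data they hold."""
        hits = {loc: classify(text) for loc, text in records.items()}
        return sorted(((loc, h) for loc, h in hits.items() if h),
                      key=lambda pair: len(pair[1]), reverse=True)

    sample = {
        "s3://bucket-a/users.csv": "alice@example.com, 123-45-6789",
        "s3://bucket-b/notes.txt": "quarterly planning notes",
    }
    for location, labels in triage(sample):
        print(location, "->", labels)  # flags bucket-a, ignores bucket-b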

We believe Open Raven is poised to be a leader in this space and also will power a new generation of “output” or application companies yet to be funded. These companies could be as big as Salesforce or Workday, built with data abstracted and managed differently from the start.

If we look at security data at the point it is created or discovered, new platforms like Open Raven may lead to the emergence of an entirely new ecosystem of apps, ranging from those Open Raven is most likely to build in-house — like compliance workflows — to entirely new companies that rebuild apps we have used since the beginning of time, which includes everything from people management systems to CRMs to product analytics to your marketing attribution tools.

Platforms that lead with a security-first, foundational lens have the potential to power a new generation of applications companies with a laser-focus on the customer engagement layer or the “output” layer, leaving the data cataloging, opinionated data models and data applications to third parties that handle data mapping, security and compliance.

Image Credits: Upfront Ventures

Put simply, if full-stack applications look like layers of the Earth, with UX as the crust, that crust can become better and deeper with foundational horizontal companies underneath meeting the requirements around personally identifiable information and GDPR that are foisted upon companies whose data currently lives everywhere. This can free up time for new application companies to focus their creative talent even more deeply on the human-to-software engagement layer, building superhuman apps for every existing category.

Zero-trust

The term “zero-trust” was first coined in 2010, but applications are still being discovered and large businesses are being built around the idea. Zero-trust, for those getting up to speed, is the assumption that anyone accessing your systems, devices, etc., is a bad actor.

This could sound paranoid, but think about the last time you visited a Big Tech campus. Could you walk in past reception and security without a guest pass or name badge? Absolutely not. The same goes for virtual spaces and access. My first in-depth course on zero-trust security was with Fleetsmith, a young team building software to manage apps, settings and security preferences for organizations powered by Apple devices; I invested in the company in 2017. Zero-trust in the context of Fleetsmith was about device setup and permissions. Fleetsmith was acquired by Apple in mid-2020.

About the same time as the Fleetsmith acquisition, I met Art Poghosyan and the team at Britive. This team is also deploying zero-trust for dynamic permissioning in the cloud. Britive is being built on the premise of zero-trust just-in-time (JIT) access, whereby users are granted ephemeral access dynamically rather than through the legacy process of “checking out” and “checking in” credentials.

By granting temporary privileged access instead of “always-on” credentials, Britive is able to drastically reduce the cyber risks associated with over-privileged accounts, the time it takes to manage privileged access, and the workflow overhead of privileged access management across multicloud environments.
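
A rough sketch of the JIT idea follows; the names and in-memory store are hypothetical and this is not Britive’s API, but it shows the core mechanic: every grant carries an expiry, and every use re-validates it, so nothing is “always on.”

    import secrets
    import time

    # Hypothetical in-memory grant store: token -> (user, privilege, expiry).
    _grants: dict[str, tuple[str, str, float]] = {}

    def grant(user: str, privilege: str, ttl_seconds: int = 300) -> str:
        """Issue an ephemeral credential instead of an always-on one."""
        token = secrets.token_urlsafe(16)
        _grants[token] = (user, privilege, time.time() + ttl_seconds)
        return token

    def authorize(token: str, privilege: str) -> bool:
        """Zero-trust check: re-validate the grant and its expiry on every use."""
        entry = _grants.get(token)
        if entry is None:
            return False
        _user, granted_priv, expires_at = entry
        if time.time() > expires_at or granted_priv != privilege:
            _grants.pop(token, None)  # expired or mismatched: revoke it
            return False
        return True

    token = grant("alice", "db:read", ttl_seconds=2)
    print(authorize(token, "db:read"))  # True while the grant is live
    time.sleep(3)
    print(authorize(token, "db:read"))  # False once the TTL lapses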

What’s next in zero-based trust (ZBT)? We see device and access as the new perimeter, as workers flex devices and locations for their work and have invested around this with Fleetsmith and now Britive. But we still think there is more ground to cover for ZBT to permeate more mundane processes. Passwords are an example of something that is, in theory, zero-trust (you must continually prove who you are). But they are woefully inadequate.

Phishing attacks to steal passwords are the most common path to data breaches. But how do you get users to adopt password managers, password rotation, two-factor authentication or even passwordless solutions? We want to back simple, elegant solutions that instill ZBT elements into common workflows.

Supply chains

Modern software is assembled using third-party and open-source components. This assembly line of public code packages and third-party APIs is known as a supply chain. Attacks that target this assembly line are referred to as supply chain attacks.

Some supply chain attacks can be mitigated by existing application-security tools: software composition analysis (SCA) tools like Snyk for open-source dependencies, Bridgecrew for automating security engineering and fixing misconfigurations, and Veracode for security scanning.

But other vulnerabilities can be extremely challenging to detect. Take the supply chain attack that took center stage — the SolarWinds hack of 2020 — in which a small snippet of code was altered in a SolarWinds update before spreading to 18,000 different companies, all of which relied on SolarWinds software for network monitoring or other services.

Image Credits: Upfront Ventures

How do you protect yourself from malicious code hidden in a version update of a trusted vendor that passed all of your security onboarding? How do you maintain visibility over your entire supply chain? Here we have more questions than answers, but securing supply chains is a space we will continue to explore, and we predict large companies will be built to securely vet, onboard, monitor and offboard third-party vendors, modules, APIs and other dependencies.
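
One narrow but concrete defense is integrity pinning: recording a cryptographic hash of every third-party artifact at onboarding and refusing anything that later drifts. The sketch below uses a hypothetical lockfile; note that pinning alone would not have caught SolarWinds, where the code was altered before the update was signed, which is part of why this space still has more questions than answers.

    import hashlib

    # Hypothetical lockfile: artifact name -> SHA-256 recorded at onboarding.
    LOCKFILE = {
        "vendor-update-2.4.1.tar.gz": "<digest pinned at onboarding>",
    }

    def verify(name: str, payload: bytes) -> bool:
        """Reject any artifact whose hash no longer matches the pinned value."""
        expected = LOCKFILE.get(name)
        actual = hashlib.sha256(payload).hexdigest()
        return expected is not None and actual == expected

    # A tampered update, as in a supply chain attack, fails the check:
    print(verify("vendor-update-2.4.1.tar.gz", b"malicious payload"))  # False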

If you are building in any of the above spaces, or adjacent spaces, please reach out. We readily acknowledge that the cybersecurity landscape is rapidly changing, and if you agree or disagree with any of the arguments above, I want to hear from you!

#cloud, #cloud-computing, #cloud-infrastructure, #column, #cybersecurity, #data-management, #security, #software-as-a-service, #technology, #venture-capital

Vercel raises $102M Series C for its front-end development platform

Vercel, the company behind the popular open-source Next.js React framework, today announced that it has raised a $102 million Series C funding round led by Bedrock Capital. Existing investors Accel, CRV, Geodesic Capital, Greenoaks Capital and GV also participated in this round, together with new investors 8VC, Flex Capital, GGV, Latacora, Salesforce Ventures and Tiger Global. In total, the company has now raised $163 million and its current valuation is $1.1 billion.

As Vercel notes, the company saw strong growth in recent months, with traffic to all sites and apps on its network doubling since October 2020. About half of the world’s largest 10,000 websites now use Next.js. Given the open-source nature of the Next.js framework, not all of these users are necessarily Vercel customers, but its current paying customers include the likes of Carhartt, GitHub, IBM, McDonald’s and Uber.

Image Credits: Vercel

“For us, it all starts with a front-end developer,” Vercel CEO Guillermo Rauch told me. “Our goal is to create and empower those developers — and their teams — to create delightful, immersive web experiences for their customers.”

With Vercel, Rauch and his team took the Next.js framework and then built a serverless platform that specifically caters to this framework and allows developers to focus on building their front ends without having to worry about scaling and performance.

Older solutions, Rauch argues, were built in isolation from the cloud platforms and serverless technologies, leaving it up to the developers to deploy and scale their solutions. And while some potential users may also be content with using a headless content management system, Rauch argues that increasingly, developers need to be able to build solutions that can go deeper than the off-the-shelf solutions that many businesses use today.

Rauch also noted that developers really like Vercel’s ability to generate a preview URL for a site’s front end every time a developer edits the code. “So instead of just spending all your time in code review, we’re shifting the equation to spending your time reviewing or experiencing your front end. That makes the experience a lot more collaborative,” he said. “So now, designers, marketers, IT, CEOs […] can now come together in this collaboration of building a front end and say, ‘that shade of blue is not the right shade of blue.’”

“Vercel is leading a market transition through which we are seeing the majority of value-add in web and cloud application development being delivered at the front end, closest to the user, where true experiences are made and enjoyed,” said Geoff Lewis, founder and managing partner at Bedrock. “We are extremely enthusiastic to work closely with Guillermo and the peerless team he has assembled to drive this revolution forward and are very pleased to have been able to co-lead this round.”

#bedrock-capital, #ceo, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #content-management-system, #developer, #funding, #fundings-exits, #geodesic-capital, #geoff-lewis, #github, #greenoaks-capital, #ibm, #managing-partner, #mcdonalds, #react, #recent-funding, #salesforce, #salesforce-ventures, #serverless-computing, #software, #startups, #tc, #tiger-global

Vantage raises $4M to help businesses understand their AWS costs

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and help businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”

Image Credits: Vantage

Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is because more and more, AWS users have become accustomed to infrastructure-as-code to do their own automatic provisioning. And with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time twelve times a year, where they would get a shock AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.

Image Credits: Vantage

While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services instead.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool that’s showing you, essentially, all of your cloud costs in one space.”

That is likely the vision the investors bought into as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with them. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and Cloud Health are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), both on the development and go-to-market side.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

#amazon-web-services, #andreessen-horowitz, #apptio, #aws, #brianne-kimmel, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloudability, #cloudflare, #computing, #datadog, #enterprise, #information-technology, #machine-learning, #recent-funding, #startups, #sumo-logic, #tc, #technology, #vmware

Elisity raises $26M Series A to scale its AI cybersecurity platform

Elisity, a self-styled innovator that provides behavior-based enterprise cybersecurity, has raised $26 million in Series A funding.

The funding round was co-led by Two Bear Capital and AllegisCyber Capital, the latter of which has invested in a number of cybersecurity startups including Panaseer, with previous seed investor Atlantic Bridge also participating.

Elisity, which is led by industry veterans from Cisco, Qualys, and Viptela, says the funding will help it meet growing enterprise demand for its cloud-delivered Cognitive Trust platform, which it claims is the only platform intelligent enough to understand how assets and people connect beyond corporate perimeters.

The platform looks to help organizations transition from legacy access approaches to zero trust, a security model based on maintaining strict access controls and not trusting anyone — even employees — by default, across their entire digital footprint. This enables organizations to adopt a ‘work-from-anywhere’ model, according to the company, which notes that most companies today continue to rely on security and policies based on physical location or low-level networking constructs, such as VLAN, IP and MAC addresses, and VPNs.

Cognitive Trust, the company claims, can analyze the unique identity and context of people, apps and devices, including Internet of Things (IoT) and operational technology (OT), wherever they’re working. With its AI-driven behavioral intelligence, the company says, the platform can also continuously assess risk and instantly optimize access, connectivity and protection policies.

“CISOs are facing ever increasing attack surfaces caused by the shift to remote work, reliance on cloud-based services (and often multi-cloud), and the convergence of IT/OT networks,” said Mike Goguen, founder and managing partner at Two Bear Capital. “Elisity addresses all of these problems by not only enacting a zero trust model, but by doing so at the edge and within the behavioral context of each interaction. We are excited to partner with the CEO, James Winebrenner, and his team as they expand the reach of their revolutionary approach to enterprise security.”

Founded in 2018, Elisity, whose competitors include the likes of Vectra AI and Lastline, closed a $7.5 million seed round in August of that same year, led by Atlantic Bridge. With its seed round, Elisity began scaling its engineering, sales and marketing teams to ramp up ahead of the platform’s launch.

Now it’s looking to scale in order to meet growing enterprise demand, which comes as many organizations move to a hybrid working model and seek the tools to help them secure distributed workforces. 

“When the security perimeter is no longer the network, we see an incredible opportunity to evolve the way enterprises connect and protect their people and their assets, moving away from strict network constructs to identity and context as the basis for secure access,” said Winebrenner. 

“With Elisity, customers can dispense with the complexity, cost and protracted timeline enterprises usually encounter. We can onboard a new customer in as little as 45 minutes, rather than months or years, moving them to an identity-based access policy, and expanding to their cloud and on-prem[ise] footprints over time without having to rip and replace existing identity providers and network infrastructure investments. We do this without making tradeoffs between productivity for employees and the network security posture.”

Elisity, which is based in California, currently employs around 30 staff. However, it currently has no women in its leadership team, nor on its board of directors. 

#allegiscyber-capital, #artificial-intelligence, #california, #ceo, #cisco, #cloud-computing, #cloud-infrastructure, #computer-security, #computing, #funding, #lastline, #managing-partner, #operational-technology, #qualys, #security, #technology, #viptela

Edge computing startup Macrometa gets $20M Series A led by Pelion Venture Partners

Macrometa, the edge computing cloud and global data network for app developers, announced today it has raised a $20 million Series A. The round was led by Pelion Venture Partners, with participation from returning investors DNX Ventures (the Japan and US-focused enterprise fund that led Macrometa’s seed round), Benhamou Global Ventures (BGV), Partech Partners, Fusion Fund, Sway Ventures and Shasta Ventures.

The startup, which is headquartered in Palo Alto with operations in Bulgaria and India, plans to use its Series A on feature development, acquiring more enterprise customers and integrating with content delivery networks (CDN), cloud and telecom providers. It will hire for its engineering and product development centers in the United States, Eastern Europe and India, and add new centers in Ukraine, Portugal, Greece, Mexico and Argentina.

The company’s last round of funding, a $7 million seed, was announced just eight months ago. Its Series A brings Macrometa’s total raised since it was founded in 2017 to $29 million.

As part of the new round, Macrometa expanded its board of directors, adding Pelion general partner Chris Cooper as a director, and Pelion senior associate Zain Rizavi and DNX Ventures principal Eva Nahari as board observers.

Macrometa’s global data network combines a globally distributed NoSQL database and a low-latency stream data processing engine, enabling web and cloud developers to run and scale data-heavy, real-time cloud applications. The network allows developers to run apps concurrently across its 175 points of presence (PoPs), or edge regions, around the world, depending on which one is closest to an end user. Macrometa claims that the mean round-trip time (RTT) for users on laptops or phones to its edge cloud and back is less than 50 milliseconds globally, or 50x to 100x faster than cloud platforms like DynamoDB, MongoDB or Firebase.
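
The nearest-PoP idea is easy to sketch: probe each candidate edge region and route to the lowest round-trip time. The endpoints below are hypothetical, and production edge networks typically steer traffic with anycast or DNS rather than client-side probes like this.

    import socket
    import time

    # Hypothetical edge PoP endpoints; a real network has ~175 of them.
    POPS = [
        "edge-us-west.example.com",
        "edge-eu-central.example.com",
        "edge-ap-south.example.com",
    ]

    def rtt(host: str, port: int = 443, timeout: float = 1.0) -> float:
        """Approximate RTT with a TCP handshake; unreachable hosts rank last."""
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.perf_counter() - start
        except OSError:
            return float("inf")

    def nearest_pop(pops: list[str]) -> str:
        """Pick the region with the lowest measured round-trip time."""
        return min(pops, key=rtt)

    print(nearest_pop(POPS))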

Macrometa co-founder and CEO Chetan Venkatesh

Since its seed round last year, the company has accelerated its customer acquisition, especially among large global enterprises and web scale players, co-founder and chief executive officer Chetan Venkatesh told TechCrunch. Macrometa also made its self-service platform available to developers, who can try its serverless database, pub/sub, event processing and stateful compute runtime for free.

Macrometa recently became one of two distributed data companies (the other is Fauna) to partner with Cloudflare for developers building new apps on Workers, its serverless application platform. Venkatesh said the combination of Macrometa and Cloudflare Workers enables data-driven APIs and web services to be 50x to 100x faster in performance and lower in latency compared to the public cloud.

The COVID-19 pandemic accelerated Macrometa’s business significantly, said Venkatesh, because its enterprise and web scale customers needed to handle the unpredictable data traffic patterns created by remote work. The pandemic also “resulted in several secular and permanent shifts in cloud adoption and consumption,” he added, changing how people shop, consume media, content and entertainment. That has “exponentially increased the need for handling dynamic bursts of demands for application infrastructure securely,” he said.

One example of how enterprise clients use Macrometa: e-commerce providers have implemented its infrastructure alongside their existing CDN and cloud backends to provide more data- and AI-based personalization for shoppers, including real-time recommendations, regionalized search at the edge and local data geo-fencing to comply with data and privacy regulations.

Some of Macrometa’s SaaS clients use its global data network as a global data cache for handling surges in usage and keep regional copies of data and API results across its regional data centers. Venkatesh added that several large telecom operators have used Macrometa’s data stream ingestion and complex event processing platform to replace legacy data ingest platforms like Splunk, Tibco and Apache Kafka.

In a statement, Pelion Venture Partners general partner Chris Cooper said, “We believe the next phase of computing will be focused on the edge, ultimately bringing cloud-based workloads closer to the end user. As more and more workloads move away from a centralized cloud model, Macrometa is becoming the de facto edge provider to run data-heavy and compute-intensive workloads for developers and enterprises alike, globally.”

#app-developers, #cloud-infrastructure, #edge-computing, #fundings-exits, #macrometa, #serverless-computing, #startups, #tc

The rise of cybersecurity debt

Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.

But once again, amid this flurry of activity, we must ask and answer a fundamental question about the state of our cybersecurity defense: Why does this keep happening?

I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.

Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.

How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.

We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.

First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.

There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.

The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.

Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.

Ensuring this requires confidential computing, which encrypts data at rest, in transit and in use. Confidential computing makes it technically impossible for anyone without the encryption key to access the data, not even your cloud provider. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones that hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.
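
Confidential computing itself depends on hardware enclaves, but the key-custody property is easy to illustrate with plain client-side encryption: if the customer encrypts before uploading and keeps the key, the provider only ever stores ciphertext. A minimal sketch using the open-source cryptography package (an illustration, not IBM’s implementation):

    # pip install cryptography
    from cryptography.fernet import Fernet

    # The customer generates and holds the key; it never leaves their control.
    key = Fernet.generate_key()
    f = Fernet(key)

    # Only ciphertext is handed to the provider; without the key it is opaque.
    stored_in_cloud = f.encrypt(b"quarterly payroll data")

    # Only the key holder can recover the plaintext.
    print(f.decrypt(stored_in_cloud))  # b'quarterly payroll data'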

Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.

I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.

#cloud-computing, #cloud-infrastructure, #cloud-management, #colonial-pipeline, #column, #cybersecurity, #cyberwarfare, #data-security, #developer, #encryption, #opinion, #security, #software-development, #tc

Microsoft brings more of its Azure services to any Kubernetes cluster

At its Build developer conference today, Microsoft announced a new set of Azure services (in preview) that businesses can now run on virtually any CNCF-conformant Kubernetes cluster with the help of its Azure Arc multi-cloud service.

Azure Arc, similar to tools like Google’s Anthos or AWS’s upcoming EKS Anywhere, provides businesses with a single tool to manage their container clusters across clouds and on-premises data centers. Since its launch back in late 2019, Arc has enabled some of the core Azure services to run directly in these clusters as well, though the early focus was on a small set of data services, with the team later adding some machine learning tools to Arc. With today’s update, the company is greatly expanding this set of containerized Azure services that work with Arc.

These new services include Azure App Service for building and managing web apps and APIs, Azure Functions for event-driven programming, Azure Logic Apps for building automated workflows, Azure Event Grid for event routing, and Azure API Management for… you guessed it… managing internal and external APIs.

“The app services are now Azure Arc-enabled, which means customers can deploy Web Apps, Functions, API gateways, Logic Apps and Event Grid services on pre-provisioned Kubernetes clusters,” Microsoft explained in its annual “Book of News” for this year’s Build. “This takes advantage of features including deployment slots for A/B testing, storage queue triggers and out-of-box connectors from the app services, regardless of run location. With these portable turnkey services, customers can save time building apps, then manage them consistently across hybrid and multicloud environments using Azure Arc.”

#api, #aws, #azure, #azure-arc, #cloud-computing, #cloud-infrastructure, #computing, #google-cloud-platform, #kubernetes, #machine-learning, #microsoft, #microsoft-build-2021, #microsoft-azure, #tc, #web-apps

Google Cloud launches Vertex AI, a new managed machine learning platform

At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.

The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”

Image Credits: Google

Wiley, who was also the general manager of AWS’s SageMaker AI service from 2016 to 2018 before coming to Google in 2019, noted that Google and others who were able to make machine learning work for themselves saw how it can have a transformational impact, but he also noted that the way the big clouds started offering these services was by launching dozens of services, “many of which were dead ends,” according to him (including some of Google’s own). “Ultimately, our goal with Vertex is to reduce the time to ROI for these enterprises, to make sure that they can not just build a model but get real value from the models they’re building.”

Vertex, then, is meant to be a very flexible platform that allows developers and data scientists across skill levels to quickly train models. Google says it takes about 80% fewer lines of code to train a model versus some of its competitors, for example, and the platform then helps them manage the entire lifecycle of these models.

Image Credits: Google

The service is also integrated with Vizier, Google’s AI optimizer that can automatically tune hyperparameters in machine learning models. This greatly reduces the time it takes to tune a model and allows engineers to run more experiments and do so faster.

Vertex also offers a “Feature Store” that helps its users serve, share and reuse machine learning features, as well as Vertex Experiments to help them accelerate the deployment of their models into production with faster model selection.

Deployment is backed by a continuous monitoring service and Vertex Pipelines, a rebrand of Google Cloud’s AI Platform Pipelines, which helps teams manage the workflows involved in preparing and analyzing data for the models, training them, evaluating them and deploying them to production.

To give a wide variety of developers the right entry points, the service provides three interfaces: a drag-and-drop tool, notebooks for advanced users and — and this may be a bit of a surprise — BigQuery ML, Google’s tool for using standard SQL queries to create and execute machine learning models in its BigQuery data warehouse.
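
BigQuery ML models really are created with plain SQL. Below is a minimal sketch using the google-cloud-bigquery Python client; the dataset, table and column names are hypothetical.

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    client = bigquery.Client()  # uses your default GCP project/credentials

    # Train a logistic regression inside the warehouse; the column aliased
    # as `label` is what the model learns to predict.
    client.query("""
        CREATE OR REPLACE MODEL `my_dataset.churn_model`
        OPTIONS (model_type = 'logistic_reg') AS
        SELECT churned AS label, tenure_months, monthly_spend
        FROM `my_dataset.customers`
    """).result()  # blocks until training finishes

    # Predictions are also just SQL:
    rows = client.query("""
        SELECT *
        FROM ML.PREDICT(MODEL `my_dataset.churn_model`,
            (SELECT tenure_months, monthly_spend FROM `my_dataset.customers`))
    """).result()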

“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”

#amazon-sagemaker, #analyst, #andrew-moore, #artificial-intelligence, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #google, #google-cloud-platform, #google-i-o-2021, #harvard, #machine-learning, #product-management, #tc, #technology, #web-developers, #world-wide-web

Google Cloud Run gets committed use discounts and new security features

Cloud Run, Google Cloud’s serverless platform for containerized applications, is getting committed use discounts. Users who commit to spending a given amount on using Cloud Run for a year will get a 17% discount on the money they commit. The company offers a similar pre-commitment discount scheme for VM-based Compute Engine instances, as well as automatic ‘sustained use‘ discounts for machines that run for more than 25% of a month.
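
The discount arithmetic is simple enough to sanity-check; the figures below are illustrative only.

    commit = 10_000.00  # dollars committed to Cloud Run for a year
    discount = 0.17     # committed use discount

    effective = commit * (1 - discount)
    print(f"${commit:,.2f} committed costs ${effective:,.2f} after the discount")
    # -> $10,000.00 committed costs $8,300.00 after the discount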

In addition, Google Cloud is also introducing a number of new security features for Cloud Run, including the ability to mount secrets from the Google Cloud Secret Manager and binary authorization to help define and enforce policies about how containers are deployed on the service. Cloud Run users can now also use and manage their own encryption keys (by default, Cloud Run uses Google-managed keys), and a new Recommendation Hub inside of Cloud Run will offer users recommendations for how to better protect their Cloud Run services.

Aparna Sinha, who recently became the director of product management for Google Cloud’s serverless platform, noted that these updates are part of Google Cloud’s push to build what she calls the “next generation of serverless.”

“We’re really excited to introduce our new vision for serverless, which I think is going to help redefine this space,” she told me. “In the past, serverless has meant a certain narrower type of compute, which is focused on functions or a very specific kind of applications, web services, etc. — and what we are talking about with redefining serverless is focusing on the power of serverless, which is the developer experience and the ease of use, but broadening it into a much more versatile platform, where many different types of applications can be run, and building in the Google way of doing DevOps and security and a lot of integrations so that you have access to everything that’s the best of cloud.”

She noted that Cloud Run saw “tremendous adoption” during the pandemic, something she attributes to the fact that businesses were looking to speed up time-to-value from their applications. IKEA, for example, which famously had a hard time moving from in-store to online sales, bet on Google Cloud’s serverless platform to bring down the refresh time of its online store and inventory management system from three hours to less than three minutes after switching to this model.

“That’s kind of the power of serverless, I think, especially looking forward, the ability to build real-time applications that have data about the context, about the inventory, about the customer and can therefore be much more reactive and responsive,” Sinha said. “This is an expectation that customers will have going forward and serverless is an excellent way to deliver that as well as be responsive to demand patterns, especially when they’re changing so much in today’s uncertain environment.”

Since the container model gives businesses a lot of flexibility in what they want to run in these containers — and how they want to develop these applications since Cloud Run is language-agnostic — Google is now seeing a lot of other enterprises move to this platform as well, both for deploying completely new applications but also to modernize some of their existing services.

For the companies that have predictable usage patterns, the committed use discounts should be an attractive option and it’s likely the more sophisticated organizations that are asking for the kinds of new security features that Google Cloud is introducing today.

“The next generation of serverless combines the best of serverless with containers to run a broad spectrum of apps, with no language, networking or regional restrictions,” Sinha writes in today’s announcement. “The next generation of serverless will help developers build the modern applications of tomorrow—applications that adapt easily to change, scale as needed, respond to the needs of their customers faster and more efficiently, all while giving developers the best developer experience.”

#aparna-sinha, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #encryption, #google, #google-cloud, #google-compute-engine, #ikea, #online-sales, #product-management, #serverless-computing, #web-services

Quix raises $3.2M from Project A and others for its ‘Stream centric’ approach to data

Quix, a platform for Python developers working on streaming data, has secured a £2.3 million ($3.2 million) seed funding round led by Project A Ventures in Germany, with participation from London’s Passion Capital and angel investors. Through the Quix Portal, the company is also providing developers with a free subscription to its real-time data engineering platform.

Quix attracted angel investors including Frank Sagnier (CEO, Codemasters), Ian Hogarth (Co-author, State of AI Report), Chris Schagen (CMO, Contentful), and Michael Schrezenmaier (COO, Pipedrive).

Quix wants to change the way data is handled and processed from a database-centric approach to a ‘stream-centric’ approach, connecting machine learning models to real-time data streams. This is arguably the next paradigm in computing.

Use cases for Quix, it says, include developing electric vehicles, and fraud prevention in financial services. Some of its early customers are the NHS, Deloitte and McLaren.

Indeed, the founding team consists of former McLaren F1 engineers who are used to processing real-time data streams from the systems used by most Formula 1 teams.

Co-founder and CEO Michael Rosam said: “At Quix, we believe that it will soon be essential for every organization to automatically action data within milliseconds of it being created. Whether it’s personalizing digital experiences, developing electric vehicles, automating industrial machinery, deploying smart wearables in healthcare, or detecting financial fraud faster, the ability to run machine learning models on live data streams and immediately respond to rapidly changing environments is critical to delivering better experiences and outcomes to people.”

Over email he told me that Quix’s main advantage is that it allows developers to build streaming applications on Kafka without investing in cloud infrastructure first: “Uniquely, our API & SDK connects any Python code directly to the broker so that teams can run real-time machine learning models in-memory, reducing latency and cost compared to database-centric architectures.”
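
The stream-centric pattern Rosam describes looks roughly like the sketch below. It uses the open-source kafka-python client rather than Quix’s own SDK, with hypothetical topic names, to show a model being applied in-memory as events arrive, with no database in the loop.

    # pip install kafka-python
    import json
    from kafka import KafkaConsumer, KafkaProducer

    def score(event: dict) -> float:
        """Stand-in for a real machine learning model held in memory."""
        return 1.0 if event.get("amount", 0) > 10_000 else 0.0

    consumer = KafkaConsumer(
        "payments",  # hypothetical input topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode()),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode(),
    )

    # Each event is scored the moment it streams past, then re-published.
    for message in consumer:
        event = message.value
        event["fraud_score"] = score(event)
        producer.send("payments-scored", event)  # hypothetical output topic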

Quix is entering the data ecosystem alongside batch data processing platforms like Snowflake and Databricks, and event streaming platforms like Confluent, Materialize, and DBT. However, this ecosystem is largely complementary, with organizations usually combining multiple products into a production infrastructure based on the strengths of each proposition.

Sam Cash of Project A Ventures said: “Data streaming is the next paradigm in data architecture, given end-users accelerating demand for live, on-demand and personalized applications. The Quix team are leading the way in this market, by democratizing access to data streaming infrastructure, which until now has been the reserve of the largest companies.”

Malin Posern, Partner at Passion Capital commented: “The world today is generating unimaginable amounts of data from digital and physical activities. Businesses of all types and sizes will want to make use of their data in real-time in order to be competitive.”

#api, #ceo, #cloud-infrastructure, #codemasters, #computing, #coo, #data-stream, #databricks, #deloitte, #europe, #financial-services, #germany, #healthcare, #ian-hogarth, #kafka, #machine-learning, #mclaren, #nhs, #passion-capital, #pipedrive, #project-a-ventures, #python, #streaming-applications, #streaming-data, #streaming-media, #tc, #technology, #wireless-networking

Emerging open cloud security framework has backing of Microsoft, Google and IBM

Each of the big cloud platforms has its own methodology for passing on security information to logging and security platforms, leaving it to the vendors to find proprietary ways to translate that into a format that works for their tools. The Cloud Security Notification Framework (CSNF), a new working group that includes Microsoft, Google and IBM, is trying to create a new open and standard way of delivering this information.

Nick Lippis, who is co-founder and co-chairman of ONUG, an open enterprise cloud community that is the primary driver of CSNF, says that what they’ve created is part standard and part open source. “What we’ve been really focusing on is how do we automate governance on the cloud. And so security was the place that was ripe for that, where we can actually provide some value right away for the community,” he said.

While they’ve pulled in some of the big cloud vendors, they’ve also got large companies who consume cloud services like FedEx, Pfizer and Goldman Sachs. Conspicuously missing from the group is AWS, the biggest player in the cloud infrastructure market by far. But Lippis says that he hopes as the project matures, other companies including AWS will join.

“There’s lots of security programs and industry programs that get out there and that people are asking them to join, and so some companies want to wait to see how well this pans out [before making a commitment to it],” Lippis said. His hope is that, over time, Amazon will come around and join the group, but in the meantime they are working to get to the point where everyone in the community will feel good about what they’re doing.

The idea is to start with security alerts and find a way to build a common format to give companies the same kind of system they have in the data center to track security alerts in the cloud. The way they hope to do that is with this open dialogue between the cloud vendors and the companies involved with the group.

“So the structure of that is that there’s a steering committee that is chaired by CISOs from these large cloud consumer brands, and also the cloud providers, and they provide voting and direction. And then there’s the working group where all the work is done. The beauty of what we do is that we have now consumers and also providers working together and collaborating,” he said.

Don Duet, a member of ONUG and the CEO and co-founder of Concourse Labs, has been involved in the formation of the CSNF. He says that to keep the project focused, they are looking at this as a data management problem and establishing a common vocabulary for everyone working within the group.

“How do you build a consensus on what are the types of terms that everybody can agree on, and then you build the underlying basis so that the experts at your resource providers, in this case the cloud service providers, can bless how their data [connects] to those common standards,” Duet explained.

He says that particular problem is more of an organizational problem than a technical one, getting the various stakeholders together and just building consensus around this. At this point, they have that process in place and the next step is proving it by having the various companies involved