Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform, as well as new developer tools that make it easier to deploy applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called the project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed as VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 


Pulumi launches version 3.0 of its infrastructure-as-code platform

Pulumi was one of the first of what is now a growing number of infrastructure-as-code startups and today, at its developer conference, the company is launching version 3.0 of its cloud engineering platform. With 70 new features and about 1,000 improvements since version 2.0, this is Pulumi’s biggest release yet.

The new release includes features that range from support for Google Cloud as an infrastructure provider (now in preview) to a new Automation API that turns Pulumi into a library that can be called from other applications. This allows developers to write tools that can, for example, provision and configure their own infrastructure for each customer of a SaaS application.
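
To make that concrete, here is a minimal sketch of an inline program driven through the Automation API’s Python SDK. The project, stack and bucket names are illustrative, and it assumes the pulumi and pulumi-aws packages are installed and AWS credentials are configured:

```python
"""Minimal sketch of Pulumi's Automation API (Python SDK).

Stack, project and bucket names are illustrative; assumes the
`pulumi` and `pulumi-aws` packages plus working AWS credentials.
"""
import pulumi
from pulumi import automation as auto
import pulumi_aws as aws


def per_customer_program():
    # The infrastructure each customer gets: here, a single S3 bucket.
    bucket = aws.s3.Bucket("customer-data")
    pulumi.export("bucket_name", bucket.id)


# Create (or select) an isolated stack for one customer and deploy it.
stack = auto.create_or_select_stack(
    stack_name="customer-acme",
    project_name="saas-provisioner",
    program=per_customer_program,
)
stack.set_config("aws:region", auto.ConfigValue(value="us-east-1"))
up_result = stack.up(on_output=print)
print("Provisioned bucket:", up_result.outputs["bucket_name"].value)
```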


The company is also launching Pulumi Packages and Components for creating opinionated infrastructure building blocks that developers can then call up from their preferred languages.

Also new is support for Pulumi’s CI/CD Assistant across all the company’s paid plans. This feature makes it easier to deploy cloud infrastructure and applications through more than a dozen popular CI/CD platforms, including the likes of AWS Code Services, Azure DevOps, CircleCI, GitLab CI, Google Cloud Build, Jenkins, Travis CI and Spinnaker. Until now, you needed to be on a Team Pro or Enterprise plan to use this, but it’s now available to all paying users.

In addition, the company is expanding some of its enterprise features with, for example, SAML SSO, SCIM synchronization and new role types.

“When we started out on Pulumi, we knew we wanted to enable developers and infrastructure teams to collaborate more closely to build more innovative software,” said Joe Duffy, Pulumi co-founder and CEO. “What we didn’t know yet is that we’d end up calling this ‘Cloud Engineering,’ that our customers would call it that too, and that they would go on this journey with us. We are now centering our entire platform around this core idea which is now accelerating as the modern cloud continues to disrupt entire business models. Pulumi 3.0 is an exciting milestone in realizing this vision of the future — democratizing access to the cloud and helping teams build better software together — with much more to come.”


Grocery startup Mercato spilled years of data, but didn’t tell its customers

A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.

A person with knowledge of the incident told TechCrunch that it happened in January, after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.

The company fixed the data spill, but has not yet alerted its customers.

Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.

TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names and email addresses, home addresses, and order details. Each record also included the IP address of the device the customer used to place the order.

The data set also included the personal data and order details of company executives.

It’s not clear how the security lapse happened since storage buckets on Amazon’s cloud are private by default, or when the company learned of the exposure.
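
For readers wondering what “left open” means in practice: a bucket only becomes public if someone loosens its ACL, bucket policy or public access block. Here is a hedged sketch, using boto3 with a hypothetical bucket name, of how an operator might audit those settings:

```python
"""Sketch: auditing an S3 bucket's public-access settings with boto3.

The bucket name is hypothetical; assumes AWS credentials with
s3:GetBucketAcl and s3:GetBucketPublicAccessBlock permissions.
"""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-orders-bucket"  # hypothetical name

try:
    block = s3.get_public_access_block(Bucket=bucket)
    print("Public access block:", block["PublicAccessBlockConfiguration"])
except ClientError as err:
    # With no configuration set, the bucket falls back to its ACL and
    # bucket policy, which is where accidental exposure usually happens.
    print("No public access block set:", err.response["Error"]["Code"])

acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    # A grant to the AllUsers group means the bucket is world-readable.
    if grant["Grantee"].get("URI", "").endswith("AllUsers"):
        print("WARNING: bucket grants", grant["Permission"], "to everyone")
```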

Companies are required to disclose data breaches and security lapses to state attorneys general, but no notices have been published in states where the law requires them, such as California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.

It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.

In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.

“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.




Former Amazon exec gives Chinese firms a tool to fight cyber threats

China is pushing forward an internet society where economic and public activities increasingly take place online. In the process, troves of citizen and government data get transferred to cloud servers, raising concerns over information security. One startup called ThreatBook sees an opportunity in this revolution and pledges to protect corporations and bureaucracies against malicious cyberattacks.

Antivirus and security software has been around in China for several decades, but until recently, enterprises were procuring it simply to meet compliance requirements, Xue Feng, founder and CEO of six-year-old ThreatBook, told TechCrunch in an interview.

Starting around 2014, internet accessibility began to expand rapidly in China, ushering in an explosion of data. Information previously stored in physical servers was moving to the cloud. Companies realized that a cyber attack could result in a substantial financial loss and started to pay serious attention to security solutions.

In the meantime, cyberspace is emerging as a battlefield where competition between states plays out. Malicious actors may target a country’s critical digital infrastructure or steal key research from a university database.

“The amount of cyberattacks between countries is reflective of their geopolitical relationships,” observed Xue, who oversaw information security at Amazon China before founding ThreatBook. Previously, he was the director of internet security at Microsoft in China.

“If two countries are allies, they are less likely to attack one another. China has a very special position in geopolitics. Besides its tensions with the other superpowers, cyberattacks from smaller, nearby countries are also common.”

Like other emerging SaaS companies, ThreatBook sells software and charges a subscription fee for annual services. More than 80% of its current customers are big corporations in finance, energy, the internet industry, and manufacturing. Government contracts make up a smaller slice. With its Series E funding round that closed 500 million yuan ($76 million) in March, ThreatBook boosted its total capital raised to over 1 billion yuan from investors including Hillhouse Capital.

Xue declined to disclose the company’s revenues or valuation but said 95% of the firm’s customers have chosen to renew their annual subscriptions. He added that the company has met the “preliminary requirements” of the Shanghai Stock Exchange’s STAR board, China’s equivalent to NASDAQ, and will go public when the conditions are ripe.

“It takes our peers 7-10 years to go public,” said Xue.

ThreatBook compares itself to CrowdStrike from Silicon Valley, which filed to go public in 2019 and detects threats by monitoring a company’s “endpoints”, such as employee laptops and mobile devices that connect to the internal network from outside the corporate firewall.

ThreatBook similarly has a suite of software that goes onto the devices of a company’s employees, automatically detects threats and comes up with a list of solutions.

“It’s like installing a lot of security cameras inside a company,” said Xue. “But the thing that matters is what we tell customers after we capture issues.”

SaaS providers in China are still in the phase of educating the market and lobbying enterprises to pay. Of the 3,000 companies that ThreatBook serves, only 300 are paying, so there is plenty of room for monetization. Willingness to spend also differs across sectors, with financial institutions happy to shell out several million yuan a year ($1 = 6.54 yuan) while a tech startup may only want to pay a fraction of that.

Xue’s vision is to take ThreatBook global. The company had plans to expand overseas last year but was held back by the COVID-19 pandemic.

“We’ve had a handful of inquiries from companies in Southeast Asia and the Middle East. There may even be room for us in markets with mature [cybersecurity companies] like Europe and North America,” said Xue. “As long as we are able to offer differentiation, a customer may still consider us even if it has an existing security solution.”


Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that a number of successful startups do nothing but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”


ConductorOne raises $5M in seed round led by Accel to automate your access requests

Over the course of their careers, Alex Bovee and Paul Querna realized that while the use of SaaS apps and cloud infrastructure was exploding, the process to give employees permission to use them was not keeping up.

The pair led Zero Trust strategies and products at Okta and could see the problem firsthand. For the unacquainted, Zero Trust is a security concept based on the premise that organizations should not automatically trust anything inside or outside their perimeters and, instead, must verify anything and everything trying to connect to their systems before granting access.

Bovee and Querna realized that while more organizations were adopting Zero Trust strategies, they were not enacting privilege controls. The result was delayed employee access to apps, or employees over-permissioned from day one.

Last summer, Bovee left Okta to be the first virtual entrepreneur-in-residence at VC firm Accel. There, he and Accel partner Ping Li got to talking and realized they both had an interest in addressing the challenge of granting permissions to users of cloud apps quicker and more securely.

Recalls Li: “It was actually kind of fortuitous. We were looking at this problem and I was like, ‘Who can we talk to about the space?’ And we realized we had an expert in Alex.”

At that point, Bovee told Li he was actually thinking of starting a company to solve the problem. And so he did. Months later, Querna left Okta to join him in getting the startup off the ground. And today, ConductorOne announced that it raised $5 million in seed funding in a round led by Accel, with participation from Fuel Capital, Fathom Capital and Active Capital. 

ConductorOne plans to use its new capital to build what the company describes as “the first-ever identity orchestration and automation platform.” Its goal is to give IT and identity admins the ability to automate and delegate employee access to cloud apps and infrastructure, while preserving least privilege permissions. 

“The crux of the problem is that you’ve got these identities — you’ve got employees and contractors on one side and then on the other side you’ve got all this SaaS infrastructure and they all have sort of infinite permutations of roles and permissions and what people can do within the context of those infrastructure environments,” Bovee said.

Companies of all sizes often have hundreds of apps and infrastructure providers they’re managing. It’s not unusual for an IT helpdesk queue to be more than 20% access requests, with people needing urgent access to resources like Salesforce, AWS, or GitHub, according to Bovee. Yet each request is manually reviewed to make sure people get the right level of permissions. 

“But that access is never revoked, even if it’s unused,” Bovee said. “Without a central layer to orchestrate and automate authorization, it’s impossible to handle all the permissions, entitlements, and on- and off-boarding, not to mention auditing and analytics.”

ConductorOne aims to build “the world’s best access request experience,” with automation at its core.

“Automation that solves privilege management and governance is the next major pillar of cloud identity,” Accel’s Li said.

Bovee and Querna have deep expertise in the space. Prior to Okta, Bovee led enterprise mobile security product development at Lookout. Querna was the co-founder and CTO of ScaleFT, which was acquired by Okta in 2018. He also led technology and strategy teams at Rackspace and Cloudkick, and is a vocal and active open source software advocate.   

While the company’s headquarters are in Portland, Oregon, ConductorOne is a remote-first company with 10 employees.

“We’re deep in building the product right now, and just doing a lot of customer development to understand the problems deeply,” Bovee said. “Then we’ll focus on getting early customers.”


Microsoft outage knocks sites and services offline

Microsoft is experiencing a major outage, so that’s why you can’t get any work done.

Besides its homepage, Microsoft services are down, log-in pages aren’t loading, and even the company’s status pages are kaput. Worse, Microsoft’s cloud service Azure appears to be offline as well, causing outages for any sites and services that rely on it.

It’s looking like a networking issue, according to the status page — when it loaded. Microsoft also tweeted that it was related to DNS, the internet system that translates web addresses to computer-readable internet numbers. It’s an important function of how the internet works, so not ideal when it suddenly breaks.
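
As a quick illustration of the step that broke, here is what that name-to-address translation looks like from Python’s standard library (the hostname is just an example):

```python
"""Illustration of the DNS step that failed: turning a name into addresses.

Uses only the standard library; the hostname is an example.
"""
import socket

hostname = "azure.microsoft.com"  # example hostname
try:
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    for info in infos:
        print(info[4][0])  # the resolved IP address
except socket.gaierror as err:
    # Roughly what clients saw during the outage: the name exists, but
    # resolution fails, so nothing downstream can connect.
    print("DNS resolution failed:", err)
```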

We’ve reached out for comment, and we’ll follow up when we know more.


Why Adam Selipsky was the logical choice to run AWS

When AWS CEO Andy Jassy announced in an email to employees yesterday that Tableau CEO Adam Selipsky was returning to run AWS, it was probably not the choice most considered. But to the industry watchers we spoke to over the last couple of days, it was a move that made absolute sense once you thought about it.

Gartner analyst Ed Anderson says that the cultural fit was probably too good for Jassy to pass up. Selipsky spent 11 years helping build the division; he was someone Jassy knew well and had worked side by side with for over a decade. He could slide into the new role and be trusted to continue building the lucrative division.

Anderson says that even though the size and scope of AWS have changed dramatically since Selipsky left in 2016, when the division closed the year on a $16 billion run rate, the organization’s cultural dynamics haven’t changed all that much.

“Success in this role requires a deep understanding of the Amazon/AWS culture in addition to a vision for AWS’s future growth. Adam already knows the AWS culture from his previous time at AWS. Yes, AWS was a smaller business when he left, but the fundamental structure and strategy was in place and the culture hasn’t notably evolved since then,” Anderson told me.

Matt McIlwain, managing director at Madrona Venture Group, says the experience Selipsky gained after he left AWS will prove invaluable when he returns.

“Adam transformed Tableau from a desktop, licensed software company to a cloud, subscription software company that thrived. As the leader of AWS, Adam is returning to a culture he helped grow as the sales and marketing leader that brought AWS to prominence and broke through from startup customers to become the leading enterprise solution for public cloud,” he said.

Holger Mueller, an analyst with Constellation Research, says that Selipsky’s business experience gave him the edge over other candidates. “His business acumen won out over [internal candidates] Matt Garman and Peter DeSantis. Insight on how Salesforce works may be helpful and valued as well,” Mueller pointed out.

As for leaving Tableau and with it Salesforce, the company that purchased it for $15.7 billion in 2019, Brent Leary, founder and principal analyst at CRM Essentials believes that it was only a matter of time before some of these acquired company CEOs left to do other things. In fact, he’s surprised it didn’t happen sooner.

“Given Salesforce’s growing stable of top notch CEOs accumulated by way of a slew of high profile acquisitions, you really can’t expect them all to stay forever, and given Adam Selipsky’s tenure at AWS before becoming Tableau’s CEO, this move makes a whole lot of sense. Amazon brings back one of their own, and he is also a wildly successful CEO in his own right,” Leary said.

While the consensus is that Selipsky is a good choice, he is going to have awfully big shoes to fill. The fact is that the division continues to grow quickly and is currently on a run rate of over $50 billion. With a track record like that to follow, and Jassy still close at hand, Selipsky simply has to keep letting the unit do its thing while putting his own unique stamp on it.

Any kind of change is disconcerting though, and it will be up to him to put customers and employees at ease and plow ahead into the future. Same mission. New boss.


The ‘Frankencloud’ model is our biggest security risk

Recent testimony before Congress on the massive SolarWinds attacks served as a wake-up call for many. What I saw emerge from the testimony was a debate on whether the public cloud is a more secure option than a hybrid cloud approach.

The debate shouldn’t surround which cloud approach is more secure, but rather which one we need to design security for. We — enterprise technology providers — should be designing security around the way our modern systems work, rather than pigeonholing our customers into securing one computing model over the other.

The SolarWinds attack was successful because it took advantage of a vast, intermixed supply chain of technology vendors. While there are fundamental lessons to be learned on how to protect the code supply chain, I think the bigger lesson is that complexity is the enemy of security.

The “Frankencloud” model

We’ve seen our information technology environments evolve into what I call a “Frankenstein” approach. Firms scrambled to take advantage of the cloud while maintaining their systems of record. Much like Frankenstein’s monster, this left systems stitched together from disconnected parts and riddled with complexity.

Security teams cite this complexity as one of their largest challenges. Forced to rely on dozens of vendors and disconnected security products, the average security team is using 25 to 49 tools from up to 10 different vendors. This disconnect is creating blind spots we can no longer afford to avoid. Security systems shouldn’t be piecemealed together; an organization’s security needs to be designed with one single point of control that provides a holistic view of threats and mitigates complexity.

Hybrid cloud innovations

We’re seeing hybrid cloud environments emerging as the dominant technology design point for governments, as well as public and private enterprises. In fact, a recent study from Forrester Research found that 85% of technology decision-makers agree that on-premise infrastructure is critical to their hybrid cloud strategies.

A hybrid cloud model combines part of a company’s existing on-premise systems with a mix of public cloud resources and as-a-service resources and treats them as one.

How does this benefit your security? In a disconnected environment, the most common path for cybercriminals to compromise cloud environments is via cloud-based applications, representing 45% of cloud-related incidents analyzed by our IBM X-Force team.

Take, for instance, your cloud-based systems that authenticate that someone is authorized to access systems. A login from an employee’s device is detected in the middle of the night. At the same time, there may be an attempt from that same device, seemingly in a different time zone, to access sensitive data from your on-premise data centers. A unified security system knows the risky behavior patterns to watch for and automatically hinders both actions. If those events are detected by two separate, disconnected systems, that blocking action never takes place and data is lost.
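
A rough sketch of that correlation logic, with hypothetical event fields and a made-up travel-time policy, might look like this; the point is that the rule only works when both events land in the same system:

```python
"""Sketch of the cross-environment correlation rule described above.

Event fields and the risk threshold are hypothetical; a real system
would ingest events from both cloud and on-premise log streams.
"""
from datetime import datetime, timedelta

MAX_PLAUSIBLE_TRAVEL = timedelta(hours=4)  # assumed policy threshold

def is_impossible_travel(event_a: dict, event_b: dict) -> bool:
    """Flag two logins from the same device in different regions that
    occur closer together than any plausible travel time."""
    same_device = event_a["device_id"] == event_b["device_id"]
    different_region = event_a["region"] != event_b["region"]
    gap = abs(event_a["time"] - event_b["time"])
    return same_device and different_region and gap < MAX_PLAUSIBLE_TRAVEL

cloud_login = {
    "device_id": "laptop-42", "region": "us-east",
    "time": datetime(2021, 3, 1, 2, 5),
}
onprem_access = {
    "device_id": "laptop-42", "region": "ap-south",
    "time": datetime(2021, 3, 1, 2, 20),
}

if is_impossible_travel(cloud_login, onprem_access):
    print("Block both sessions and alert")  # the single point of control
```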

Many of these issues arise from the mishandling of data in cloud storage. The fastest-growing innovation to address this gap is called Confidential Computing. Right now, most cloud providers promise that they won’t access your data. (They could, of course, be compelled to break that promise by a court order or other means.) It also means malicious actors could use that same access for their own nefarious purposes. Confidential Computing ensures that the cloud technology provider is technically incapable of accessing data, making it equally difficult for cybercriminals to gain access to it.

Creating a more secure future

Cloud computing has brought critical innovations to the world, from the distribution of workloads to moving with speed. At the same time, it also brought to light the essentials of delivering IT with integrity.

Cloud’s need for speed has pushed aside the compliance and controls that technology companies historically ensured for their clients. Now, those requirements are often put back on the customer to manage. I’d urge you to think of security first and foremost in your cloud strategy and choose a partner you can trust to securely advance your organization forward.

We need to stop bolting security and privacy onto the “Frankencloud” environment that operates so many businesses and governments. SolarWinds taught us that our dependence on a diverse set of technologies can be a point of weakness.

Fortunately, it can also become our greatest strength, as long as we embrace a future where security and privacy are designed in the very fabric of that diversity.


Google Cloud launches a new support option for mission critical workloads

Google Cloud today announced the launch of a new support option for its Premium Support customers that run mission-critical services on its platform. The new service, imaginatively dubbed Mission Critical Services (MCS), brings Google’s own experience with Site Reliability Engineering to its customers. This is not Google completely taking over the management of these services, though. Instead, the company describes it as a “consultative offering in which we partner with you on a journey toward readiness.”

Initially, Google will work with its customers to improve — or develop — the architecture of their apps and help them instrument the right monitoring systems and controls, as well as help them set and raise their service-level objectives (a key feature in the Site Reliability Engineering philosophy).

Later, Google will also provide ongoing check-ins with its engineers and walk customers through tune-ups and architecture reviews. “Our highest tier of engineers will have deep familiarity with your workloads, allowing us to monitor, prevent, and mitigate impacts quickly, delivering the fastest response in the industry. For example, if you have any issues, 24 hours a day, seven days a week, we’ll spin up a live war room with our experts within five minutes,” Google Cloud’s VP for Customer Experience, John Jester, explains in today’s announcement.

This new offering is another example of how Google Cloud is trying to differentiate itself from the rest of the large cloud providers. Its emphasis today is on providing the high-touch service experiences that were long missing from its platform, with a clear emphasis on the needs of large enterprise customers. That’s what Thomas Kurian promised to do when he became the organization’s CEO and he’s clearly following through.

 


Tetrate, the company born out of Istio’s open source app networking project, raises $40 million

Tetrate, the company commercializing an open source networking project that allows for easier data sharing across different applications, has raised $40 million.

The round, led by Sapphire Ventures, underscores the importance of the Istio project and just how critical services that facilitate cross-platform data sharing have become.

Sapphire was joined by other new investors, including Scale Venture Partners and NTTVC, along with existing investors Dell Technologies Capital, Intel Capital, 8VC and Samsung NEXT.

The company said it would use the cash to further develop its hybrid cloud application networking platform and support a new product, based on Istio, that makes the application service mesh easier to use. Geographic expansion to Latin America, Europe and Asia is also on the menu now that it has 40 million simoleons to play around with (personally I’d have converted all that money into bills and gone swimming in it like Scrooge McDuck).

“As the microservices revolution picks up steam, it’s indispensable to use Istio for managing applications built with microservices and deployed on containers. Both the product and background of the founding team lead us to believe that Tetrate is poised to bring Istio into the mainstream for enterprises by making it easy to manage and deploy on multi-cloud and hybrid cloud environments,” said Jai Das, president, partner and co-founder of the multi-billion dollar firm Sapphire Ventures, who is joining the Tetrate board. “The applications we use daily require a lot of work in the background, and Tetrate helps make that happen with its Istio-based service mesh technology, which helps route traffic between microservices, add visibility and enhance security.”

Founded in 2018, Tetrate formally launched in 2019 with a $12.5 million round that boosted the company’s profile and helped the company commercialize and professionalize services around the Istio and Envoy Proxy open source projects.

Tons of really big customers, including the U.S. Department of Defense, currently use Tetrate’s services. In the military, Tetrate powers the DevSecOps platform called Platform One.

“We partnered with Tetrate to help secure and smoothly operate Platform One with Istio. Platform One works with the most critical systems across the DoD. The Tetrate team has provided world class expertise, trained our team members, reviewed our platform architecture and configurations, and helped with debugging and upgrades,” said Nicolas Chaillan, the chief software officer for the US Air Force, in a statement. “We’re getting excellent production support for running our platform smoothly and we rely on them and their platform for a critical layer of our stack.”


Aqua Security raises $135M at a $1B valuation for its cloud native security service

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.


What set Aqua apart, Davidoff argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by CheckPoint and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for the CNCF’s Harbor registry and Artifact Hub, for example.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 


Airbyte raises $5.2M for its open-source data integration platform

Airbyte, an open-source data integration platform, today announced that it has raised a $5.2 million seed funding round led by Accel. Other investors include Y Combinator, 8VC, Segment co-founder Calvin French-Owen, former Cloudera GM Charles Zedlewski, LiveRamp and Safegraph CEO Auren Hoffman, Datavant CEO Travis May and Alain Rossmann, the president of Machinify.

The company was co-founded by Michel Tricot, the former director of engineering and head of integrations at LiveRamp and RideOS, and John Lafleur, a serial entrepreneur focused on developer tools and B2B services. The last startup he co-founded was Anaxi.


In its early days, the team was actually working on a slightly different project that focused on data connectivity for marketing companies. The founders were accepted into Y Combinator and built out their application, but once the COVID pandemic hit, a lot of the companies that had placed early bets on Airbyte’s original project faced budget freezes and layoffs.

“At that point, we decided to go into deeper data integration and that’s how we started the Airbyte project and product as we know it today,” Tricot explained.

Today’s Airbyte is geared toward data engineering, without the specific industry focus of its early incarnation, and it offers both a graphical UI for building connectors and APIs for developers to hook into.

As Tricot noted, a lot of companies start out by building their own data connectors — and that tends to work alright at first. But the real complexity is in maintaining them. “You have zero control over how they behave,” he noted. “So either they’re going to fail, or they’re going to change something. The cost of data integration is in the maintenance.”

Even for a company that specializes in building these connectors, the complexity will quickly outpace its ability to keep up, so the team decided to build Airbyte as an open-source company. The team also argues that while there are companies like Fivetran that focus on data integration, a lot of customers end up with use cases that aren’t supported by Airbyte’s closed-source competitors and have to build those themselves from the ground up.

“Our mission with Airbyte is really to become the standard to replicate data,” Lafleur said. “To do that, we will open-source every feature that addresses the need of the individual contributor, so all the connectors.” He also noted that Airbyte will exclusively focus on its open-source tools until it raises a Series A round — likely early next year.

To monetize its service, Airbyte plans to use an open core model, where all of the features that address the needs of a company (think enterprise features like data quality, privacy, user management, etc.) will be licensed. The team is also looking at white-labeling its containerized connectors to others.

Currently, about 600 companies use Airbyte’s connectors — up from 250 just a month ago. Its users include the likes of Safegraph, Dribbble, Mercato, GraniteRock, Agridigital and Cart.com.

The company plans to use the new funding to double its team from about 12 people to 25 by the end of the year. Right now, the company’s focus is on establishing its user base, and then it plans to start monetizing that — and raise more funding — next year.

 


Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detection, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft also today launches a hardware development kit with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including Trust Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, the corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure’s Cognitive Services and machine learning models, and Percept devices will automatically connect to Azure’s IoT Hub.

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.


Microsoft’s Azure Arc multi-cloud platform now supports machine learning workloads

With Azure Arc, Microsoft offers a service that allows its customers to run Azure in any Kubernetes environment, no matter where that container cluster is hosted. From Day One, Arc supported a wide range of use cases, but one feature that was sorely missing when it first launched was support for machine learning (ML). But one of the advantages of a tool like Arc is that it allows enterprises to run their workloads close to their data and today, that often means using that data to train ML models.

At its Ignite conference, Microsoft today announced that it is bringing exactly this capability to Azure Arc with the addition of Azure Machine Learning to the set of Arc-enabled data services.

“By extending machine learning capabilities to hybrid and multicloud environments, customers can run training models where the data lives while leveraging existing infrastructure investments. This reduces data movement and network latency, while meeting security and compliance requirements,” Azure GM Arpan Shah writes in today’s announcement.

This new capability is now available to Arc customers.

In addition to bringing this new machine learning capability to Arc, Microsoft also today announced that Azure Arc-enabled Kubernetes, which allows users to deploy standard Kubernetes configurations to their clusters anywhere, is now generally available.

Also new in this world of hybrid Azure services is support for Azure Kubernetes Service on Azure Stack HCI. That’s a mouthful, but Azure Stack HCI is Microsoft’s platform for running Azure on a set of standardized, hyperconverged hardware inside a customer’s datacenter. The idea pre-dates Azure Arc, but it remains a plausible alternative for enterprises that want to run Azure in their own data centers, and it has continued support from vendors like Dell, Lenovo, HPE, Fujitsu and DataOn.

On the open-source side of Arc, Microsoft also today stressed that Arc is built to work with any Kubernetes distribution that conforms to the Cloud Native Computing Foundation (CNCF) standard and that it has worked with Red Hat, Canonical, Rancher and now Nutanix to test and validate their Kubernetes implementations on Azure Arc.


Google Cloud puts its Kubernetes Engine on autopilot

Google Cloud today announced a new operating mode for its Kubernetes Engine (GKE) that turns over the management of much of the day-to-day operations of a container cluster to Google’s own engineers and automated tools. With Autopilot, as the new mode is called, Google manages all of the Day 2 operations of managing these clusters and their nodes, all while implementing best practices for operating and securing them.

This new mode augments the existing GKE experience, which already managed most of the infrastructure of standing up a cluster. This ‘standard’ experience, as Google Cloud now calls it, is still available and allows users to customize their configurations to their heart’s content and manually provision and manage their node infrastructure.

Drew Bradstock, the Group Product Manager for GKE, told me that the idea behind Autopilot was to combine all of the tools that Google already had for GKE with the expertise of its SRE teams, who know how to run these clusters in production — and have long done so inside the company.

“Autopilot stitches together auto-scaling, auto-upgrades, maintenance, Day 2 operations and — just as importantly — does it in a hardened fashion,” Bradstock noted. “[…] What this has allowed our initial customers to do is very quickly offer a better environment for developers or dev and test, as well as production, because they can go from Day Zero and the end of that five-minute cluster creation time, and actually have Day 2 done as well.”
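
For the curious, here is roughly what creating an Autopilot cluster looks like from the google-cloud-container Python client; the project and region are placeholders, and it assumes a client version that exposes the v1 API’s Autopilot field:

```python
"""Rough sketch: creating an Autopilot-mode GKE cluster from Python.

Assumes the google-cloud-container package and application-default
credentials; the project and region below are placeholders.
"""
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="autopilot-demo",
    # Autopilot mode: Google manages nodes, upgrades and Day 2 operations.
    autopilot=container_v1.Autopilot(enabled=True),
)

operation = client.create_cluster(
    request={
        "parent": "projects/my-project/locations/us-central1",  # placeholders
        "cluster": cluster,
    }
)
print("Create operation:", operation.name)
```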


From a developer’s perspective, nothing really changes here, but this new mode does free up teams to focus on the actual workloads and less on managing Kubernetes clusters. With Autopilot, businesses still get the benefits of Kubernetes, but without all of the routine management and maintenance work that comes with that. And that’s definitely a trend we’ve been seeing as the Kubernetes ecosystem has evolved. Few companies, after all, see their ability to effectively manage Kubernetes as their real competitive differentiator.

All of that comes at a price, of course: a flat fee of $0.10 per cluster per hour (there’s also a free GKE tier that provides $74.40 in monthly billing credits), plus the usual fees for the resources that your clusters consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods in multiple zones.
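
That free-tier figure lines up neatly with the management fee: at $0.10 per cluster-hour, $74.40 is exactly what one cluster costs over a 31-day month:

```python
# Back-of-envelope: GKE's $0.10/hour cluster fee vs. the $74.40 credit.
fee_per_cluster_hour = 0.10
hours_in_31_day_month = 24 * 31                       # 744 hours
print(fee_per_cluster_hour * hours_in_31_day_month)   # 74.4
# So the free tier effectively covers one cluster per 31-day month.
```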

Autopilot for GKE joins a set of container-centric products in the Google Cloud portfolio that also include Anthos for running in multi-cloud environments and Cloud Run, Google’s serverless offering. “[Autopilot] is really [about] bringing the automation aspects in GKE we have for running on Google Cloud, and bringing it all together in an easy-to-use package, so that if you’re newer to Kubernetes, or you’ve got a very large fleet, it drastically reduces the amount of time, operations and even compute you need to use,” Bradstock explained.

And while GKE is a key part of Anthos, that service is more about bringing Google’s config management, service mesh and other tools to an enterprise’s own data center. Autopilot for GKE is, at least for now, only available on Google Cloud.

“On the serverless side, Cloud Run is really, really great for an opinionated development experience,” Bradstock added. “So you can get going really fast if you want an app to be able to go from zero to 1000 and back to zero — and not worry about anything at all and have it managed entirely by Google. That’s highly valuable and ideal for a lot of development. Autopilot is more about simplifying the entire platform people work on when they want to leverage the Kubernetes ecosystem, be a lot more in control and have a whole bunch of apps running within one environment.”

 


Project management service ZenHub raises $4.7M

ZenHub, the GitHub-centric project management service for development teams, today announced that it has raised a $4.7 million seed funding round from Canada’s BDC Capital and Ripple Ventures. This marks the first fundraise for the Vancouver, Canada-based startup after the team bootstrapped the service, which first launched back in 2014. Additional angel investors in this round include Adam Gross (former CEO of Heroku), Jiaona Zhang (VP Product at Webflow) and Oji Udezue (VP Product at Calendly).

In addition to announcing this funding round, the team also today launched its newest automation feature, which makes it easier for teams to plan their development sprints — something that is core to the Agile development process but often takes a lot of time and energy that teams are better off spending on the actual development work.

“This is a really exciting kind of pivot point for us as a business and gives us a lot of ammunition, I think, to really go after our vision and mission a little bit more aggressively than we have even in the past,” ZenHub co-founder and CEO Aaron Upright told me. The team, he explained, used the beginning of the pandemic to spend a lot of time with customers to better understand how they were reacting to what was happening. In the process, customers repeatedly noted that development resources were getting increasingly expensive and that teams were being stretched even farther and under a lot of pressure.

ZenHub’s answer to this was to look into how it could automate more of the processes that constitute the most complex parts of Agile. Earlier this year, the company launched its first efforts in this area, with new tools for improving developer handoffs in GitHub and now, with the help of this new funding, it is putting the next pieces in place by helping teams automate their sprint planning.


“We thought about automation as an answer to [the problems development teams were facing] and that we could take an approach to automation and to help guide teams through some of the most complex and time-consuming parts of the Agile process,” Upright said. “We raised money so that we can really accelerate toward that vision. As a self-funded company, we could have gone down that path, albeit a little bit slower. But the opportunity that we saw in the market — really brought about by the pandemic, and teams working more remotely and this pressure to produce — we wanted to provide a solution much, much faster.”

The sprint planning feature itself is actually pretty straightforward and allows project managers to allocate a certain number of story points (a core Agile metric to estimate the complexity of a given action item) to each sprint. ZenHub’s tool can then use that to automatically generate a list of the most highly prioritized items for the next sprint. Optionally, teams can also decide to roll over items that they didn’t finish during a given sprint into the next one.
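
In other words, it is a capacity-based fill: take the highest-priority backlog items, in order, until the sprint’s point budget runs out. Here is a toy sketch of that logic, illustrative only and not ZenHub’s actual implementation:

```python
"""Toy sketch of the sprint-filling logic described above: take the
highest-priority backlog items until the story point budget is spent.
Illustrative only, not ZenHub's actual implementation."""

def plan_sprint(backlog, capacity_points):
    """backlog: list of (priority_rank, points, title); lower rank = higher priority."""
    sprint, remaining = [], capacity_points
    for rank, points, title in sorted(backlog):
        if points <= remaining:
            sprint.append(title)
            remaining -= points
        # Items that don't fit simply roll over to the next sprint.
    return sprint

backlog = [
    (1, 5, "OAuth login"),
    (2, 8, "Billing export"),
    (3, 3, "Fix flaky CI job"),
    (4, 8, "Dark mode"),
]
print(plan_sprint(backlog, capacity_points=16))
# ['OAuth login', 'Billing export', 'Fix flaky CI job'] -> 16 points used,
# 'Dark mode' rolls over to the next sprint.
```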


With that, ZenHub Sprints can automate a lot of the standard sprint meetings and let teams focus on thinking about the overall process. Of course, teams can always overrule the automated systems.

“There’s nothing more that developers hate than sitting around the table for eight hours, planning sprints, when really they all just want to be working on stuff,” Upright said.

With this new feature, sprints become a core feature of the ZenHub experience. Typically, project managers worked around this by assigning milestones in GitHub, but having a dedicated tool and these new automation features will make this quite a bit easier.

Coming soon, ZenHub will also build a new feature that will automate some parts of the software estimation process, too, by launching a new tool that will help teams more easily allocate story points to routine action items so that their discussions can focus on the more contentious ones.


Jamaica’s Amber Group fixes second JamCOVID security lapse

Amber Group has fixed a second security lapse that exposed private keys and passwords for the government’s JamCOVID app and website.

A security researcher told TechCrunch on Sunday that the Amber Group left a file on the JamCOVID website by mistake, which contained passwords that would have granted access to the backend systems, storage, and databases running the JamCOVID site and app. The researcher asked not to be named for fears of legal repercussions from the Jamaican government.

This file, known as an environment variables (.env) file, is often used to store the private keys and passwords for third-party services that a cloud application needs to run. These files are sometimes exposed or uploaded by mistake and, if found by a malicious actor, can be abused to gain access to the data or services that the cloud application relies on.
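To make the failure mode concrete, here is a minimal, hypothetical sketch of how such a file is typically consumed; every name and value below is invented. The point is that whoever can download the .env file gets exactly the same credentials the application uses.

```python
# A typical .env file looks something like this (invented values):
#
#   AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
#   AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx
#   DB_PASSWORD=xxxxxxxx
#   SMS_GATEWAY_PASSWORD=xxxxxxxx
#
# The application reads it into the process environment at startup,
# for example with the python-dotenv package:
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads .env from the working directory into os.environ
db_password = os.environ.get("DB_PASSWORD")

# Because the file is plain text, it must never live inside a
# web-served directory and should be listed in .gitignore.
```

The mitigation is equally simple: keep .env files out of the web root, block them in the web server configuration, and rotate any credentials the moment such a file is exposed.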

The exposed environment variables file was found in an open directory on the JamCOVID website. Although the JamCOVID domain appears to sit under the Ministry of Health’s website, Amber Group controls and maintains the JamCOVID dashboard, app, and website.

The exposed file contained secret credentials for the Amazon Web Services databases and storage servers for JamCOVID. The file also contained a username and password to the SMS gateway used by JamCOVID to send text messages, and credentials for its email-sending server. (TechCrunch did not test or use any of the passwords or keys as doing so would be unlawful.)

A portion of the exposed credentials found on the JamCOVID website, controlled and maintained by Amber Group. (Image: TechCrunch)

TechCrunch contacted Amber Group’s chief executive Dushyant Savadia to alert the company to the security lapse, and he pulled the exposed file offline a short time later. We also asked Savadia to revoke and replace the exposed keys; he did not comment.

Matthew Samuda, a minister in Jamaica’s Ministry of National Security, did not respond to a request for comment or to our questions, including whether the Jamaican government plans to continue its contract or relationship with Amber Group, and what security requirements, if any, were agreed upon by Amber Group and the Jamaican government for the JamCOVID app and website.

Details of the exposure come just days after Escala 24×7, a cybersecurity firm based in the Caribbean, claimed that it had found no vulnerabilities in the JamCOVID service following the initial security lapse.

Escala’s chief executive Alejandro Planas declined to say if his company was aware of the second security lapse prior to its comments last week, saying only that his company was under a non-disclosure agreement and “is not able to provide any additional information.”

This latest security incident comes less than a week after Amber Group secured an exposed, passwordless cloud server hosting immigration records and negative COVID-19 test results for hundreds of thousands of travelers who visited the island over the past year. Travelers are required to upload their COVID-19 test results in order to obtain a travel authorization before their flights. Many of the victims whose information was exposed on the server are Americans.

One news report recently quoted Amber’s Savadia as saying that the company developed JamCOVID19 “within three days.”

Neither the Amber Group nor the Jamaican government has commented to TechCrunch, but Samuda told local radio that the government has launched a criminal investigation into the security lapse.



Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

A lot of that development involves reinventing the wheel just to make applications talk reliably to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools they need to build event-driven microservices. Among other things, Dapr provides building blocks for service-to-service communication, state management, pub/sub and secrets management.
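For a concrete taste of the model, the sketch below stores and retrieves a piece of state through the Dapr sidecar’s HTTP API. It assumes a locally running sidecar on Dapr’s default port (3500) and a state store component named “statestore,” which is what a default `dapr init` setup provides; because the API is plain HTTP, the same calls work from any language.

```python
import requests

DAPR_BASE = "http://localhost:3500/v1.0"
STORE = "statestore"  # name of the configured state store component

# Save state: the sidecar forwards this to whatever backing store
# (Redis by default) the component configuration points at.
resp = requests.post(
    f"{DAPR_BASE}/state/{STORE}",
    json=[{"key": "order-42", "value": {"status": "shipped"}}],
)
resp.raise_for_status()

# Read it back. Application code never talks to the store directly,
# so swapping Redis for another store is a configuration change.
print(requests.get(f"{DAPR_BASE}/state/{STORE}/order-42").json())
```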

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.



Databricks brings its lakehouse to Google Cloud

Databricks and Google Cloud today announced a new partnership that will give Databricks customers a deep integration with Google’s BigQuery platform and Google Kubernetes Engine. This will allow Databricks’ users to bring their data lakes and the service’s analytics capabilities to Google Cloud.
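The announcement doesn’t spell out what the integration looks like in code, but the existing spark-bigquery connector gives a sense of its shape: a Databricks (Spark) cluster can read a BigQuery table directly into a DataFrame. A minimal sketch, assuming the connector is available on the cluster and using a made-up table name:

```python
# In a Databricks notebook, `spark` is the ambient SparkSession.
# Read a BigQuery table via the spark-bigquery connector.
df = (
    spark.read.format("bigquery")
    .option("table", "my_project.my_dataset.page_views")
    .load()
)
df.groupBy("country").count().show()
```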

Databricks already features a deep integration with Microsoft Azure — one that goes well beyond this new partnership with Google Cloud — and the company is also an AWS partner. By adding Google Cloud to this list, the company can now claim to be the “only unified data platform available across all three clouds (Google, AWS and Azure).”

It’s worth stressing, though, that Databricks’ Azure integration is a bit of a different deal from this new partnership with Google Cloud. “Azure Databricks is a first-party Microsoft Azure service that is sold and supported directly by Microsoft. The first-party service is unique to our Microsoft partnership. Customers on Google Cloud will purchase directly from Databricks through the Google Cloud Marketplace,” a company spokesperson told me. That makes it a bit more of a run-of-the-mill partnership compared to the Microsoft deal, but that doesn’t mean the two companies aren’t just as excited about it.

“We’re delighted to deliver Databricks’ lakehouse for AI and ML-driven analytics on Google Cloud,” said Google Cloud CEO Thomas Kurian (or, more likely, one of the company’s many PR specialists who likely wrote and re-wrote this for him a few times before it got approved). “By combining Databricks’ capabilities in data engineering and analytics with Google Cloud’s global, secure network—and our expertise in analytics and delivering containerized applications—we can help companies transform their businesses through the power of data.”

Similarly, Databricks CEO Ali Ghodsi noted that he is “thrilled to partner with Google Cloud and deliver on our shared vision of a simplified, open, and unified data platform that supports all analytics and AI use-cases that will empower our customers to innovate even faster.”

And indeed, this is clearly a thrilling delight for everybody around, including customers like Conde Nast, whose Director of Data Engineering Nana Essuman is “excited to see leaders like Google Cloud and Databricks come together to streamline and simplify getting value from data.”

If you’re also thrilled about this, you’ll be able to hear more about it from both Ghodsi and Kurian at an event on April 6 that is apparently hosted by TechCrunch (though this is the first I’ve heard of it, too).


Twitter expands Google Cloud partnership to ‘learn more from data, move faster’

Twitter is upping its data analytics game in the form of an expanded, multiyear partnership with Google Cloud.

The social media giant first began working with Google in 2018 to move Hadoop clust