Google Cloud hires Intel veteran to head its custom chip efforts

There has been a growing industry trend in recent years for large-scale companies to build their own chips. As part of that trend, Google announced today that it has hired long-time Intel executive Uri Frank as vice president to run its custom chip division.

“The future of cloud infrastructure is bright, and it’s changing fast. As we continue to work to meet computing demands from around the world, today we are thrilled to welcome Uri Frank as our VP of Engineering for server chip design,” Amin Vahdat, Google Fellow and VP of systems infrastructure, wrote in a blog post announcing the hire.

With Frank, Google gets an experienced chip industry executive who spent more than two decades at Intel, rising from engineering roles to corporate vice president of the Design Engineering Group, his final role before leaving the company earlier this month.

Frank will lead Google’s custom chip division from Israel. As he said in his announcement on LinkedIn, it was a big step to join a company with a long history of building custom silicon.

“Google has designed and built some of the world’s largest and most efficient computing systems. For a long time, custom chips have been an important part of this strategy. I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure,” Frank wrote.

Google’s history of building its own chips dates back to 2015, when it deployed its first Tensor Processing Units (TPUs), custom silicon built to accelerate TensorFlow workloads. It moved into video processing chips in 2018 and added OpenTitan, an open-source chip design with a security focus, in 2019.

Frank’s job will be to build on this foundation, working with customers and partners on new custom chip architectures. The company wants to move away from buying motherboard components from different vendors and toward building its own “system on a chip,” or SoC, which it says will be drastically more efficient.

“Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to “Systems on Chip” (SoC) designs where multiple functions sit on the same chip, or on multiple chips inside one package. In other words, the SoC is the new motherboard,” Vahdat wrote.

While Google was early to the ‘Build Your Own Chip’ movement, we’ve seen other large-scale companies like Amazon, Facebook, Apple and Microsoft begin building their own custom chips in recent years to meet each company’s unique needs and to gain more precise control over the relationship between hardware and software.

It will be Frank’s job to lead Google’s custom chip unit and help bring it to the next level.

#chips, #cloud, #enterprise, #google, #google-cloud-platform, #hardware, #intel, #personnel, #tc


Twitter expands Google Cloud partnership to ‘learn more from data, move faster’

Twitter is upping its data analytics game in the form of an expanded, multiyear partnership with Google Cloud.

The social media giant first began working with Google in 2018 to move Hadoop clusters to the Google Cloud platform as a part of its Partly Cloudy strategy.

With the expanded agreement, Twitter will move its offline analytics, data processing and machine learning workloads to Google’s Data Cloud.

I talked with Sudhir Hasbe, Google Cloud’s director of product management and data analytics, to better understand just what this means. He said the move will give Twitter the ability to analyze data faster as part of its goal to provide a better user experience.

You see, behind every tweet, like and retweet, there is a series of data points that helps Twitter understand how people are using the service and what type of content they might want to see.

Twitter’s data platform ingests trillions of events, processes hundreds of petabytes of data and runs tens of thousands of jobs on over a dozen clusters daily. 

By expanding its partnership with Google, Twitter is essentially adopting the company’s Data Cloud, including BigQuery, Dataflow, Bigtable and machine learning (ML) tools, to make more sense of, and improve, how Twitter features are used.

Twitter declined a request for an interview but CTO Parag Agrawal said in a written statement that the company’s initial partnership was successful and led to enhanced productivity on the part of its engineering teams.  

“Building on this relationship and Google’s technologies will allow us to learn more from our data, move faster and serve more relevant content to the people who use our service every day,” he said.

Google Cloud’s Hasbe believes that organizations like Twitter need a highly scalable analytics platform so they can derive value from all the data they collect. By expanding its partnership with Google, Twitter can support significantly more use cases on its cloud platform.

“Our platform is serverless and we can help organizations, like Twitter, automatically scale up and down,” Hasbe told TechCrunch.

“Twitter can bring massive amounts of data, analyze and get insights without the burden of having to worry about infrastructure or capacity management or how many machines or servers they might need,” he added. “None of that is their problem.” 

The shift will also make it easier for Twitter’s data scientists and other similar personnel to build machine learning models and do predictive analytics, according to Hasbe.

Other organizations that have recently turned to Google Cloud to help navigate the pandemic include Bed Bath & Beyond, Wayfair, Etsy and The Home Depot.

On February 2, TC’s Frederic Lardinois reported that while Google Cloud is seeing accelerated revenue growth, its losses are also increasing. This week, Google disclosed operating income/loss for its Google Cloud business unit in its quarterly earnings. Google Cloud lost $5.6 billion in Google’s fiscal year 2020, which ended December 31. That’s on $13 billion of revenue.

#apache-hadoop, #cloud, #cloud-computing, #cloud-infrastructure, #data-analysis, #data-processing, #google-cloud, #google-cloud-platform, #machine-learning, #twitter


Google Cloud launches Apigee X, the next generation of its API management platform

Google today announced the launch of Apigee X, the next major release of the Apigee API management platform it acquired back in 2016.

“If you look at what’s happening — especially after the pandemic started in March last year — the volume of digital activities has gone up in every kind of industry, all kinds of use cases are coming up. And one of the things we see is the need for a really high-performance, reliable, global digital transformation platform,” Amit Zavery, Google Cloud’s head of platform, told me.

He noted that the number of API calls has gone up 47 percent from last year and that the platform now handles about 2.2 trillion API calls per year.

At the core of the updates are deeper integrations with Google Cloud’s AI, security and networking tools. In practice, this means Apigee users can now deploy their APIs across 24 Google Cloud regions, for example, and use Google’s caching services in more than 100 edge locations.

Image Credits: Google

In addition, Apigee X now integrates with Google’s Cloud Armor firewall and its Cloud Identity Access Management platform. This also means that Apigee users won’t have to use third-party tools for their firewall and identity management needs.

“We do a lot of AI/ML-based anomaly detection and operations management,” Zavery explained. “We can predict any kind of malicious intent or any other things which might happen to those API calls or your traffic by embedding a lot of those insights into our API platform. I think [that] is a big improvement, as well as new features, especially in operations management, security management, vulnerability management and making those a core capability so that as a business, you don’t have to worry about all these things. It comes with the core capabilities and that is really where the front doors of digital front-ends can shine and customers can focus on that.”

The platform now also makes better use of Google’s AI capabilities to help users identify anomalies or predict traffic for peak seasons. The idea is to help customers automate many of these standard operational tasks and, of course, improve security at the same time.

As Zavery stressed, API management is now about more than just managing traffic between applications. Beyond helping customers manage their digital transformation projects, the Apigee team is now thinking about what it calls ‘digital excellence.’ “That’s how we’re thinking of the journey for customers moving from not just ‘hey, I can have a front end,’ but what about all the excellent things you want to do and how we can do that,” Zavery said.

“During these uncertain times, organizations worldwide are doubling-down on their API strategies to operate anywhere, automate processes, and deliver new digital experiences quickly and securely,” said James Fairweather, Chief Innovation Officer at Pitney Bowes. “By powering APIs with new capabilities like reCAPTCHA Enterprise, Cloud Armor (WAF), and Cloud CDN, Apigee X makes it easy for enterprises like us to scale digital initiatives, and deliver innovative experiences to our customers, employees and partners.”

#api, #apigee, #artificial-intelligence, #caching, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #firewall, #google, #google-cloud, #google-cloud-platform


Google Cloud lost $5.6B in 2020

Google continues to bet heavily on Google Cloud and while it is seeing accelerated revenue growth, its losses are also increasing. For the first time today, Google disclosed operating income/loss for its Google Cloud business unit in its quarterly earnings today. Google Cloud lost $5.6 billion in Google’s fiscal year 2020, which ended December 31. That’s on $13 billion of revenue.

While this may look a bit dire at first glance (cloud computing should be pretty profitable, after all), there are different ways of looking at this. On the one hand, losses are mounting, up from $4.3 billion in 2018 and $4.6 billion in 2019, but revenue is also seeing strong growth, up from $5.8 billion in 2018 and $8.9 billion in 2019. What we’re seeing here, more than anything else, is Google investing heavily in its cloud business.

Google’s Cloud unit, led by its CEO Thomas Kurian, includes all of its cloud infrastructure and platform services, as well as Google Workspace (which you probably still refer to as G Suite). And that’s exactly where Google is making a lot of investments right now. Data centers, after all, don’t come cheap and Google Cloud launched four new regions in 2020 and started work on others. That’s on top of its investment in its core services and a number of acquisitions.

Image Credits: Google

“Our strong fourth quarter performance, with revenues of $56.9 billion, was driven by Search and YouTube, as consumer and business activity recovered from earlier in the year,” Ruth Porat, CFO of Google and Alphabet, said. “Google Cloud revenues were $13.1 billion for 2020, with significant ongoing momentum, and we remain focused on delivering value across the growth opportunities we see.”

For now, though, Google’s core business, which saw a strong rebound in its advertising business in the last quarter, is subsidizing its cloud expansion.

Meanwhile, over in Seattle, AWS today reported revenue of $12.74 billion in the last quarter alone and operating income of $3.56 billion. For 2020, AWS’s operating income was $13.5 billion.

#alphabet, #amazon-web-services, #artificial-intelligence, #aws, #ceo, #cfo, #cloud-computing, #cloud-infrastructure, #companies, #computing, #diane-greene, #earnings, #google, #google-cloud, #google-cloud-platform, #ruth-porat, #seattle, #thomas-kurian, #world-wide-web


With a $50B run rate in reach, can anyone stop AWS?

AWS, Amazon’s flourishing cloud arm, has been growing at a rapid clip for more than a decade. An early public cloud infrastructure vendor, it has taken advantage of first-to-market status to become the most successful player in the space. In fact, one could argue that many of today’s startups wouldn’t have gotten off the ground without the formation of cloud companies like AWS giving them easy access to infrastructure without having to build it themselves.

In Amazon’s most recent earnings report, AWS generated revenues of $11.6 billion, good for a run rate of more than $46 billion. That makes the next AWS milestone a run rate of $50 billion, something that could be in reach in less than two quarters if it continues its pace of revenue growth.

The good news for competing companies is that in spite of the market size and relative maturity, there is still plenty of room to grow.

The cloud division’s growth is slowing in percentage terms as it comes firmly up against the law of large numbers: AWS has to grow every quarter compared to an ever-larger revenue base. The result of this dynamic is that while AWS’ year-over-year growth rate is slowing over time — from 35% in Q3 2019 to 29% in Q3 2020 — the pace at which it is adding $10 billion chunks of annual revenue run rate is accelerating.

At the AWS re:Invent customer conference this year, AWS CEO Andy Jassy talked about the pace of change over the years, saying that it took the following number of months to grow its run rate by $10 billion increments:

123 months ($0-$10 billion)
23 months ($10 billion-$20 billion)
13 months ($20 billion-$30 billion)
12 months ($30 billion-$40 billion)

Image Credits: TechCrunch (data from AWS)

Extrapolating from the above trend, it should take AWS fewer than 12 months to scale from a run rate of $40 billion to $50 billion. Stating the obvious, Jassy said “the rate of growth in AWS continues to accelerate.” He also took the time to point out that AWS is now the fifth-largest enterprise IT company in the world, ahead of enterprise stalwarts like SAP and Oracle.
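That extrapolation is easy to sanity-check. A minimal sketch using the figures in this article ($11.6 billion in quarterly revenue, roughly 29% year-over-year growth), with the assumption, ours, that growth compounds evenly across quarters:

```python
# Back-of-the-envelope projection of AWS' revenue run rate.
# Inputs from the article: $11.6B quarterly revenue, ~29% YoY growth.
# Assumption (ours): growth compounds evenly across quarters.

quarterly_revenue = 11.6  # $B, most recent quarter
yoy_growth = 0.29
quarterly_growth = (1 + yoy_growth) ** 0.25 - 1  # ~6.6% per quarter

quarters = 0
revenue = quarterly_revenue
while revenue * 4 < 50:  # run rate = quarterly revenue annualized
    revenue *= 1 + quarterly_growth
    quarters += 1

print(f"current run rate: ${quarterly_revenue * 4:.1f}B")
print(f"quarters until a $50B run rate: {quarters}")
```

Under those assumptions the $50 billion run rate arrives within two quarters, consistent with the estimate above; the actual timing depends on how AWS’ growth rate evolves.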

What’s amazing is that AWS achieved its scale so fast, not even existing until 2006. That growth rate makes us ask a question: Can anyone hope to stop AWS’ momentum?

The short answer is that it doesn’t appear likely.

Cloud market landscape

A good place to start is surveying the cloud infrastructure competitive landscape to see if there are any cloud companies that could catch the market leader. According to Synergy Research, AWS remains firmly in front, and it doesn’t look like any competitor could catch AWS anytime soon unless some market dynamic caused a drastic change.

Synergy Research cloud market share leaders. Amazon is first, Microsoft is second and Google is third.

Image Credits: Synergy Research

With around a third of the market, AWS is the clear front-runner. Its closest and fiercest rival, Microsoft, has around 20%. To put that into perspective, last quarter AWS had $11.6 billion in revenue compared to Microsoft’s $5.2 billion Azure result. Microsoft’s equivalent cloud figure is growing faster, at 47%, but like AWS’ growth rate it has begun to decline steadily as Azure gains market share and revenue and runs up against the same law of large numbers.

#amazon, #aws, #cloud, #cloud-infrastructure-market, #enterprise, #google-cloud-platform, #microsoft-azure, #tc


Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

Image Credits: Google

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers that are smaller than the full-blown data centers the large clouds currently operate but that allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years. For the longest time, after all, Google Cloud Platform lagged well behind its competitors. Only three years ago, Google Cloud offered only 13 regions, for example. And that’s on top of the company’s heavy investment in submarine cables and edge locations.

#amazon-web-services, #aws, #chile, #cloud-computing, #cloud-infrastructure, #france, #germany, #google, #google-cloud-platform, #indonesia, #italy, #microsoft, #nuodb, #qatar, #salt-lake-city, #saudi-arabia, #south-korea, #spain, #tc, #united-states, #web-hosting, #web-services


Google grants $3 million to the CNCF to help it run the Kubernetes infrastructure

Back in 2018, Google announced that it would provide $9 million in Google Cloud Platform credits — divided over three years — to the Cloud Native Computing Foundation (CNCF) to help it run the development and distribution infrastructure for the Kubernetes project. Previously, Google owned and managed those resources for the community. Today, the two organizations announced that Google is adding on to this grant with another $3 million annual donation to the CNCF to “help ensure the long-term health, quality and stability of Kubernetes and its ecosystem.”

As Google notes, the funds will go to the testing and infrastructure of the Kubernetes project, which currently sees over 2,300 monthly pull requests that trigger about 400,000 integration test runs, all of which use about 300,000 core hours on GCP.
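Those figures imply a fairly modest average cost per individual test run. A back-of-the-envelope sketch, assuming, as the phrasing suggests, that all three numbers describe the same monthly window:

```python
# Average cost of the Kubernetes CI load from the figures Google
# quotes: ~2,300 monthly pull requests, ~400,000 integration test
# runs, ~300,000 GCP core hours. Assumption (ours): all three
# figures describe the same monthly window.

monthly_prs = 2_300
test_runs = 400_000
core_hours = 300_000

runs_per_pr = test_runs / monthly_prs        # test runs triggered per PR
core_hours_per_run = core_hours / test_runs  # core hours consumed per run

print(f"test runs per PR: {runs_per_pr:.0f}")
print(f"core minutes per test run: {core_hours_per_run * 60:.0f}")
```

That works out to roughly 174 test runs per pull request and about 45 core minutes per run, averages that say nothing about the long tail of expensive end-to-end jobs.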

“I’m really happy that we’re able to continue to make this investment,” Aparna Sinha, a director of product management at Google and the chairperson of the CNCF governing board, told me. “We know that it is extremely important for the long-term health, quality and stability of Kubernetes and its ecosystem and we’re delighted to be partnering with the Cloud Native Computing Foundation on an ongoing basis. At the end of the day, the real goal of this is to make sure that developers can develop freely and that Kubernetes, which is of course so important to everyone, continues to be an excellent, solid, stable standard for doing that.”

Sinha also noted that Google contributes a lot of code to the project, with 128,000 code contributions in the last twelve months alone. But on top of these technical contributions, the team is also making in-kind contributions through community engagement and mentoring, for example, in addition to the kind of financial contributions the company is announcing today.

“The Kubernetes project has been growing so fast — the releases are just one after the other,” said Priyanka Sharma, the General Manager of the CNCF. “And there are big changes, all of this has to run somewhere. […] This specific contribution of the $3 million, that’s where that comes in. So the Kubernetes project can be stress-free, [knowing] they have enough credits to actually run for a full year. And that security is critical because you don’t want Kubernetes to be wondering where will this run next month. This gives the developers and the contributors to the project the confidence to focus on feature sets, to build better, to make Kubernetes ever-evolving.”

It’s worth noting that while both Google and the CNCF are putting their best foot forward here, there have been some questions around Google’s management of the Istio service mesh project, which was incubated by Google and IBM a few years ago. At some point in 2017, there was a proposal to bring it under the CNCF umbrella, but that never happened. This year, Istio became one of the founding projects of Open Usage Commons, though that group is mostly concerned with trademarks, not with project governance. All of this may seem like a lot of inside baseball — and it is — but it has led some members of the open-source community to question Google’s commitment to organizations like the CNCF.

“Google contributes to a lot of open-source projects. […] There’s a lot of them, many are with open-source foundations under the Linux Foundation, many of them are otherwise,” Sinha said when I asked her about this. “There’s nothing new, or anything to report about anything else. In particular, this discussion — and our focus very much with the CNCF here is on Kubernetes, which I think — out of everything that we do — is by far the biggest contribution or biggest amount of time and biggest amount of commitment relative to anything else.”

#aparna-sinha, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #cncf, #computing, #developer, #free-software, #google, #google-cloud-platform, #kubernetes, #priyanka-sharma, #product-management, #tc, #web-services


Google Cloud launches its Business Application Platform based on Apigee and AppSheet

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service meant to make it easier for developers to secure and manage their APIs across Google’s cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you’d expect, including authentication, key validation and rate limiting.
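Rate limiting, one of the gateway features listed above, is commonly implemented as a token bucket. The sketch below is a generic illustration of the technique, not Google’s implementation, and the rate and capacity values are invented:

```python
import time

# A minimal token-bucket rate limiter, the kind of per-key policy an
# API gateway enforces. Generic illustration only; not Google's code.

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 requests/sec, bursts of 10
allowed = [bucket.allow() for _ in range(12)]
print(allowed)  # in a tight loop, the first 10 pass and the rest are throttled
```

A managed gateway applies a policy like this per API key at the edge, so the backend never sees the rejected requests.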

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools for automating processes inside the service today, thanks to the early access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won’t have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, “provides contextual suggestions based on natural language inputs.”

“We are confident the new category of business application platforms will help empower both technical and line of business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications,” Google notes in today’s announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.

#alpha, #api, #api-management, #apigee, #appsheet, #cloud, #cloud-applications, #cloud-computing, #computing, #developer, #enterprise, #envoy, #google, #google-cloud, #google-cloud-platform, #mysql, #salesforce, #serverless-computing, #tc


Canalys: Google is top cloud infrastructure provider for online retailers

While Google Cloud Platform has shown some momentum in the last year, it remains a distant third behind Amazon and Microsoft in the cloud infrastructure market. But Google got some good news from Canalys today when the firm reported that GCP is the number one cloud platform provider for retailers.

Canalys didn’t provide specific numbers, but it did set overall market positions in the retail sector with Microsoft coming in second, Amazon third, followed by Alibaba and IBM in fourth and fifth respectively.

Canalys cloud infrastructure retail segment market share numbers

Image Credits: Canalys

It’s probably not a coincidence that Google went after retail. Many retailers don’t want to put their cloud presence onto AWS, as Amazon.com competes directly with these retailers. Brent Leary, founder and principal analyst at CRM Essentials, says that as such, the news doesn’t really surprise him.

“Retailers have to compete with Amazon, and I’m guessing the last thing they want to do is use AWS and help Amazon fund all their new initiatives and experiments that in some cases will be used against them,” Leary told TechCrunch. Further, he said that many retailers would also prefer to keep their customer data off of Amazon’s services.

Canalys Senior Director Alex Smith says that this Amazon effect combined with the pandemic and other technological factors has been working in Google’s favor, at least in the retail sector. “Now more than ever, retailers need a digital strategy to win in an omnichannel world, especially with Amazon’s online dominance. Digital is applied everywhere from customer experience to cost optimization, and the overall technological capability of a retailer is what will define its success,” he said.

COVID-19 has forced many retailers to close stores for extended periods of time, and when you combine that with people being more reluctant to go inside stores when they do open, retailers have had to take a crash course in eCommerce if they didn’t have a significant online presence already.

Canalys points out that Google has lured customers with its advertising and search capabilities beyond just pure infrastructure offerings, taking advantage of its other strengths to grow the market segment.

Recognizing this, Google has been making a big retail push including a big partnership with Salesforce and specific products announced at Google Cloud Next last year. As we wrote at the time of the retail offering,

The company offers eCommerce Hosting, designed specifically for online retailers, and it is offering a special premium program, so retailers get “white glove treatment with technical architecture reviews and peak season operations support…” according to the company. In other words, it wants to help these companies avoid disastrous, money-losing results when a site goes down due to demand.

What’s more, Canalys reports that Google Cloud has also been hiring aggressively and forming partnerships with big systems integrators to help grow the retail business. Retail customers include Home Depot, Kohl’s, Costco and Best Buy.

#asia, #canalys, #cloud, #ecommerce, #enterprise, #google-cloud-platform, #retail, #tc, #thomas-kurian


Even as cloud infrastructure growth slows, revenue rises over $30B for quarter

The cloud market is coming into its own during the pandemic as the novel coronavirus forced many companies to accelerate plans to move to the cloud, even while the market was beginning to mature on its own.

This week, the big three cloud infrastructure vendors — Amazon, Microsoft and Google — all reported their earnings, and while the numbers showed that growth was beginning to slow down, revenue continued to increase at an impressive rate, surpassing $30 billion for a quarter for the first time, according to Synergy Research Group numbers.

#amazon, #aws, #azure, #canalys, #cloud, #cloud-market-share, #earnings, #enterprise, #extra-crunch, #google, #google-cloud-platform, #market-analysis, #microsoft, #synergy-research, #tc


Google Cloud earns defense contract win for Anthos multi-cloud management tool

Google dropped out of the Pentagon’s JEDI cloud contract battle fairly early in the game, citing a conflict with its “AI principles.” However, today the company announced a new seven-figure contract with the DoD’s Defense Innovation Unit (DIU), a big win for the cloud unit and CEO Thomas Kurian.

While the company would not get specific about the number, the new contract involves using Anthos, the tool the company announced last year to secure DIU’s multi-cloud environment. In spite of the JEDI contract involving a single vendor, the DoD has always used solutions from all three major cloud vendors — Amazon, Microsoft and Google — and this solution will provide a way to monitor security across all three environments, according to the company.

“Multi-cloud is the future. The majority of commercial businesses run multi-cloud environments securely and seamlessly, and this is now coming to the federal government as well,” Mike Daniels, VP of Global Public Sector at Google Cloud told TechCrunch.

The idea is to manage security across three environments with help from cloud security vendor Netskope, which is also part of the deal. “The multi-cloud solution will be built on Anthos, allowing DIU to run web services and applications across Google Cloud, Amazon Web Services and Microsoft Azure — while being centrally managed from the Google Cloud Console,” the company wrote in a statement.

Daniels says that while this is a deal with DIU, he could see it expanding to other parts of DoD. “This is a contract with the DIU, but our expectation is that the DoD will look at the project as a model for how to implement their own security posture.”

Google Cloud Platform remains way back in the cloud infrastructure pack in third place with around 8% market share. For context, AWS has around 33% market share and Microsoft has around 18%.

While JEDI, a $10 billion, winner-take-all prize, remains mired in controversy and an ongoing battle among the Pentagon, Amazon and Microsoft, this deal shows that the defense department is looking at advanced technology like Anthos to help it manage a multi-cloud world regardless of what happens with JEDI.

#cloud, #enterprise, #google, #google-anthos, #google-cloud-platform, #multi-cloud, #security, #tc, #us-department-of-defense


In spite of pandemic (or maybe because of it), cloud infrastructure revenue soars

It’s fair to say that even before the impact of COVID-19, companies had begun a steady march to the cloud. Maybe it wasn’t fast enough for AWS, as Andy Jassy made clear in his 2019 re:Invent keynote, but it was happening all the same, and the steady revenue increases across the cloud infrastructure market bore that out.

As we look at the most recent quarter’s earnings reports for the main players in the market, it seems the pandemic and the economic fallout have done little to slow that down. In fact, they may be contributing to its growth.

According to numbers supplied by Synergy Research, the cloud infrastructure market totaled $29 billion in revenue for Q1 2020.

Image Credit: Synergy Research

Synergy’s John Dinsdale, who has been watching this market for a long time, says the pandemic could be contributing modestly to some of that growth. He doesn’t necessarily expect these companies to get through it unscathed either, but as businesses shift operations out of offices, that shift could be part of the reason for the increased demand we saw in the first quarter.

“For sure, the pandemic is causing some issues for cloud providers, but in uncertain times, the public cloud is providing flexibility and a safe haven for enterprises that are struggling to maintain normal operations. Cloud provider revenues continue to grow at truly impressive rates, with AWS and Azure in aggregate now having an annual revenue run rate of well over $60 billion,” Dinsdale said in a statement.

AWS led the way with a third of the market, or more than $10 billion in quarterly revenue, as it continues to hold a substantial lead in market share. Microsoft came in second, growing at a brisker 59% rate to take 18% of the market. While Microsoft doesn’t break out Azure’s numbers, applying Synergy’s figures would put Azure revenue at around $5.2 billion. Meanwhile, Google came in third with $2.78 billion.
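The Azure estimate above can be reproduced with a quick back-of-the-envelope calculation. This is just a sketch using the Synergy figures already quoted in the text (total market size and Microsoft’s share); the arithmetic and rounding are ours:

```python
# Back out an Azure revenue estimate from Synergy's Q1 2020 figures,
# since Microsoft does not break out Azure revenue itself.
total_q1_2020 = 29.0   # total cloud infrastructure revenue, in $B (Synergy Research)
azure_share = 0.18     # Microsoft's reported market share

azure_revenue = total_q1_2020 * azure_share
print(f"Estimated Azure Q1 2020 revenue: ${azure_revenue:.2f}B")  # ~ $5.2B
```

The same share-times-total arithmetic is how analysts typically derive vendor-level estimates when a company reports only blended segment revenue.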

If you’re keeping track of market share at home, that comes out to 32% for AWS, 18% for Microsoft and 8% for Google. This split has remained fairly steady, although Microsoft has managed to gain a few percentage points over the last several quarters as its overall growth rate outpaces Amazon’s.

#amazon, #andy-jassy, #aws, #azure, #cloud, #cloud-infrastructure-market-share, #earnings, #enterprise, #google, #google-cloud-platform, #john-dinsdale, #microsoft, #synergy-research

Google Cloud opens its Las Vegas region

Google Cloud today announced the official opening of its Las Vegas data center region. With this, Google Cloud now operates four regions in the western U.S., with Las Vegas complementing Google Cloud’s existing data centers in Los Angeles, California; The Dalles, Oregon; and its recently opened Salt Lake City, Utah region.

In total, Google now offers its customers the option to host their applications in 23 regions globally, and with the opening of this new region, seven of those are in the U.S.

Like all of Google’s new regions, Las Vegas will offer three availability zones and access to most of Google Cloud’s services. In Vegas, though, developers won’t yet be able to use relatively new services like Cloud Functions and Cloud Run. Some other features, including Cloud HSM and Secret Manager, are not available yet either.

The company first announced the Vegas expansion in July 2019. And while it’s eerily quiet in Las Vegas right now, the idea behind these new regions is always to give companies the option to be close to their customers and offer them low-latency access to their applications, as well as the ability to distribute workloads across a wider geographic region.

Earlier this year, Google also announced that it would open its regions in Jakarta, Seoul and Warsaw over the course of 2020. So far, it doesn’t look like the COVID-19 pandemic is slowing these plans down.

For Las Vegas, Google’s launch partner is Aristocrat, which fittingly offers digital products for the gambling industry.

“Cloud technologies enable two important outcomes for us,” said James Alverez, CIO of Aristocrat. “First the ability to securely, consistently and immediately enable and disable game development platforms; and second, our ability to expand and contract our infrastructure based on demand. Both of these capabilities allow us to flex our technology to fully support the demands of our customers and our business. The Las Vegas region gives us the opportunity to more directly engage Google Cloud services and take advantage of an entry point into the network.”

#california, #cloud, #cloud-computing, #cloud-infrastructure, #companies, #google, #google-cloud-platform, #las-vegas, #los-angeles, #nevada, #oregon, #salt-lake-city, #united-states, #utah

Google Cloud’s fully managed Anthos is now generally available for AWS

A year ago, back in the days of in-person conferences, Google officially announced the launch of its Anthos multi-cloud application modernization platform at its Cloud Next conference. The promise of Anthos was always that it would allow enterprises to write their applications once, package them into containers and then manage their multi-cloud deployments across GCP, AWS, Azure and their on-prem data centers.

Until now, support for AWS and Azure was only available in preview, but today, the company is making support for AWS and on-premises generally available. Microsoft Azure support remains in preview, though.

“As an AWS customer now, or a GCP customer, or a multi-cloud customer, […] you can now run Anthos on those environments in a consistent way, so you don’t have to learn any proprietary APIs and be locked in,” Eyal Manor, the VP of engineering in charge of Anthos, told me. “And for the first time, we enable the portability between different infrastructure environments as opposed to what has happened in the past where you were locked into a set of API’s.”

Manor stressed that Anthos was designed to be multi-cloud from day one. As for why AWS support is launching ahead of Azure, Manor said that there was simply more demand for it. “We surveyed the customers and they said, hey, we want, in addition to GCP, we want AWS,” he said. But support for Azure will come later this year and the company already has a number of preview customers for it. In addition, Anthos will also come to bare metal servers in the future.

Looking even further ahead, Manor also noted that better support for machine learning workloads is on the way. Many businesses, after all, want to be able to update and run their models right where their data resides, no matter what cloud that may be. There, too, the promise of Anthos is that developers can write the application once and then run it anywhere.

“I think a lot of the initial response and excitement was from the developer audiences,” Jennifer Lin, Google Cloud’s VP of product management, told me. “Eric Brewer had led a white paper that we did to say that a lot of the Anthos architecture sort of decouples the developer and the operator stakeholder concerns. There hadn’t been a multi-cloud shared software architecture where we could do that and still drive emerging and existing applications with a common shared software stack.”

She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers, preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”

#amazon-web-services, #api, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #google, #google-cloud, #google-cloud-platform, #machine-learning, #microsoft, #microsoft-azure, #netapp, #product-management, #tc
