Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at the company’s Cloud Next event in 2019 (before that, Google called the project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me he believes a number of factors are putting Anthos in a good position right now. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are scaling out their use of Kubernetes and have to figure out how best to scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is the integration of a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first internal services to run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 

#anthos, #apigee, #aws, #ceo, #chrome-os, #cisco, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #enterprise, #google, #google-cloud, #google-cloud-platform, #ibm, #kubernetes, #microsoft, #microsoft-windows, #red-hat, #sundar-pichai, #vmware


Pulumi launches version 3.0 of its infrastructure-as-code platform

Pulumi was one of the first of what is now a growing number of infrastructure-as-code startups and today, at its developer conference, the company is launching version 3.0 of its cloud engineering platform. With 70 new features and about 1,000 improvements since version 2.0, this is Pulumi’s biggest release yet.

The new release includes features that range from support for Google Cloud as an infrastructure provider (now in preview) to a new Automation API that turns Pulumi into a library that can be called from other applications. It basically allows developers to write tools that can, for example, provision and configure their own infrastructure for each customer of a SaaS application.
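To make that concrete, here is a minimal sketch of the Automation API in Python, one of the languages Pulumi supports. It assumes the Pulumi SDK and AWS provider packages are installed and cloud credentials are configured; the project, stack and bucket names are illustrative, not taken from Pulumi’s docs:

```python
# A minimal Automation API sketch: deploy infrastructure from
# ordinary application code, without invoking the Pulumi CLI.
import pulumi_aws as aws
from pulumi import automation as auto

def per_customer_program():
    # The same infrastructure definition a normal Pulumi project
    # would contain, expressed as a plain function.
    aws.s3.Bucket("customer-data-bucket")

# Create (or select) a stack per customer and deploy it inline.
stack = auto.create_or_select_stack(
    stack_name="customer-acme",          # hypothetical tenant name
    project_name="saas-provisioner",     # hypothetical project name
    program=per_customer_program,
)
stack.up(on_output=print)  # provisions the customer's infrastructure
```

Because the deployment is just a function call, a SaaS backend could run it once per tenant, which is exactly the per-customer provisioning scenario described above.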

Image Credits: Pulumi

The company is also launching Pulumi Packages and Components for creating opinionated infrastructure building blocks that developers can then call up from their preferred languages.

Also new is support for Pulumi’s CI/CD Assistant across all the company’s paid plans. This feature makes it easier to deploy cloud infrastructure and applications through more than a dozen popular CI/CD platforms, including the likes of AWS Code Service, Azure DevOps, CircleCI, GitLab CI, Google Cloud Build, Jenkins, Travis CI and Spinnaker. Until now, you needed to be on a Team Pro or Enterprise plan to use this, but it’s now available to all paying users.

In addition, the company is expanding some of its enterprise features with, for example, SAML SSO, SCIM synchronization and new role types.

“When we started out on Pulumi, we knew we wanted to enable developers and infrastructure teams to collaborate more closely to build more innovative software,” said Joe Duffy, Pulumi co-founder and CEO. “What we didn’t know yet is that we’d end up calling this ‘Cloud Engineering,’ that our customers would call it that too, and that they would go on this journey with us. We are now centering our entire platform around this core idea which is now accelerating as the modern cloud continues to disrupt entire business models. Pulumi 3.0 is an exciting milestone in realizing this vision of the future — democratizing access to the cloud and helping teams build better software together — with much more to come.”

#api, #aws, #cloud-computing, #cloud-infrastructure, #co-founder, #computing, #continuous-integration, #devops, #gitlab, #identity-management, #jenkins, #joe-duffy, #pulumi, #software-engineering, #tc, #technology, #version-control


Amazon partners with Seraphim on AWS accelerator for space startups

Amazon will soon be a big part of the space economy in the form of its Kuiper satellite internet constellation, but here on Earth its ambitions are more commonplace: getting an accelerator going. The company has partnered with space-focused VC outfit Seraphim Capital to create a four-week program with (among other things) a $100,000 AWS credit for a carrot.

Applications are open now for the AWS Space Accelerator, with the only requirement that you’re aiming for the space sector and plan to use AWS at some point. Ten applicants will be accepted; you have until April 21 to apply.

The program sounds fairly straightforward: a “technical, business, and mentorship” deal where you’ll likely learn how to use AWS properly, get some good tips from the AWS Partner Network and other space-focused experts on tech, regulations and security, then rub shoulders with some VCs to talk about that round you’re putting together. (No doubt Seraphim’s team gets first dibs, but there doesn’t appear to be any strict equity agreement.)

“Selected startups may receive up to $100,000 in AWS Activate credit,” the announcement says, which does hedge somewhat, but probably legal made them put that in.

There are a good number of space-focused programs out there, but not nearly enough to cover demand — there are a lot of space startups! And they often face the special challenge of being highly technical, having customers in the public sector and needing rather a lot of cash to get going compared with your average enterprise SaaS company.

We’ll understand more about the program once the first cohort is announced, likely not for at least a month or two.

#accelerator, #aerospace, #amazon, #aws, #seraphim-capital


Arm announces the next generation of its processor architecture

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates to the platform that warrant a shift in version numbers. Unsurprisingly, Armv9 builds on v8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s confidential compute architecture and the concept of Realms. These Realms enable developers to write applications where the data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

Image Credits: Arm

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.

Image Credits: Arm

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, this was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, not just for mobile CPUs but also for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”

#ai-chips, #artificial-intelligence, #aws, #companies, #computers, #computing, #dsp, #exynos, #image-processing, #machine-learning, #nvidia, #operating-system, #operating-systems, #samsung-electronics, #soc, #softbank-group, #tc


ABB and AWS team up to create an EV fleet management platform

Swiss automation and technology company ABB has announced a collaboration with Amazon Web Services (AWS) to create a cloud-based EV fleet management platform that it hopes will hasten the electrification of fleets. The platform, which the company says will help operators maintain business continuity as they switch to electric, will roll out in the second half of 2021.

This announcement comes after a wave of major delivery companies pledged to electrify their fleets. Amazon already has a number of Rivian-sourced electric delivery vans on the streets of California and plans to have 10,000 more operational this year; UPS ordered 10,000 electric vans from Arrival for its fleet; 20% of DHL’s fleet is already electric; and FedEx plans to electrify its entire fleet by 2040. A 2020 McKinsey report predicted commercial and passenger fleets in the U.S. could include as many as eight million EVs by 2030, compared with fewer than 5,000 in 2018. That would be about 10 to 15% of all fleet vehicles.

“We want to make EV adoption easier and more scalable for fleets,” Frank Muehlon, president of ABB’s e-mobility division, told TechCrunch. “To power progress, the industry must bring together the best minds and adopt an entrepreneurial approach to product development.” 

ABB brings experience in e-mobility solutions, energy management and charging technology to the table, which will combine with AWS’s cloud and software to make a single-view platform that can be tailored to whichever company is using it. Companies will be able to monitor things like charge planning, EV maintenance status, and route optimization based on the time of day, weather and use patterns. Muehlon said they’ll work with customers to explore ways to use existing data from fleets for faster implementation.

The platform will be hosted on the AWS cloud, which means it can scale anywhere AWS is available, which so far includes 25 regions globally.

The platform will be hardware-agnostic, meaning any type of EV or charger can work with it. Integration of software into specific EV fleets will depend on the fleet’s level of access to third-party asset management systems and onboard EV telematics, but the platform will support a layered feature approach, wherein each layer provides more accurate vehicle data. Muehlon says this makes for a more seamless interface than existing third-party charging management software, which doesn’t have the technology or the flexibility to work with the total breadth of EV models and charging infrastructure.

“Not only do fleet managers have to contend with the speed of development in charging technology, but they also need real-time vehicle and charging status information, access to charging infrastructures and information for hands-on maintenance,” said Muehlon. “This new real-time EV fleet management solution will set new standards in the world of electric mobility for global fleet operators and help them realize improved operations.”

This software is aimed at depot and commercial fleets, as well as public infrastructure fleets. Muehlon declined to name specific EV operators or customers lined up to use the new technology, but he did say there are “several pilots underway” which will “enable us to ensure that we are developing market-ready solutions for all kinds of fleets.”

#abb, #amazon, #amazon-web-services, #automotive, #aws, #electric-delivery-vehicles, #electric-vehicles, #ev, #logistics, #shipping, #transportation


Why Adam Selipsky was the logical choice to run AWS

When AWS CEO Andy Jassy announced in an email to employees yesterday that Tableau CEO Adam Selipsky was returning to run AWS, it was probably not the choice most considered. But to the industry watchers we spoke to over the last couple of days, it was a move that made absolute sense once you thought about it.

Gartner analyst Ed Anderson says that the cultural fit was probably too good for Jassy to pass up. Selipsky spent 11 years helping build the division; he was someone Jassy knew well and had worked side by side with for over a decade, and he could slide into the new role and be trusted to continue building the lucrative division.

Anderson says that even though the size and scope of AWS have changed dramatically since Selipsky left in 2016, when the company closed the year on a $16 billion run rate, the organization’s cultural dynamics haven’t changed all that much.

“Success in this role requires a deep understanding of the Amazon/AWS culture in addition to a vision for AWS’s future growth. Adam already knows the AWS culture from his previous time at AWS. Yes, AWS was a smaller business when he left, but the fundamental structure and strategy was in place and the culture hasn’t notably evolved since then,” Anderson told me.

Matt McIlwain, managing director at Madrona Venture Group, says the experience Selipsky gained after he left AWS will prove invaluable when he returns.

“Adam transformed Tableau from a desktop, licensed software company to a cloud, subscription software company that thrived. As the leader of AWS, Adam is returning to a culture he helped grow as the sales and marketing leader that brought AWS to prominence and broke through from startup customers to become the leading enterprise solution for public cloud,” he said.

Holger Mueller, an analyst with Constellation Research, says that Selipsky’s business experience gave him the edge over other candidates. “His business acumen won out over [internal candidates] Matt Garman and Peter DeSantis. Insight on how Salesforce works may be helpful and valued as well,” Mueller pointed out.

As for leaving Tableau and with it Salesforce, the company that purchased it for $15.7 billion in 2019, Brent Leary, founder and principal analyst at CRM Essentials believes that it was only a matter of time before some of these acquired company CEOs left to do other things. In fact, he’s surprised it didn’t happen sooner.

“Given Salesforce’s growing stable of top notch CEOs accumulated by way of a slew of high profile acquisitions, you really can’t expect them all to stay forever, and given Adam Selipsky’s tenure at AWS before becoming Tableau’s CEO, this move makes a whole lot of sense. Amazon brings back one of their own, and he is also a wildly successful CEO in his own right,” Leary said.

While the consensus is that Selipsky is a good choice, he is going to have awfully big shoes to fill. The fact is that the division is continuing to grow quickly for a business currently on a run rate of over $50 billion. With a track record like that to follow, and Jassy still close at hand, Selipsky simply has to keep letting the unit do its thing while putting his own unique stamp on it.

Any kind of change is disconcerting though, and it will be up to him to put customers and employees at ease and plow ahead into the future. Same mission. New boss.

#adam-selipsky, #andy-jassy, #aws, #cloud, #cloud-infrastructure, #enterprise, #personnel, #salesforce, #tableau, #tc


Tableau CEO Adam Selipsky is returning to AWS to replace Andy Jassy as CEO

When Amazon announced last month that Jeff Bezos was moving into the executive chairman role, and AWS CEO Andy Jassy would be taking over the entire Amazon operation, speculation began about who would replace Jassy.

People considered a number of internal candidates, such as Peter DeSantis, vice president of global infrastructure at AWS, and Matt Garman, vice president of sales and marketing. Not many would have chosen Tableau CEO Adam Selipsky, but sure enough he is returning home to run the division he left in 2016.

In an email to employees, Jassy wasted no time getting to the point that Selipsky was his choice, noting that the former employee, who helped launch the division when he was hired in 2005, spent 11 years helping Jassy build the unit before taking the job at Tableau. Through that lens, the choice makes perfect sense.

“Adam brings strong judgment, customer obsession, team building, demand generation, and CEO experience to an already very strong AWS leadership team. And, having been in such a senior role at AWS for 11 years, he knows our culture and business well,” Jassy wrote in the email.

Jassy has run AWS since its earliest days, taking it from humble beginnings as a kind of internal experiment in running a storage web service to a mega division currently on a $51 billion run rate. It is that juggernaut that will now be Selipsky’s to run, but he seems well suited for the job.

 

 

This is a breaking story. We will be adding to it.

#amazon, #andy-jassy, #aws, #cloud, #enterprise, #jeff-bezos, #personnel, #salesforce, #tableau


Amazon will expand its Amazon Care on-demand healthcare offering U.S.-wide this summer

Amazon is apparently pleased with how its Amazon Care pilot in Seattle has gone, since it announced this morning that it will be expanding the offering across the U.S. this summer, and opening it up to companies of all sizes, in addition to its own employees. The Amazon Care model combines on-demand and in-person care, and is meant as the company’s answer to shortfalls in current employer-sponsored healthcare offerings.

In a blog post announcing the expansion, Amazon touted the speed of access to care made possible for its employees and their families via the remote, chat and video-based features of Amazon Care. These are facilitated via a dedicated Amazon Care app, which provides direct, live chats with a nurse or doctor. Issues that require in-person care are then handled via a house call, so a medical professional is actually sent to your home to take care of things like administering blood tests or doing a chest exam, and prescriptions are delivered to your door as well.

The expansion is being handled differently across the in-person and remote variants of care; remote services will be available starting this summer, both to Amazon’s own employees and to other companies that sign on as customers. The in-person side will be rolling out more slowly, starting with availability in Washington, D.C., Baltimore, and “other cities in the coming months,” according to the company.

As of today, Amazon Care is expanding in its home state of Washington to begin serving other companies. The idea is that others will sign on to make Amazon Care part of their overall benefits packages for employees. Amazon is touting the speed advantages of testing services, including results delivery, for things including COVID-19 as a major strength of the service.

The Amazon Care model has a surprisingly Amazon twist, too – when using the in-person care option, the app will provide an updating ETA for when to expect your physician or medical technician, which is eerily similar to how its primary app treats package delivery.

While the Amazon Care pilot in Washington only launched a year-and-a-half ago, the company has had its collective mind set on upending the corporate healthcare industry for some time now. It announced a partnership with Berkshire Hathaway and JPMorgan back at the very beginning of 2018 to form a joint venture specifically to address the gaps they saw in the private corporate healthcare provider market.

That deep-pocketed all-star team ended up officially disbanding at the outset of this year, after having done a whole lot of not very much in the three years in between. One of the stated reasons that Amazon and its partners gave for unpartnering was that each had made a lot of progress on its own in addressing the problems it had set out to solve anyway. While Berkshire Hathaway and JPMorgan’s work in that regard might be less obvious, Amazon was clearly referring to Amazon Care.

It’s not unusual for large tech companies with lots of cash on the balance sheet and a need to attract and retain top-flight talent to spin up their own healthcare benefits for their workforces. Apple and Google both have their own on-campus wellness centers staffed by medical professionals, for instance. But Amazon’s ambitions have clearly exceeded those of its peers, and it looks intent on making a business line out of the work it did to improve its own employee care services — a strategy that isn’t too dissimilar from what happened with AWS, by the way.

#amazon, #amazon-care, #apple, #aws, #baltimore, #berkshire-hathaway, #computing, #enterprise, #eta, #google, #health, #healthcare, #jpmorgan, #physician, #seattle, #tc, #technology, #united-states, #washington, #washington-d-c


Parler sues Amazon (again), claims AWS ban sank a billion-dollar valuation

A person browsing Parler in early January, before the site got into a fight with AWS. Image Credits: Jaap Arriens | NurPhoto | Getty Images

Social media platform Parler has dropped a federal lawsuit alleging that Amazon colluded with Twitter to drive a rival offline—but in its place, the platform has filed a new state lawsuit alleging that Amazon deliberately tanked Parler’s valuation.

Parler’s new suit (PDF)—filed in King County, Washington, where Amazon is headquartered—argues mainly that Parler is no worse than the competition and that Amazon defamed and devalued it when AWS discontinued service.

The platform has been embroiled in legal battles with Amazon since January, when Amazon cut off Parler’s AWS hosting in the wake of the January 6 insurrection at the US Capitol. Parler went offline shortly after and remained that way until mid-February.


#amazon, #aws, #biz-it, #lawsuits, #parler, #policy


Microsoft Azure expands its NoSQL portfolio with Managed Instances for Apache Cassandra

At its Ignite conference today, Microsoft announced the launch of Azure Managed Instance for Apache Cassandra, its latest NoSQL database offering and a competitor to Cassandra-centric companies like Datastax. Microsoft describes the new service as a “semi-managed” offering that will help companies bring more of their Cassandra-based workloads into its cloud.

“Customers can easily take on-prem Cassandra workloads and add limitless cloud scale while maintaining full compatibility with the latest version of Apache Cassandra,” Microsoft explains in its press materials. “Their deployments gain improved performance and availability, while benefiting from Azure’s security and compliance capabilities.”

Like its counterpart, Azure SQL Managed Instance, the idea here is to give users access to a scalable, cloud-based database service. To use Cassandra in Azure before, businesses had to either move to Cosmos DB, its highly scalable database service that supports the Cassandra, MongoDB, SQL and Gremlin APIs, or manage their own fleet of virtual machines or on-premises infrastructure.
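In practice, full compatibility means standard open-source tooling should keep working against the managed service. Here is a minimal sketch using the open-source Python cassandra-driver; the contact point and credentials are hypothetical placeholders, and a real managed endpoint would typically also require SSL options:

```python
# A minimal sketch of connecting to a Cassandra-compatible managed
# endpoint with the standard open-source Python driver.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

auth = PlainTextAuthProvider(username="cassandra", password="<secret>")
cluster = Cluster(
    ["<managed-instance-contact-point>"],  # hypothetical endpoint
    port=9042,
    auth_provider=auth,
)
session = cluster.connect()

# Ordinary CQL works unchanged against a compatible service.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.users (id uuid PRIMARY KEY, name text)"
)
```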

Cassandra was originally developed at Facebook and then open-sourced in 2008. A year later, it joined the Apache Foundation, and today it’s used widely across the industry, with companies like Apple and Netflix betting on it for some of their core services. AWS launched a managed Cassandra-compatible service at its re:Invent conference in 2019 (it’s called Amazon Keyspaces today), while Microsoft only launched the Cassandra API for Cosmos DB last November. With today’s announcement, though, the company can now offer a full range of Cassandra-based services for enterprises that want to move these workloads to its cloud.

#amazon, #apache-cassandra, #api, #apple, #aws, #cloud, #computing, #data, #data-management, #datastax, #developer, #enterprise, #facebook, #microsoft, #microsoft-ignite-2021, #microsoft-azure, #mongodb, #netflix, #nosql, #sql, #tc


AWS reorganizes DeepRacer League to encourage more newbies

AWS launched the DeepRacer League in 2018 as a fun way to teach developers machine learning, and it’s been building on the idea ever since. Today, it announced the latest league season with two divisions: Open and Pro.

As Marcia Villalba wrote in a blog post announcing the new league, “AWS DeepRacer is an autonomous 1/18th scale race car designed to test [reinforcement learning] models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.”
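At the heart of every DeepRacer entry is a Python reward function that scores the car’s telemetry at each simulation step; the reinforcement learning model is trained to maximize it. A minimal example of the kind of function entrants write, using a few of the documented `params` fields:

```python
def reward_function(params):
    """A minimal DeepRacer-style reward function: favor staying
    near the center line, penalize leaving the track."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for going off track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward shrinks as the car drifts away from the center line.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3
```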

While the company started these as in-person races with physical cars, the pandemic has forced AWS to make the league a virtual event over the last year, and the format seemed to be keeping newcomers out. Because the goal is to teach people about machine learning, getting new people involved is crucial to the company.

That’s why it created the Open League, which, as the name suggests, is open to anyone. You can test your skills and, if you’re good enough to finish in the top 10%, you can compete in the Pro division. Everyone competes for prizes as well, such as vehicle customizations.

The top 16 in the Pro League each month race for a chance to go to the finals at AWS re:Invent in 2021, an event that may or may not be virtual, depending on where we are in the pandemic recovery.

#amazon, #artificial-intelligence, #aws, #aws-deepracer, #cloud, #developer, #machine-learning


ClimaCell plans to launch its own satellites to improve its weather predictions

The weather data and forecasting startup ClimaCell today announced that it plans to launch its own constellation of small weather satellites. These radar-equipped satellites will allow ClimaCell to improve its ability to get a better picture of global weather and improve its forecasting abilities. The company expects the first of these to launch in the second half of 2022.

As ClimaCell CEO Shimon Elkabetz points out in today’s announcement, ground-based radar coverage, which allows you to get information about precipitation and cloud structure, remains spotty, even in the U.S., which in turn often makes even basic forecasting more difficult. And while there are (expensive) space-based radar satellites available, those often only revisit the same area every three days, limiting their usefulness. ClimaCell hopes that its constellation of small, specialized satellites will offer hourly revisit times.

“We started with proprietary sensing and modeling to predict the weather more accurately at every point in the world, and built on top of it one software platform that can be configured to every job and vertical,” Elkabetz writes. “[…] Now, we are evolving into a SaaS company powered by Space: We’re launching a constellation of satellites to improve weather forecasting for the entire world. For the first time, a constellation of active radar will surround Earth and provide real-time observations to feed weather forecasting at every point on the globe.”

That’s indeed a big step for the company, but we may just see more of this in the near future. While 10 years ago it would have been hard for even a well-funded company to launch its own satellites, that’s quite different now. A number of factors contributed to this, ranging from easier access to launch services to breakthroughs in building these proprietary radar satellites, the availability of auxiliary services like ground stations as a service (which even AWS and Microsoft now offer) and a whole ecosystem of vendors that specialize in building these satellites. The ClimaCell team tells me that it is talking to a lot of vendors right now and will choose one later on.

#aerospace, #aws, #climacell, #microsoft, #satellites, #software-platform, #united-states


Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of that development involves reinventing the wheel to make applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools they need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communication, state management, pub/sub and secrets management.
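To make that concrete, here is a minimal sketch of two of those building blocks (state management and pub/sub) using Dapr’s Python SDK. It assumes a Dapr sidecar is running locally and that components named “statestore” and “pubsub” are configured, as in Dapr’s default local setup:

```python
# A minimal sketch of Dapr's state management and pub/sub building
# blocks; the application never talks to a specific database or
# message broker directly, only to the sidecar.
import json
from dapr.clients import DaprClient

with DaprClient() as d:
    # State management: durable key/value state via the "statestore"
    # component, whatever backing store it is configured to use.
    d.save_state(store_name="statestore", key="order_1",
                 value=json.dumps({"qty": 2}))
    state = d.get_state(store_name="statestore", key="order_1")
    print(state.data)

    # Pub/sub: publish an event to a topic via the "pubsub" component.
    d.publish_event(pubsub_name="pubsub", topic_name="orders",
                    data=json.dumps({"order_id": "order_1"}))
```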

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

#alibaba, #alibaba-cloud, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #computing, #developer, #enterprise, #google, #hashicorp, #mark-russinovich, #microservices, #microsoft, #microsoft-azure, #new-relic, #serverless-computing, #tc


Databricks brings its lakehouse to Google Cloud

Databricks and Google Cloud today announced a new partnership that will bring to Databricks customers a deep integration with Google’s BigQuery platform and Google Kubernetes Engine. This will allow Databricks’ users to bring their data lakes and the service’s analytics capabilities to Google Cloud.

Databricks already features a deep integration with Microsoft Azure — one that goes well beyond this new partnership with Google Cloud — and the company is also an AWS partner. By adding Google Cloud to this list, the company can now claim to be the “only unified data platform available across all three clouds (Google, AWS and Azure).”

It’s worth stressing, though, that Databricks’ Azure integration is a bit of a different deal from this new partnership with Google Cloud. “Azure Databricks is a first-party Microsoft Azure service that is sold and supported directly by Microsoft. The first-party service is unique to our Microsoft partnership. Customers on Google Cloud will purchase directly from Databricks through the Google Cloud Marketplace,” a company spokesperson told me. That makes it a bit more of a run-of-the-mill partnership compared to the Microsoft deal, but that doesn’t mean the two companies aren’t just as excited about it.

“We’re delighted to deliver Databricks’ lakehouse for AI and ML-driven analytics on Google Cloud,” said Google Cloud CEO Thomas Kurian (or, more likely, one of the company’s many PR specialists who likely wrote and re-wrote this for him a few times before it got approved). “By combining Databricks’ capabilities in data engineering and analytics with Google Cloud’s global, secure network—and our expertise in analytics and delivering containerized applications—we can help companies transform their businesses through the power of data.”

Similarly, Databricks CEO Ali Ghodsi noted that he is “thrilled to partner with Google Cloud and deliver on our shared vision of a simplified, open, and unified data platform that supports all analytics and AI use-cases that will empower our customers to innovate even faster.”

And indeed, this is clearly a thrilling delight for everybody around, including customers like Conde Nast, whose Director of Data Engineering Nana Essuman is “excited to see leaders like Google Cloud and Databricks come together to streamline and simplify getting value from data.”

If you’re also thrilled about this, you’ll be able to hear more about it from both Ghodsi and Kurian at an event on April 6 that is apparently hosted by TechCrunch (though this is the first I’ve heard of it, too).

#ali-ghodsi, #artificial-intelligence, #aws, #bigquery, #cloud-computing, #cloud-infrastructure, #computing, #conde-nast, #databricks, #google, #google-cloud, #microsoft, #microsoft-azure, #partner, #tc, #thomas-kurian


TigerGraph raises $105M Series C for its enterprise graph database

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. Using a SQL-like query language (GSQL), these customers can use the company’s services to store and quickly query their graph databases. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted either on AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.
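For a sense of what querying looks like from application code, here is a hedged sketch using pyTigerGraph, the Python connector for TigerGraph; the host, graph, credentials and query name are all hypothetical placeholders:

```python
# A hedged sketch of calling a TigerGraph instance from Python;
# all names below are illustrative, not from TigerGraph's docs.
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(
    host="https://<instance>.example.com",  # hypothetical endpoint
    graphname="FraudGraph",                 # hypothetical graph
    username="tigergraph",
    password="<secret>",
)

# Run a previously installed GSQL query with a parameter, e.g. a
# fraud-detection traversal starting from one account vertex.
results = conn.runInstalledQuery("flag_suspicious_accounts",
                                 {"seed": "account_42"})
print(results)
```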

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI, and machine learning.

Like so many other companies in this space, TigerGraph is enjoying a tailwind thanks to the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”

#amgen, #analytics, #articles, #artificial-intelligence, #aws, #business-intelligence, #ceo, #citrix, #citrix-systems, #computing, #data, #database, #enterprise, #graph-database, #intuit, #jaguar-land-rover, #machine-learning, #tiger-global


Is overseeing cloud operations the new career path to CEO?

When Amazon announced last week that founder and CEO Jeff Bezos planned to step back from overseeing operations and shift into an executive chairman role, it also revealed that AWS CEO Andy Jassy, head of the company’s profitable cloud division, would replace him.

As Bessemer partner Byron Deeter pointed out on Twitter, Jassy’s promotion was similar to Satya Nadella’s ascent at Microsoft: in 2014, he moved from executive VP in charge of Azure to the chief exec’s office. Similarly, Arvind Krishna, who was promoted to replace Ginni Rometti as IBM CEO last year, also was formerly head of the company’s cloud business.

Could Nadella’s successful rise serve as a blueprint for Amazon as it makes a similar transition? While there are major differences in the missions of these companies, it’s inevitable that we will compare these two executives based on their former jobs. It’s true that they have an awful lot in common, but there are some stark differences, too.

Replacing a legend

For starters, Jassy is taking over for someone who founded one of the world’s biggest corporations. Nadella replaced Steve Ballmer, who had taken over for the company’s face, Bill Gates. Holger Mueller, an analyst at Constellation Research, says this notable difference could have a huge impact for Jassy with his founder boss still looking over his shoulder.

“There’s a lot of similarity in the two situations, but Satya was a little removed from the founder Gates. Bezos will always hover and be there, whereas Gates (and Ballmer) had retired for good. [ … ] It was clear [they] would not be coming back. [ … ] For Jassy, the owner could [conceivably] come back anytime,” Mueller said.

But Andrew Bartels, an analyst at Forrester Research, says it’s not a coincidence that both leaders were plucked from the cloud divisions of their respective companies, even if it was seven years apart.

“In both cases, these hyperscale business units of Microsoft and Amazon were the fastest-growing and best-performing units of the companies. [ … ] In both cases, cloud infrastructure was seen as a platform on top of which and around which other cloud offerings could be developed,” Bartels said. The companies both believe that the leaders of these two growth engines were best suited to lead the company into the future.

#amazon, #andy-jassy, #aws, #azure, #cloud, #ec-cloud-and-enterprise-infrastructure, #ec-news-analysis, #enterprise, #jeff-bezos, #microsoft, #personnel, #satya-nadella, #tc


The Rust programming language finds a new home in a non-profit foundation

Rust, the programming language (not the survival game), now has a new home: the Rust Foundation. AWS, Huawei, Google, Microsoft and Mozilla banded together to launch this new foundation today and put a two-year commitment to a million-dollar budget behind it. This budget will allow the project to “develop services, programs, and events that will support the Rust project maintainers in building the best possible Rust.”

Rust started out as a side project inside of Mozilla to develop an alternative to C/C++. Designed by Mozilla Research’s Graydon Hoare, with contributions from the likes of JavaScript creator Brendan Eich, Rust became the core language for some of the fundamental features of the Firefox browser and its Gecko engine, as well as Mozilla’s Servo engine. Today, Rust is the most-loved language among developers. But with Mozilla’s layoffs in recent months, a lot of the Rust team lost their jobs and the future of the language became unclear without a main sponsor, though the project itself has thousands of contributors and a lot of corporate users, so the language itself wasn’t going anywhere.

A large open-source project often needs some kind of guidance, and the new foundation will provide this; it also takes a legal entity to manage various aspects of the community, including the trademark, for example. The new Rust board will feature five directors from the five founding members, as well as five directors from project leadership.

“Mozilla incubated Rust to build a better Firefox and contribute to a better Internet,” writes Bobby Holley, Mozilla and Rust Foundation Board member, in a statement. “In its new home with the Rust Foundation, Rust will have the room to grow into its own success, while continuing to amplify some of the core values that Mozilla shares with the Rust community.”

All of the corporate sponsors have a vested interest in Rust and are using it to build (and re-build) core aspects of some of their stacks. Google recently said that it will fund a Rust-based project that aims to make the Apache webserver safer, for example, while Microsoft recently formed a Rust team, too, and is using the language to rewrite some core Windows APIs. AWS recently launched Bottlerocket, a new Linux distribution for containers that, for example, features a build system that was largely written in Rust.

 

#aws, #brendan-eich, #firefox, #free-software, #gecko, #google, #huawei, #javascript, #microsoft, #mozilla, #mozilla-foundation, #programming-languages, #rust, #servo, #software, #tc


What Andy Jassy’s promotion to Amazon CEO could mean for AWS

Blockbuster news struck late this afternoon when Amazon announced that Jeff Bezos would be stepping back as CEO of Amazon, the company he built from a business in his garage to worldwide behemoth. As he takes on the role of executive chairman, his replacement will be none other than AWS CEO Andy Jassy.

With Jassy moving into his new role at the company, the immediate question is who replaces him to run AWS. Let the games begin. Among the names being tossed about in the rumor mill are Peter DeSantis, vice president of global infrastructure at AWS, and Matt Garman, who is vice president of sales and marketing. Both are members of Bezos’ elite executive team known as the S-team and either would make sense as Jassy’s successor. Nobody knows for sure though, and it could be any number of people inside the organization, or even someone from outside. (We have asked Amazon PR to provide clarity on the successor, but as of publication we had not heard from them.)

Holger Mueller, a senior analyst at Constellation Research, says that Jassy is being rewarded for doing a stellar job raising AWS from a tiny side business to one on a $50 billion run rate. “On the finance side it makes sense to appoint an executive who intimately knows Amazon’s most profitable business, that operates in more competitive markets. [Appointing Jassy] ensures that the new Amazon CEO does not break the ‘golden goose’,” Mueller told me.

Alex Smith, VP of channels, who covers the cloud infrastructure market at analyst firm Canalys, says the writing has been on the wall that a transition was in the works. “This move has been coming for some time. Jassy is the second most public-facing figure at Amazon and has led one of its most successful business units. Bezos can go out on a high and focus on his many other ventures,” Smith said.

Smith adds that this move should enhance AWS’s place in the organization. “I think this is more of an AWS gain, in terms of its increasing strategic importance to Amazon going forwards, rather than loss in terms of losing Andy as direct lead. I expect he’ll remain close to that organization.”

Ed Anderson, a Gartner analyst, also sees Jassy as the obvious choice to take over for Bezos. “Amazon is a company driven by technology innovation, something Andy has been doing at AWS for many years now. Also, it’s worth noting that Andy Jassy has an impressive track record of building and running a very large business. Under Andy’s leadership, AWS has grown to be one of the biggest technology companies in the world and one of the most impactful in defining what the future of computing will be,” Anderson said.

In the company earnings report released today, AWS came in at $12.74 billion for the quarter, up 28% year over year from $9.60 billion a year ago. That puts the company on an elite $50 billion run rate. No other cloud infrastructure vendor, even the mighty Microsoft, is close in this category: Microsoft stands at around 20% market share compared with AWS’s approximately 33%.

It’s unclear what impact the executive shuffle will have on the company at large or AWS in particular. In some ways it feels like when Larry Ellison stepped down as CEO of Oracle in 2014 to take on the exact same executive chairman role. While Safra Catz and Mark Hurd took over as co-CEOs in that situation, Ellison has remained intimately involved with the company he helped found. It’s reasonable to assume that Bezos will do the same.

With Jassy, the company is getting a man who has risen through the ranks since joining the company in 1997 after getting an undergraduate degree and an MBA from Harvard. In 2002 he became VP/technical assistant, working directly under Bezos. It was in this role that he began to see the need for a set of common web services for Amazon developers to use. This idea grew into AWS, and Jassy became a VP at the fledgling division, working his way up until he was appointed CEO in 2016.

#amazon, #andy-jassy, #aws, #cloud, #enterprise, #jeff-bezos, #personnel, #tc


Google Cloud lost $5.6B in 2020

Google continues to bet heavily on Google Cloud, and while it is seeing accelerated revenue growth, its losses are also increasing. For the first time, Google today disclosed operating income/loss for its Google Cloud business unit in its quarterly earnings. Google Cloud lost $5.6 billion in Google’s fiscal year 2020, which ended December 31. That’s on $13 billion of revenue.

While this may look a bit dire at first glance (cloud computing should be pretty profitable, after all), there are different ways of looking at this. On the one hand, losses are mounting, up from $4.3 billion in 2018 and $4.6 billion in 2019. But revenue is also seeing strong growth, up from $5.8 billion in 2018 and $8.9 billion in 2019. What we’re seeing here, more than anything else, is Google investing heavily in its cloud business.

Google’s Cloud unit, led by its CEO Thomas Kurian, includes all of its cloud infrastructure and platform services, as well as Google Workspace (which you probably still refer to as G Suite). And that’s exactly where Google is making a lot of investments right now. Data centers, after all, don’t come cheap and Google Cloud launched four new regions in 2020 and started work on others. That’s on top of its investment in its core services and a number of acquisitions.

Image Credits: Google

“Our strong fourth quarter performance, with revenues of $56.9 billion, was driven by Search and YouTube, as consumer and business activity recovered from earlier in the year,” Ruth Porat, CFO of Google and Alphabet, said. “Google Cloud revenues were $13.1 billion for 2020, with significant ongoing momentum, and we remain focused on delivering value across the growth opportunities we see.”

For now, though, Google’s core business, which saw a strong rebound in its advertising business in the last quarter, is subsidizing its cloud expansion.

Meanwhile, over in Seattle, AWS today reported revenue of $12.74 billion in the last quarter alone and operating income of $3.56 billion. For 2020, AWS’s operating income was $13.5 billion.

#alphabet, #amazon-web-services, #artificial-intelligence, #aws, #ceo, #cfo, #cloud-computing, #cloud-infrastructure, #companies, #computing, #diane-greene, #earnings, #google, #google-cloud, #google-cloud-platform, #ruth-porat, #seattle, #thomas-kurian, #world-wide-web


Jeff Bezos will no longer be CEO of Amazon as of later this year

Amazon founder and current CEO Jeff Bezos will be transitioning to Executive Chair of the company sometime in Q3 of this year, with current AWS CEO Andy Jassy taking over the top executive role at the commerce company. Amazon announced the news alongside its earnings results on Tuesday.

Amazon shares initially rose after-hours as the market digested both the company’s earnings and its CEO news. The company beat on both earnings per share and revenues, which makes it hard to untangle the market’s response to its busy set of announcements. Update: Amazon shares have now dipped into negative territory as investors have had more time to parse the company’s total collection of announcements.

Amazon crushed earnings-per-share and revenue expectations in Q4 2020. So, any investor worried about the exit of Bezos from the CEO chair was given some measure of performance-based amelioration. Amazon’s quarter was its first to break the $100 billion mark, bringing in $125.6 billion in revenue against an anticipated $119.7 billion. And the company’s $14.09 per share in earnings was nearly double the expected $7.23.

Jassy has been identified previously as the likely successor to Bezos, after leading Amazon Web Services (AWS) to the success it currently enjoys as a leader in the cloud computing space. AWS grew its revenues by 28% in the quarter, lower than its year-ago growth rate of 34%. AWS’s net revenues expanded from $9.95 billion in the year-ago Q4 to $12.74 billion during the fourth quarter of 2020. Operating income at AWS scaled as well, from $2.60 billion in Q4 2019 to $3.56 billion in the most recent quarter.

Notably, Microsoft’s Azure business grew 50% in its most recent earnings period.

Bezos sent an email to Amazon employees, which the company also released publicly on its blog on Tuesday following the announcement. In the missive, he says that while he continues to “find [his] work meaningful and fun,” he wants to be able to devote proper time and attention to his “Day 1 Fund, the Bezos Earth Fund, Blue Origin, The Washington Post, and [his] other passions.”

Developing…

#amazon, #amazon-web-services, #andy-jassy, #aws, #ceo, #computing, #earnings, #executive, #jeff-bezos, #tc, #technology


Subscription-based pricing is dead: Smart SaaS companies are shifting to usage-based models

Software buying has evolved. The days of executives choosing software for their employees based on IT compatibility or KPIs are gone. Employees now tell their boss what to buy. This is why we’re seeing more and more SaaS companies — Datadog, Twilio, AWS, Snowflake and Stripe, to name a few — find success with a usage-based pricing model.


The usage-based model allows a customer to start at a low cost, minimizing the friction of getting started, while still preserving the ability to monetize that customer over time, because the price is directly tied to the value the customer receives. And because the model doesn’t limit the number of users who can access the software, customers are able to find new use cases, which leads to more long-term success and higher lifetime value.
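To make the mechanics of the model just described concrete, here’s a minimal sketch of how a usage-based bill might be computed with tiered per-unit rates. The tiers and prices are invented for illustration and don’t reflect any vendor’s actual pricing.

# Hypothetical usage-based billing: the per-unit price falls as volume grows,
# so small customers start cheap and heavy customers pay in proportion to value.
TIERS = [
    (100_000, 0.0010),       # first 100k units at $0.001 each
    (900_000, 0.0008),       # next 900k units at $0.0008 each
    (float("inf"), 0.0005),  # everything beyond 1M units
]

def monthly_bill(units_used: int) -> float:
    total, remaining = 0.0, units_used
    for tier_size, rate in TIERS:
        billable = min(remaining, tier_size)
        total += billable * rate
        remaining -= billable
        if remaining <= 0:
            break
    return round(total, 2)

print(monthly_bill(10_000))     # 10.0 -- near-zero cost to get started
print(monthly_bill(5_000_000))  # 2820.0 -- revenue scales with consumption

Note how nothing in the model caps seats or logins: the bill tracks consumption, which is the whole point of the pricing shift.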

While the industry isn’t going 100% usage-based overnight, the megatrends in software — automation, AI and APIs — all point the same way: the value of a product normally doesn’t scale with the number of logins. Usage-based pricing will be the key to successful monetization in the future. Here are four top tips to help companies scale to $100+ million ARR with this model.

1. Land-and-expand is real

Usage-based pricing now shows up in every layer of the tech stack. Though it was pioneered in the infrastructure layer (think: AWS and Azure), it’s becoming increasingly popular for API-based products and application software as well.


Image Credits: Kyle Poyar / OpenView

Some fear that investors will hate usage-based pricing because customers aren’t locked into a subscription. But, investors actually see it as a sign that customers are seeing value from a product and there’s no shelf-ware.

In fact, investors are increasingly rewarding usage-based companies in the market. Usage-based companies are trading at a 50% revenue multiple premium over their peers.

Investors especially love how the usage-based pricing model pairs with the land-and-expand business model. Of the IPOs over the last three years, seven of the nine companies with the best net dollar retention run a usage-based model. Snowflake in particular is off the charts, with 158% net dollar retention.
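Net dollar retention is simple arithmetic: a customer cohort’s recurring revenue today, including expansion and net of contraction and churn, divided by that same cohort’s revenue a year earlier. A quick sketch, with figures invented so the output matches the Snowflake number above:

def net_dollar_retention(start_arr, expansion, contraction, churn):
    # NDR above 100% means the existing customer base grew on its own,
    # before counting a single new customer.
    return (start_arr + expansion - contraction - churn) / start_arr * 100

# Invented cohort: $10M starting ARR, $7M expansion, $0.7M contraction, $0.5M churn.
print(f"{net_dollar_retention(10.0, 7.0, 0.7, 0.5):.0f}%")  # 158%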

#100-million-arr, #aws, #azure, #cloud, #column, #ec-cloud-and-enterprise-infrastructure, #ec-column, #ec-how-to, #enterprise, #pricing, #saas, #tc, #usage-based-billing


Judge denies Parler’s bid to make Amazon restore service

A federal judge has denied an attempt by conservative social network Parler to force Amazon to host it on AWS. As expected by most who read Parler’s ramshackle legal arguments, the court found nothing in the lawsuit that could justify intervention, only “faint and factually inaccurate speculation.”

In the order, filed in the Western Washington U.S. District Court, Judge Barbara Rothstein explained how little Parler actually brought to the table to support its allegations that Amazon and Twitter were engaged in antitrust collusion and that AWS had broken its contract.

On the question of antitrust, Parler fell far short of demonstrating anything at all, let alone collusion in breach of the Sherman Act.

The evidence it has submitted in support of the claim is both dwindlingly slight, and disputed by AWS. Importantly, Parler has submitted no evidence that AWS and Twitter acted together intentionally — or even at all — in restraint of trade.

…Indeed, Parler has failed to do more than raise the specter of preferential treatment of Twitter by AWS.

Amazon had explained in its filing that AWS doesn’t even host Twitter yet (though there are plans to do so), and that strict rules are in place to prevent discussing one client with another. This was more than enough to dispute Parler’s flimsy claim, Rothstein noted.

On breach of contract, Parler had essentially admitted in the course of its argument to breaching the contract on its end, but said that Amazon had broken its side of the bargain by not giving it 30 days to fix the problem, as stipulated in the customer service agreement (CSA) at Section 7.2(b)(i). Turns out that doesn’t even matter:

Parler fails to acknowledge, let alone dispute, that Section 7.2(b)(ii) — the provision immediately following — authorizes AWS to terminate the Agreement “immediately upon notice” and without providing any opportunity to cure…

So the 30-day cure period was never in play if Amazon didn’t want it to be; one imagines that the clause is meant for less immediately concerning breaches. Contract-breach argument denied.

Parler’s allegation that Amazon was “motivated by political animus” likewise holds no water, according to the judge.

Parler has failed to allege basic facts that would support several elements of this claim. Most fatally, as discussed above, it has failed to raise more than the scantest speculation that AWS’s actions were taken for an improper purpose or by improper means… To the contrary, the evidence at this point suggests that AWS’s termination of the CSA was in response to Parler’s material breach.

The company also made the argument that it would suffer “irreparable harm” if AWS services were not restored, and in fact Rothstein had no reason to doubt Parler’s claims that it may face “extinction” as a result of these circumstances. Except that “Parler’s claims to irreparable harm are substantially diminished by its admission ‘that much of that harm would be compensable by damages.’ ”

In other words, money would fix it — which means it isn’t exactly irreparable.

On other legalities and technicalities, Rothstein finds that Parler makes no case or that Amazon’s case is much stronger — for instance, that being forced to host violent and hateful content would damage AWS’s reputation, perhaps even irreparably.

As is important to note in cases like this, the judge is not ruling on the merits of the whole case, only on the arguments and evidence presented in the request for an injunction to restore services while the case proceeds.

“To be clear, the Court is not dismissing Parler’s substantive underlying claims at this time” — which is to say that it is not dismissing the substance of the claims, not asserting that they have substance. But Parler “has fallen far short” of demonstrating what it needs to in order to justify a legal intervention of that type.

The case will proceed to its next date, if indeed Parler has not faced the “extinction” it warned of by then.

Rothstein Order on Parler i… by TechCrunch

#amazon, #aws, #lawsuit, #parler


Filing: Amazon warned Parler for months about “more than 100” violent threats

The Amazon Web Services (AWS) logo on display during the Viva Technology show at Parc des Expositions Porte de Versailles on May 17, 2019, in Paris, France. Image Credits: Chesnot / Getty Images

Amazon on Tuesday brought receipts in its response to seemingly defunct social networking platform Parler’s lawsuit against it, detailing AWS’ repeated efforts to get Parler to address explicit threats of violence posted to the service.

In the wake of the violent insurrection at the US Capitol last Wednesday, AWS kicked Parler off its Web-hosting platform at midnight Sunday evening. In response, Parler filed a lawsuit accusing Amazon of breaking a contract for political reasons and colluding with Twitter to drive a competitor offline.

But the ban has nothing to do with “stifling viewpoints” or a “conspiracy” to restrain a competitor, Amazon said in its response filing (PDF). Instead, Amazon said, “This case is about Parler’s demonstrated unwillingness and inability” to remove actively dangerous content, including posts that incite and plan “the rape, torture, and assassination of named public officials and private citizens… AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.”


#amazon, #antitrust, #aws, #insurrection, #lawsuits, #parler, #policy, #section-230, #sedition


Parler sues Amazon, leveling far-fetched antitrust allegations

Parler has sued Amazon after the beleaguered conservative social media site was expelled from AWS, filing a fanciful complaint alleging the internet giant took it out for political reasons — and in an antitrust conspiracy to benefit Twitter. But its own allegations, including breach of contract, are belied by the evidence it supplies alongside the suit.

In the lawsuit, filed today in the U.S. District Court for the Western District of Washington, Parler complains that “AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus. It is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter.”

Regarding the “political animus,” it is difficult to speak to Parler’s reasoning, since that argument is supported nowhere in the suit — it simply is never referred to again.

There is the suggestion that Amazon has shown more tolerance for offending content on Twitter than on Parler, but this isn’t well substantiated. For instance, the suit notes that “Hang Mike Pence” trended on Friday the 8th, without noting that much of this volume was, as any user of Twitter can see by searching, people decrying this phrase as having been chanted by the rioters in the Capitol two days prior.

By way of contrast, one Parler post cited by Amazon says that “we need to start systematicly [sic] assasinating [sic] #liberal leaders, liberal activists, #blm leaders and supporters,” and so on. As TechCrunch has been monitoring Parler conversations, we can say that this is far from an isolated example of this rhetoric.

The antitrust argument suggests a conspiracy by Amazon to protect and advance the interests of Twitter. Specifically, the argument is that because Twitter is a major customer of AWS, and Parler is a threat to Twitter, Amazon wanted to take Parler out of the picture.

Given the context of Parler’s looming threat to Twitter and the fact that the Twitter ban might not long muzzle the President if he switched to Parler, potentially bringing tens of millions of followers with him, AWS moved to shut down Parler.

This argument is not convincing for several reasons, but the most obvious one is that Parler was at the time also an AWS customer. If users are merely moving from one customer to another, why would Amazon care at all, let alone enough to interfere to the point of legal and ethical dubiety?

The lawsuit also accuses Amazon of leaking the email communicating Parler’s imminent suspension to reporters before it was sent to administrators at the site. (It also says that Amazon “sought to defame” Parler, though defamation is not part of the legal complaint. Parler seems to be using this term rather loosely.)

Lastly, Parler says Amazon is in breach of contract, having not given the 30 days’ warning stipulated in the terms of service. The exception is if a “material breach remains uncured for a period of 30 days” after notice. As Parler explains it:

On January 8, 2021, AWS brought concerns to Parler about user content that encouraged violence. Parler addressed them, and then AWS said it was “okay” with Parler.

The next day, January 9, 2021, AWS brought more “bad” content to Parler and Parler took down all of that content by the evening.

Thus, there was no uncured material breach of the Agreement for 30 days, as required for termination.

But in the email attached as evidence to the lawsuit — literally exhibit A — Amazon makes it clear the issues have been ongoing for longer than that (emphasis added):

Over the past several weeks, we’ve reported 98 examples to Parler of posts that clearly encourage and incite violence… You remove some violent content when contacted by us or others, but not always with urgency… It’s clear that Parler does not have an effective process to comply with the AWS terms of service.

You can read the rest of the letter here, but it’s obvious that Amazon is not simply saying that a few days of violations are the cause of Parler’s being kicked off the service.

Parler asks a judge for a Temporary Restraining Order that would restore its access to AWS services while the rest of the case is argued, and for damages to be specified at trial.

TechCrunch has asked Amazon for comment and will update this post if we hear back. Meanwhile you can read the full complaint below:

Parler v Amazon by TechCrunch on Scribd

#amazon, #aws, #capitol-riots, #lawsuit, #parler


Amazon cuts off Parler’s web hosting following Apple, Google bans

Image Credits: Aurich Lawson / Getty Images

Amazon Web Services is suspending Parler’s access to its hosting services at the end of the weekend, potentially driving the service offline unless it can find a new provider.

“Because Parler cannot comply with our terms of service and poses a very real risk to public safety, we plan to suspend Parler’s account effective Sunday, January 10th, at 11:59PM PST,” Amazon wrote to Parler in an email obtained and first reported by BuzzFeed.

The email from AWS to Parler cites several examples of violent and threatening posts made in recent days, including threats to “systematically assassinate liberal leaders, liberal activists, BLM leaders and supporters,” and others. “Given the unfortunate events that transpired this past week in Washington, D.C., there is serious risk that this type of content will further incite violence,” the message adds.


#amazon, #android, #app-store, #apple, #aws, #google, #google-play, #ios, #parler, #policy


With a $50B run rate in reach, can anyone stop AWS?

AWS, Amazon’s flourishing cloud arm, has been growing at a rapid clip for more than a decade. An early public cloud infrastructure vendor, it has taken advantage of first-to-market status to become the most successful player in the space. In fact, one could argue that many of today’s startups wouldn’t have gotten off the ground without the formation of cloud companies like AWS giving them easy access to infrastructure without having to build it themselves.

In Amazon’s most recent earnings report, AWS generated revenue of $11.6 billion, good for a run rate of more than $46 billion. That makes the next AWS milestone a run rate of $50 billion, something that could be within reach in less than two quarters if the division continues its pace of revenue growth.

The good news for competing companies is that in spite of the market size and relative maturity, there is still plenty of room to grow.

The cloud division’s growth is slowing in percentage terms as it comes firmly up against the law of large numbers: AWS has to grow every quarter against an ever-larger revenue base. The result of this dynamic is that while AWS’ year-over-year growth rate is slowing over time — from 35% in Q3 2019 to 29% in Q3 2020 — the pace at which it is adding $10 billion chunks of annual revenue run rate is accelerating.
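For readers who want to check the math, here’s a minimal sketch of the run-rate arithmetic using the figures above. Smoothing the roughly 29% year-over-year growth into even quarterly compounding is a simplifying assumption; AWS’s actual growth follows no such schedule.

# Annualized run rate = latest quarter's revenue x 4.
q3_2020_revenue = 11.6  # $ billions, as reported
run_rate = q3_2020_revenue * 4
print(f"Run rate: ${run_rate:.1f}B")  # Run rate: $46.4B

# Project the quarters needed to clear a $50B run rate at ~29% YoY growth,
# approximated as even quarterly compounding.
quarterly_growth = 1.29 ** 0.25
quarters = 0
while run_rate < 50:
    run_rate *= quarterly_growth
    quarters += 1
print(f"Quarters to $50B: {quarters}")  # Quarters to $50B: 2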

At the AWS re:Invent customer conference this year, AWS CEO Andy Jassy talked about the pace of change over the years, saying that it took the following number of months to grow its run rate by $10 billion increments:

123 months ($0 to $10 billion)
23 months ($10 billion to $20 billion)
13 months ($20 billion to $30 billion)
12 months ($30 billion to $40 billion)

Image Credits: TechCrunch (data from AWS)

Extrapolating from the above trend, it should take AWS fewer than 12 months to scale from a run rate of $40 billion to $50 billion. Stating the obvious, Jassy said “the rate of growth in AWS continues to accelerate.” He also took the time to point out that AWS is now the fifth-largest enterprise IT company in the world, ahead of enterprise stalwarts like SAP and Oracle.

What’s amazing is how quickly AWS achieved that scale for a business that didn’t even exist until 2006. That trajectory makes us ask a question: Can anyone hope to stop AWS’ momentum?

The short answer is that it doesn’t appear likely.

Cloud market landscape

A good place to start is surveying the cloud infrastructure competitive landscape to see if there are any cloud companies that could catch the market leader. According to Synergy Research, AWS remains firmly in front, and it doesn’t look like any competitor could catch AWS anytime soon unless some market dynamic caused a drastic change.

Synergy Research cloud market share leaders: Amazon is first, Microsoft is second and Google is third.

Image Credits: Synergy Research

With around a third of the market, AWS is the clear front-runner; its closest and fiercest rival, Microsoft, has around 20%. To put that into perspective, last quarter AWS had $11.6 billion in revenue compared to Microsoft’s $5.2 billion Azure result. And while Microsoft’s equivalent cloud number is growing faster, at 47%, that growth rate, like AWS’, has begun to drop steadily as Azure gains market share and revenue and falls victim to the same law of large numbers.

#amazon, #aws, #cloud, #cloud-infrastructure-market, #enterprise, #google-cloud-platform, #microsoft-azure, #tc


Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

Image Credits: Google

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers that are smaller than the full-blown data centers the large clouds currently operate but that allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years, and that’s on top of the company’s heavy investment in submarine cables and edge locations. For the longest time, after all, Google Cloud Platform lagged well behind its competitors; only three years ago, Google Cloud offered just 13 regions.

#amazon-web-services, #aws, #chile, #cloud-computing, #cloud-infrastructure, #france, #germany, #google, #google-cloud-platform, #indonesia, #italy, #microsoft, #nuodb, #qatar, #salt-lake-city, #saudi-arabia, #south-korea, #spain, #tc, #united-states, #web-hosting, #web-services


AWS launches Amazon Location, a new mapping service for developers

AWS today announced the preview of Amazon Location, a new service for developers who want to add location-based features to their web-based and mobile applications.

Based on mapping data from Esri and HERE Technologies, the service provides all of the basic mapping and point-of-interest data you would expect from a mapping service, including built-in tracking and geofencing features. It does not offer a routing feature, though.

“We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications,” AWS’s Jeff Barr writes in today’s announcement. “Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.”

Image Credits: Amazon

At its core, Amazon Location provides the ability to create maps, based on the data and styles available from its partners (with more partners in the works) and access to their points of interest. Those are obviously the two core features for any mapping service. On top of this, Location also offers built-in support for trackers, so that apps can receive location updates from devices and plot them on a map. This feature can also be linked to Amazon Location’s geofencing tool so apps can send alerts when a device (or the dog that wears it) leaves a particular area.
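For a sense of the developer experience, here’s a minimal sketch using the AWS SDK for Python (boto3), which exposes the service as a “location” client. The tracker, collection and device names are invented, and since the service is in preview, the calls and required parameters may change:

import datetime
import boto3

location = boto3.client("location", region_name="us-east-1")

# Create a tracker, then report a device position to it.
# (Preview-era calls may require extra parameters, e.g. a pricing plan.)
location.create_tracker(TrackerName="demo-tracker")
location.batch_update_device_position(
    TrackerName="demo-tracker",
    Updates=[{
        "DeviceId": "dog-collar-1",
        "Position": [-122.3321, 47.6062],  # [longitude, latitude]
        "SampleTime": datetime.datetime.now(datetime.timezone.utc),
    }],
)

# Geofencing: store a polygon; linking the collection to the tracker
# (via associate_tracker_consumer) lets position updates be evaluated
# against it for enter/exit alerts.
location.create_geofence_collection(CollectionName="demo-fences")
location.put_geofence(
    CollectionName="demo-fences",
    GeofenceId="backyard",
    Geometry={"Polygon": [[
        [-122.34, 47.60], [-122.32, 47.60],
        [-122.32, 47.62], [-122.34, 47.62],
        [-122.34, 47.60],  # the ring must close on its first point
    ]]},
)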

It may not be as fully-featured as the Google Maps Platform, for example, but AWS promises that Location will be more affordable, with a variety of pricing plans (and a free three-month trial) that start at $0.04 for retrieving 1,000 map tiles. As with all things AWS, the pricing gets more complicated from there but seems quite reasonable overall.

While you can’t directly compare AWS’s tile-based pricing with Google’s plans, it’s worth noting that after you go beyond Google Map Platform’s $200 of free usage per month, static maps cost $2 per 1,000 requests.
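As a rough sanity check on those figures, here’s a back-of-the-envelope sketch. It assumes, unrealistically, that one request maps to one tile, and it ignores every pricing nuance beyond the two numbers quoted above, so treat it as illustration rather than a real cost comparison.

requests = 1_000_000  # hypothetical monthly volume

# $0.04 per 1,000 map tiles retrieved (AWS's quoted entry price).
aws_cost = requests / 1_000 * 0.04

# $2 per 1,000 static map requests, less the $200 monthly credit (Google).
google_cost = max(0, requests / 1_000 * 2 - 200)

print(f"Amazon Location: ${aws_cost:,.2f}")       # Amazon Location: $40.00
print(f"Google static maps: ${google_cost:,.2f}")  # Google static maps: $1,800.00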

After a number of pricing changes, Google’s mapping services lost a lot of goodwill from developers. AWS may be able to capitalize on this with this new platform, especially if it continues to build out its feature set to fill in some of the current gaps in the service.


#amazon, #amazon-web-services, #aws, #cloud, #cloud-infrastructure, #computing, #developer, #esri, #google, #google-maps, #information, #jeff-barr, #software, #tc


AWS launches CloudShell, a web-based shell for command-line access to AWS

AWS today launched CloudShell, a new, fully-featured web-based shell environment, based on Amazon Linux 2, for developers who want to be able to use some of their favorite command-line tools — and scripts — right inside the AWS Console.

CloudShell, Amazon CTO Werner Vogels explained in his announcement today, is a new browser-based service that gives developers access to a Linux console. When users start a new CloudShell session, it is automatically pre-configured with the same API permissions as their user in the AWS Console.

AWS CloudShell

Image Credits: AWS

“This means you don’t have to manage multiple profiles or API credentials for different test and production environments like you would normally have if you worked in a terminal,” Vogels said. “With these credentials automatically forwarded, it is simple to start a new CloudShell session and use the pre-installed AWS tools right away.”

All of the usual AWS command-line tools will also be pre-installed and ready to go, in addition to Bash, Python, Node.js, PowerShell, VIM, git and more. That also means you’ll be able to install your own favorite tools, too. The OS won’t persist between sessions, so if you break something, you can just restart, but you will get up to 1GB of persistent storage to work with.
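To see that credential forwarding in action, here’s a tiny sketch you could run in a fresh CloudShell session. It assumes boto3 is among the pre-installed Python tooling (a pip install gets it if not):

# Run inside CloudShell: no `aws configure`, no ~/.aws/credentials file,
# because the session inherits the console user's permissions.
import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])  # the same IAM identity that signed in to the console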

Image Credits: AWS

Users can have up to 10 concurrent shells running in each region for free. Developers who need more will have to request an increase.

The new service is now available in AWS’s US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Tokyo) regions, with more to follow.

It’s worth noting that AWS competitors Google Cloud Platform and Microsoft Azure already offer similar services as well — and Google also calls it Cloud Shell, but with a space between the two words.

#aws, #aws-reinvent-2020, #cloud, #command-line, #developer, #developers
