Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detection, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft is also launching a hardware development kit today with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including the Trusted Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure Cognitive Services and machine learning models, and Percept devices will automatically connect to Azure’s IoT Hub.

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.

#articles, #artificial-intelligence, #azure, #cloud, #cloud-computing, #cloud-infrastructure, #enterprise, #machine-learning, #microsoft, #microsoft-ignite-2021, #microsoft-azure, #perception, #philosophy, #platform, #product-manager, #software-platform, #tc

Microsoft’s Azure Arc multi-cloud platform now supports machine learning workloads

With Azure Arc, Microsoft offers a service that allows its customers to run Azure in any Kubernetes environment, no matter where that container cluster is hosted. From Day One, Arc supported a wide range of use cases, but one feature that was sorely missing when it first launched was support for machine learning (ML). But one of the advantages of a tool like Arc is that it allows enterprises to run their workloads close to their data and today, that often means using that data to train ML models.

At its Ignite conference, Microsoft today announced that it is bringing exactly this capability to Azure Arc with the addition of Azure Machine Learning to the set of Arc-enabled data services.

“By extending machine learning capabilities to hybrid and multicloud environments, customers can run training models where the data lives while leveraging existing infrastructure investments. This reduces data movement and network latency, while meeting security and compliance requirements,” Azure GM Arpan Shah writes in today’s announcement.

This new capability is now available to Arc customers.

In addition to bringing this new machine learning capability to Arc, Microsoft also today announced that Azure Arc-enabled Kubernetes, which allows users to deploy standard Kubernetes configurations to their clusters anywhere, is now generally available.

Also new in this world of hybrid Azure services is support for Azure Kubernetes Service on Azure Stack HCI. That’s a mouthful, but Azure Stack HCI is Microsoft’s platform for running Azure on a set of standardized, hyperconverged hardware inside a customer’s datacenter. The idea pre-dates Azure Arc, but it remains a plausible alternative for enterprises that want to run Azure in their own data center and has continued support from vendors like Dell, Lenovo, HPE, Fujitsu and DataOn.

On the open-source side of Arc, Microsoft also today stressed that Arc is built to work with any Kubernetes distribution that is conformant to the standard of the Cloud Native Computing Foundation (CNCF) and that it has worked with Red Hat, Canonical, Rancher and now Nutanix to test and validate their Kubernetes implementations on Azure Arc.

#cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #computing, #dell, #fujitsu, #hpe, #kubernetes, #lenovo, #machine-learning, #microsoft, #microsoft-ignite-2021, #microsoft-azure, #ml, #nutanix, #red-hat, #redhat, #tc

Google Cloud puts its Kubernetes Engine on autopilot

Google Cloud today announced a new operating mode for its Kubernetes Engine (GKE) that turns over the management of much of the day-to-day operations of a container cluster to Google’s own engineers and automated tools. With Autopilot, as the new mode is called, Google manages all of the Day 2 operations of managing these clusters and their nodes, all while implementing best practices for operating and securing them.

This new mode augments the existing GKE experience, which already manages much of the infrastructure work of standing up a cluster. This ‘standard’ experience, as Google Cloud now calls it, is still available and allows users to customize their configurations to their heart’s content and manually provision and manage their node infrastructure.

Drew Bradstock, the Group Product Manager for GKE, told me that the idea behind Autopilot was to combine all of the tools that Google already had for GKE with the expertise of its SRE teams, who know how to run these clusters in production — and have long done so inside of the company.

“Autopilot stitches together auto-scaling, auto-upgrades, maintenance, Day 2 operations and — just as importantly — does it in a hardened fashion,” Bradstock noted. “[…] What this has allowed our initial customers to do is very quickly offer a better environment for developers or dev and test, as well as production, because they can go from Day Zero and the end of that five-minute cluster creation time, and actually have Day 2 done as well.”

Image Credits: Google

From a developer’s perspective, nothing really changes here, but this new mode does free up teams to focus on the actual workloads and less on managing Kubernetes clusters. With Autopilot, businesses still get the benefits of Kubernetes, but without all of the routine management and maintenance work that comes with that. And that’s definitely a trend we’ve been seeing as the Kubernetes ecosystem has evolved. Few companies, after all, see their ability to effectively manage Kubernetes as their real competitive differentiator.

All of that comes at a price, of course, at a flat fee of $0.10 per cluster per hour (there’s also a free GKE tier that provides $74.40 in billing credits), plus, of course, the usual fees for resources that your clusters consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods in multiple zones.

Autopilot for GKE joins a set of container-centric products in the Google Cloud portfolio that also include Anthos for running in multi-cloud environments and Cloud Run, Google’s serverless offering. “[Autopilot] is really [about] bringing the automation aspects in GKE we have for running on Google Cloud, and bringing it all together in an easy-to-use package, so that if you’re newer to Kubernetes, or you’ve got a very large fleet, it drastically reduces the amount of time, operations and even compute you need to use,” Bradstock explained.

And while GKE is a key part of Anthos, that service is more about bringing Google’s config management, service mesh and other tools to an enterprise’s own data center. Autopilot for GKE is, at least for now, only available on Google Cloud.

“On the serverless side, Cloud Run is really, really great for an opinionated development experience,” Bradstock added. “So you can get going really fast if you want an app to be able to go from zero to 1000 and back to zero — and not worry about anything at all and have it managed entirely by Google. That’s highly valuable and ideal for a lot of development. Autopilot is more about simplifying the entire platform people work on when they want to leverage the Kubernetes ecosystem, be a lot more in control and have a whole bunch of apps running within one environment.”

 

#cloud, #cloud-computing, #cloud-infrastructure, #containers, #enterprise, #gke, #google, #kubernetes, #tc

Jamaica’s Amber Group fixes second JamCOVID security lapse

Amber Group has fixed a second security lapse that exposed private keys and passwords for the government’s JamCOVID app and website.

A security researcher told TechCrunch on Sunday that the Amber Group left a file on the JamCOVID website by mistake, which contained passwords that would have granted access to the backend systems, storage, and databases running the JamCOVID site and app. The researcher asked not to be named for fears of legal repercussions from the Jamaican government.

This file, known as an environment variables (.env) file, is often used to store private keys and passwords for third-party services that are necessary for cloud applications to run. These files are sometimes inadvertently exposed or uploaded by mistake and, if found by a malicious actor, can be abused to gain access to data or services that the cloud application relies on.
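
For context, a .env file is just plain text with one KEY=value pair per line that an application reads into its environment when it starts. The sketch below is purely illustrative; the variable names and the python-dotenv loader are assumptions, not details of the actual JamCOVID configuration:

```python
# Illustrative only: these variable names and values are invented,
# not the contents of the exposed JamCOVID file.
#
# A .env file is plain text, one KEY=value pair per line, for example:
#   AWS_ACCESS_KEY_ID=AKIA...
#   AWS_SECRET_ACCESS_KEY=...
#   SMS_GATEWAY_PASSWORD=...
#
# Applications typically load it into environment variables at startup:
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv(".env")  # reads the file and populates os.environ
db_password = os.environ.get("DATABASE_PASSWORD")  # hypothetical variable name

# Because the file holds live credentials, serving it from a public web
# directory hands those credentials to anyone who requests the URL.
```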

The exposed environment variables file was found in an open directory on the JamCOVID website. Although the JamCOVID domain appears to be on the Ministry of Health’s website, Amber Group controls and maintains the JamCOVID dashboard, app, and website.

The exposed file contained secret credentials for the Amazon Web Services databases and storage servers for JamCOVID. The file also contained a username and password to the SMS gateway used by JamCOVID to send text messages, and credentials for its email-sending server. (TechCrunch did not test or use any of the passwords or keys as doing so would be unlawful.)

A portion of the exposed credentials found on the JamCOVID website, controlled and maintained by Amber Group. (Image: TechCrunch)

TechCrunch contacted Amber Group’s chief executive Dushyant Savadia to alert the company to the security lapse; he pulled the exposed file offline a short time later. We also asked Savadia, who did not comment, to revoke and replace the keys.

Matthew Samuda, a minister in Jamaica’s Ministry of National Security, did not respond to a request for comment or to our questions — including whether the Jamaican government plans to continue its contract or relationship with Amber Group, and what — if any — security requirements were agreed upon by the Amber Group and the Jamaican government for the JamCOVID app and website.

Details of the exposure come just days after Escala 24×7, a cybersecurity firm based in the Caribbean, claimed that it had found no vulnerabilities in the JamCOVID service following the initial security lapse.

Escala’s chief executive Alejandro Planas declined to say if his company was aware of the second security lapse prior to its comments last week, saying only that his company was under a non-disclosure agreement and “is not able to provide any additional information.”

This latest security incident comes less than a week after Amber Group secured a passwordless cloud server hosting immigration records and negative COVID-19 test results for hundreds of thousands of travelers who visited the island over the past year. Travelers visiting the island are required to upload their COVID-19 test results in order to obtain a travel authorization before their flights. Many of the victims whose information was exposed on the server are Americans.

One news report recently quoted Amber’s Savadia as saying that the company developed JamCOVID19 “within three days.”

Neither the Amber Group nor the Jamaican government has commented to TechCrunch, but Samuda told local radio that the government has launched a criminal investigation into the security lapse.


#amazon-web-services, #caribbean, #cloud-applications, #cloud-computing, #cloud-infrastructure, #cryptography, #government, #operating-systems, #password, #securedrop, #security, #signal, #sms, #software

Microsoft announces the next perpetual release of Office

If you use Office, Microsoft would really, really, really like you to buy a cloud-enabled subscription to Microsoft 365 (formerly Office 365). But as the company promised, it will continue to make a stand-alone, perpetual license for Office available for the foreseeable future. A while back, it launched Office 2019, which includes the standard suite of Office tools, but is frozen in time and without the benefit of the regular feature updates and cloud-based tools that come with the subscription offering.

Today, Microsoft is announcing what is now called the Microsoft Office LTSC (Long Term Servicing Channel). It’ll be available as a commercial preview in April and will be available on both Mac and Windows, in both 32-bit and 64-bit versions.

And like with the previous version, it’s clear that Microsoft would really prefer if you just moved to the cloud already. But it also knows that not everybody can do that, so it now calls this version with its perpetual license that you pay for once and then use for as long as you want to (or have compatible hardware) a “specialty product for specific scenarios.” Those scenarios, Microsoft says, include situations where you have a regulated device that can’t accept feature updates for years at a time, process control devices on a manufacturing floor and other devices that simply can’t be connected to the internet.

“We expect that most customers who use Office LTSC won’t do it across their entire organization, but only in specific scenarios,” Microsoft’s CVP for Microsoft 365, Jared Spataro, writes in today’s announcement.

Because it’s a specialty product, Microsoft will also raise the price for Office Professional Plus, Office Standard, and the individual Office apps by up to 10%.

“To fuel the work of the future, we need the power of the cloud,” writes Spataro. “The cloud is where we invest, where we innovate, where we discover the solutions that help our customers empower everyone in their organization – even as we all adjust to a new world of work. But we also acknowledge that some of our customers need to enable a limited set of locked-in-time scenarios, and these updates reflect our commitment to helping them meet this need.”

If you have one of these special use cases, the price increase will not likely deter you and you’ll likely be happy to hear that Microsoft is committing to another release in this long-term channel in the future, too.

As for the new features in this release, Spataro notes that it will have dark mode support, new capabilities like Dynamic Arrays and XLOOKUP in Excel, and performance improvements across the board. One other change worth calling out is that it will ship with the Microsoft Teams app instead of Skype for Business (though you can still download Skype for Business if you need it).

#cloud, #cloud-computing, #computing, #jared-spataro, #microsoft, #microsoft-365, #microsoft-office, #office-365, #operating-systems, #software, #subscription-services, #windows-10

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves re-inventing the wheel to make their applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools that developers need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communications, state management, pub/sub and secrets management.
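
To make the building-blocks idea concrete, here is a minimal sketch of an application using Dapr’s state-management block over its HTTP API. It assumes a Dapr sidecar running on its default port (3500) and a configured state store component named "statestore"; the key and value are invented for illustration:

```python
import requests

DAPR_PORT = 3500          # default HTTP port for the Dapr sidecar
STORE = "statestore"      # name of a configured state store component (assumption)
BASE = f"http://localhost:{DAPR_PORT}/v1.0/state/{STORE}"

# Save state: the app only talks to its local sidecar; Dapr handles the actual
# backing store (Redis, Cosmos DB, etc.) behind the scenes.
requests.post(BASE, json=[{"key": "order-42", "value": {"status": "shipped"}}])

# Read the value back by key.
resp = requests.get(f"{BASE}/order-42")
print(resp.json())  # {'status': 'shipped'}
```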

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

#alibaba, #alibaba-cloud, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #computing, #developer, #enterprise, #google, #hashicorp, #mark-russinovich, #microservices, #microsoft, #microsoft-azure, #new-relic, #serverless-computing, #tc

Databricks brings its lakehouse to Google Cloud

Databricks and Google Cloud today announced a new partnership that will bring to Databricks customers a deep integration with Google’s BigQuery platform and Google Kubernetes Engine. This will allow Databricks’ users to bring their data lakes and the service’s analytics capabilities to Google Cloud.

Databricks already features a deep integration with Microsoft Azure — one that goes well beyond this new partnership with Google Cloud — and the company is also an AWS partner. By adding Google Cloud to this list, the company can now claim to be the “only unified data platform available across all three clouds (Google, AWS and Azure).”

It’s worth stressing, though, that Databricks’ Azure integration is a bit of a different deal from this new partnership with Google Cloud. “Azure Databricks is a first-party Microsoft Azure service that is sold and supported directly by Microsoft. The first-party service is unique to our Microsoft partnership. Customers on Google Cloud will purchase directly from Databricks through the Google Cloud Marketplace,” a company spokesperson told me. That makes it a bit more of a run-of-the-mill partnership compared to the Microsoft deal, but that doesn’t mean the two companies aren’t just as excited about it.

“We’re delighted to deliver Databricks’ lakehouse for AI and ML-driven analytics on Google Cloud,” said Google Cloud CEO Thomas Kurian (or, more likely, one of the company’s many PR specialists who likely wrote and re-wrote this for him a few times before it got approved). “By combining Databricks’ capabilities in data engineering and analytics with Google Cloud’s global, secure network—and our expertise in analytics and delivering containerized applications—we can help companies transform their businesses through the power of data.”

Similarly, Databricks CEO Ali Ghodsi noted that he is “thrilled to partner with Google Cloud and deliver on our shared vision of a simplified, open, and unified data platform that supports all analytics and AI use-cases that will empower our customers to innovate even faster.”

And indeed, this is clearly a thrilling delight for everybody around, including customers like Conde Nast, whose Director of Data Engineering Nana Essuman is “excited to see leaders like Google Cloud and Databricks come together to streamline and simplify getting value from data.”

If you’re also thrilled about this, you’ll be able to hear more about it from both Ghodsi and Kurian at an event on April 6 that is apparently hosted by TechCrunch (though this is the first I’ve heard of it, too).

#ali-ghodsi, #artificial-intelligence, #aws, #bigquery, #cloud-computing, #cloud-infrastructure, #computing, #conde-nast, #databricks, #google, #google-cloud, #microsoft, #microsoft-azure, #partner, #tc, #thomas-kurian

This Cloud Computing Billing Expert Is Very Funny. Seriously.

Corey Quinn has made it his business to understand Amazon’s cloud-computing charges and have some fun at the company’s expense.

#amazon-com-inc, #blogs-and-blogging-internet, #cloud-computing, #computers-and-the-internet, #prices-fares-fees-and-rates, #university-of-maine

What Is Amazon Web Services?

Andy Jassy, the successor to Jeff Bezos, was already running the company’s most profitable business.

#amazon-com-inc, #bezos-jeffrey-p, #cloud-computing, #jassy-andrew-r

LyteLoop raises $40 million to launch satellites that use light to store data

Soon, your cloud photo backups could reside on beams of light transmitted between satellites instead of in huge, power-hungry server farms here on Earth. Startup LyteLoop has spent the past five years tackling the physics challenges that can make that possible, and now it’s raised $40 million to help it leapfrog the remaining engineering hurdles to make its bold vision a reality.

LyteLoop’s new funding will provide it with enough runway to achieve its next major milestone: putting three prototype satellites equipped with its novel data storage technology into orbit within the next three years. The company intends to build and launch six of these, which will demonstrate how its laser-based storage medium operates on orbit.

I spoke to LyteLoop CEO Ohad Harlev about the company’s progress, technology and plans. Harlev said five years into its founding, the company is very confident in the science that underlies its data storage methods – and thrilled about the advantages it could offer over traditional data warehousing technology used today. Security, for instance, gets a big boost from LyteLoop’s storage paradigm.

“Everybody on every single data center has the same possible maximum level of data security,” he said. “We can provide an extra four layers of cyber security, and they’re all physics-based. Anything that can be applied on Earth, we can apply in our data center, but for example, the fact that we’re storing data on photons, we could put in quantum encryption, which others can’t. Plus, there are big security benefits because the data is in motion, in space, and moving at the speed of light.”

On top of security, LyteLoop’s model also offers benefits when it comes to privacy, because the data it’s storing is technically always in transit between satellites, which means it’ll be subject to an entirely different set of regulations vs. those that come into play when you’re talking about data which is warehoused on drives in storage facilities. LyteLoop also claims advantages in terms of access, because the storage and the network are one and the same, with the satellites able to provide their information to ground stations anywhere on Earth. Finally, Harlev points out that it’s incredibly power efficient, and also ecologically sound in terms of not requiring millions of gallons of water for cooling, both significant downsides of our current data center storage practices.

On top of all of that, Harlev says that LyteLoop’s storage will not only be cost-competitive with current cloud-based storage solutions, but will in fact be more affordable – even without factoring in likely decreases to come in launch costs as SpaceX iterates on its own technology and more small satellite launch providers, including Virgin Orbit and Rocket Lab, come online and expand their capacity.

“Although it’s more expensive to build and launch the satellite, it is still a lot cheaper to maintain them in the space,” he said. “So when we do a total cost of ownership calculation, we are cheaper, considerably cheaper, on a total cost of ownership basis. However […] when we compare what the actual users can do, you know, we can definitely go to completely different pricing model.”

Harlev is referring to the possibility of bundled pricing for combining storage and delivery – other providers would require that you supply the network, for instance, in order to move the data you’re storing. LyteLoop’s technology could also offset existing spend on reducing a company’s carbon footprint, because of its much-reduced ecological impact.

The company is focused squarely on getting its satellites to market, with a plan to take its proof of concept and expand that to a full production satellite roughly five years from now, with an initial service offering made available at that time. But LyteLoop’s tech could have equally exciting applications here on Earth. Harlev says that if you created a LyteLoop data center roughly the size of a football field, it would be roughly 500 times as efficient at storing data as traditional data warehousing.

The startup’s technology, which essentially stores data on photons instead of physical media, just requires far less matter than do our current ways of doing things, which not only helps its environmental impact, but which also makes it a much more sensible course for in-space storage when compared to physical media. The launch business is all about optimizing mass to orbit in order to reduce costs, and as Harlev notes, photons are massless.

#aerospace, #ceo, #cloud-computing, #computing, #data-management, #elon-musk, #funding, #hyperloop, #laser, #physical-media, #quantum-encryption, #recent-funding, #rocket-lab, #satellite, #small-satellite, #space, #spaceflight, #spacex, #startup, #startups, #tc, #technology, #virgin-orbit

Twitter expands Google Cloud partnership to ‘learn more from data, move faster’

Twitter is upping its data analytics game in the form of an expanded, multiyear partnership with Google Cloud.

The social media giant first began working with Google in 2018 to move Hadoop clusters to the Google Cloud platform as a part of its Partly Cloudy strategy.

With the expanded agreement, Twitter will move its offline analytics, data processing and machine learning workloads to Google’s Data Cloud.

I talked with Sudhir Hasbe, Google Cloud’s director of product management and data analytics, to better understand just what this means. He said the move will give Twitter the ability to analyze data faster as part of its goal to provide a better user experience.

You see, behind every tweet, like and retweet, there is a series of data points that helps Twitter understand things like just how people are using the service, and what type of content they might want to see.

Twitter’s data platform ingests trillions of events, processes hundreds of petabytes of data and runs tens of thousands of jobs on over a dozen clusters daily. 

By expanding its partnership with Google, Twitter is essentially adopting the company’s Data Cloud, including BigQuery, Dataflow, BigTable and machine learning (ML) tools to make more sense of, and improve, how Twitter features are used.
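
Much of that analysis runs through BigQuery in practice. As a rough sketch of what that looks like from a data team’s side (the project, table and column names below are hypothetical, not Twitter’s actual schema), here is a query issued through Google’s Python client:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table and columns, purely for illustration.
query = """
    SELECT event_type, COUNT(*) AS events
    FROM `my-project.analytics.engagement_events`
    WHERE event_date = CURRENT_DATE()
    GROUP BY event_type
    ORDER BY events DESC
"""

for row in client.query(query):  # runs the job and iterates over the results
    print(row.event_type, row.events)
```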

Twitter declined a request for an interview but CTO Parag Agrawal said in a written statement that the company’s initial partnership was successful and led to enhanced productivity on the part of its engineering teams.  

“Building on this relationship and Google’s technologies will allow us to learn more from our data, move faster and serve more relevant content to the people who use our service every day,” he said.

Google Cloud’s Hasbe believes that organizations like Twitter need a highly scalable analytics platform so they can derive value from all their data collecting. By expanding its partnership with Google, Twitter is able to add significantly more use cases out of its cloud platform.

“Our platform is serverless and we can help organizations, like Twitter, automatically scale up and down,” Hasbe told TechCrunch.

“Twitter can bring massive amounts of data, analyze and get insights without the burden of having to worry about infrastructure or capacity management or how many machines or servers they might need,” he added. “None of that is their problem.” 

The shift will also make it easier for Twitter’s data scientists and other similar personnel to build machine learning models and do predictive analytics, according to Hasbe.

Other organizations that have recently turned to Google Cloud to help navigate the pandemic include Bed Bath & Beyond, Wayfair, Etsy and The Home Depot.

On February 2, TC’s Frederic Lardinois reported that while Google Cloud is seeing accelerated revenue growth, its losses are also increasing. This week, Google disclosed operating income/loss for its Google Cloud business unit in its quarterly earnings. Google Cloud lost $5.6 billion in Google’s fiscal year 2020, which ended December 31. That’s on $13 billion of revenue.

#apache-hadoop, #cloud, #cloud-computing, #cloud-infrastructure, #data-analysis, #data-processing, #google-cloud, #google-cloud-platform, #machine-learning, #twitter

Google Cloud launches Apigee X, the next generation of its API management platform

Google today announced the launch of Apigee X, the next major release of the Apigee API management platform it acquired back in 2016.

“If you look at what’s happening — especially after the pandemic started in March last year — the volume of digital activities has gone up in every kind of industry, all kinds of use cases are coming up. And one of the things we see is the need for a really high-performance, reliable, global digital transformation platform,” Amit Zavery, Google Cloud’s head of platform, told me.

He noted that the number of API calls has gone up 47 percent from last year and that the platform now handles about 2.2 trillion API calls per year.

At the core of the updates are deeper integrations with Google Cloud’s AI, security and networking tools. In practice, this means Apigee users can now deploy their APIs across 24 Google Cloud regions, for example, and use Google’s caching services in more than 100 edge locations.
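
From an API consumer’s perspective, an Apigee-fronted API is just an HTTPS endpoint, with the proxy enforcing policies such as API-key verification before traffic reaches the backend. A minimal sketch follows; the hostname, path and header name are invented, since the actual key header depends on how a given proxy’s verification policy is configured:

```python
import requests

# Hypothetical proxy endpoint and API key, for illustration only.
url = "https://api.example.com/v1/orders"
headers = {"x-apikey": "YOUR_API_KEY"}  # header name depends on the proxy's policy

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()  # the proxy rejects requests that fail policy checks
print(resp.json())
```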

Image Credits: Google

In addition, Apigee X now integrates with Google’s Cloud Armor firewall and its Cloud Identity Access Management platform. This also means that Apigee users won’t have to use third-party tools for their firewall and identity management needs.

“We do a lot of AI/ML-based anomaly detection and operations management,” Zavery explained. “We can predict any kind of malicious intent or any other things which might happen to those API calls or your traffic by embedding a lot of those insights into our API platform. I think [that] is a big improvement, as well as new features, especially in operations management, security management, vulnerability management and making those a core capability so that as a business, you don’t have to worry about all these things. It comes with the core capabilities and that is really where the front doors of digital front-ends can shine and customers can focus on that.”

The platform now also makes better use of Google’s AI capabilities to help users identify anomalies or predict traffic for peak seasons. The idea here is to help customers automate a lot of these routine operational tasks and, of course, improve security at the same time.

As Zavery stressed, API management is now about more than just managing traffic between applications. But more than just helping customers manage their digital transformation projects, the Apigee team is now thinking about what it calls ‘digital excellence.’ “That’s how we’re thinking of the journey for customers moving from not just ‘hey, I can have a front end,’ but what about all the excellent things you want to do and how we can do that,” Zavery said.

“During these uncertain times, organizations worldwide are doubling-down on their API strategies to operate anywhere, automate processes, and deliver new digital experiences quickly and securely,” said James Fairweather, Chief Innovation Officer at Pitney Bowes. “By powering APIs with new capabilities like reCAPTCHA Enterprise, Cloud Armor (WAF), and Cloud CDN, Apigee X makes it easy for enterprises like us to scale digital initiatives, and deliver innovative experiences to our customers, employees and partners.”

#api, #apigee, #artificial-intelligence, #caching, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #firewall, #google, #google-cloud, #google-cloud-platform

How Andy Jassy, Amazon’s Next C.E.O., Was a ‘Brain Double’ for Jeff Bezos

Mr. Jassy, who will become Amazon’s chief this summer, has spent more than two decades absorbing lessons from Mr. Bezos.

#amazon-com-inc, #appointments-and-executive-changes, #bezos-jeffrey-p, #cloud-computing, #computers-and-the-internet, #e-commerce, #executives-and-management-theory, #facial-recognition-software, #jassy-andrew-r, #labor-and-jobs

What Comes Next For Amazon Without Jeff Bezos?

Amazon has reimagined entire industries. What happens next to the $1.7 trillion company?

#amazon-com-inc, #bezos-jeffrey-p, #cloud-computing, #computers-and-the-internet

Alibaba Cloud turns profitable after 11 years

Alibaba Cloud, the cloud computing arm of Chinese e-commerce giant Alibaba, became profitable for the first time in the December quarter, the company announced in its earnings report.

The firm’s cloud unit achieved positive adjusted EBITA (earnings before interest, taxes, and amortization) during the quarter, after being in business since 2009. The milestone is in part a result of the “realization of economies of scale,” Alibaba said.

Alibaba Cloud, which incorporates everything from database, storage, big data analytics, security, machine learning to IoT services, has dominated China’s cloud infrastructure market for the past few years and its market share worldwide continues to grow. As of 2019, the cloud behemoth was the third-largest public cloud company (providing infrastructure-as-a-service) in the world with a 9% market share, trailing behind Amazon and Microsoft, according to Gartner.

COVID-19 has been a boon to cloud and digital adoption around the world as the virus forces offline activities online. For instance, Alibaba notes in its earnings that demand for digitalization in the restaurant and service industry remains strong in the post-COVID period in China, a trend that benefits its food delivery and on-demand services app, Ele.me. The firm’s cloud revenue grew to $2.47 billion in the December quarter, primarily driven by “robust growth in revenue from customers in the internet and retail industries and the public sector.”

Commerce remained Alibaba’s largest revenue driver in the quarter accounting for nearly 70% of revenue, while cloud contributed 7%.

Tencent’s cloud segment is Alibaba Cloud’s closest rival. As of 2019, it had a 2.8% market share globally, according to Gartner. The industry in China still has ample room for growth, as Alibaba executive vice-chairman Joe Tsai pointed out in an analyst call last August.

“Based on the third-party studies that we’ve seen, the China cloud market is going to be somewhere in the $15 billion to $20 billion total size range, and the U.S. market is about eight times that. So the China market is still at a very early stage,” said Tsai.

“We feel very good, very comfortable to be in the China market and just being an environment of faster digitization and faster growth of usage of cloud from enterprises because we’re growing from such a smaller base, about one-eighth the base of that of the U.S. market.”

A key strategy to grow Alibaba Cloud is the integration of cloud into Alibaba’s enterprise chat app Dingtalk, which the company hopes can drive industries across the board onto cloud services. It’s a relationship that echoes that between Microsoft 365 and Azure, as president of Alibaba Cloud, Zhang Jianfeng, previously suggested in an interview.

“We don’t want to just provide cloud in terms of infrastructure services,” said Alibaba CEO Daniel Zhang in the August earnings call. “If we just do it as an infrastructure service, as SaaS services, then price competition is inevitable, and then all the cloud service is more like a commodity business. Today, Alibaba’s cloud is cloud plus intelligence services, and it’s about cloud plus the power of the data usage.”

#alibaba, #alibaba-cloud, #asia, #china, #cloud, #cloud-computing, #earnings, #tc

Google Cloud lost $5.6B in 2020

Google continues to bet heavily on Google Cloud and while it is seeing accelerated revenue growth, its losses are also increasing. For the first time, Google today disclosed operating income/loss for its Google Cloud business unit in its quarterly earnings. Google Cloud lost $5.6 billion in Google’s fiscal year 2020, which ended December 31. That’s on $13 billion of revenue.

While this may look a bit dire at first glance (cloud computing should be pretty profitable, after all), there are different ways of looking at this. On the one hand, losses are mounting, up from $4.3 billion in 2018 and $4.6 billion in 2019, but revenue is also seeing strong growth, up from $5.8 billion in 2018 and $8.9 billion in 2019. What we’re seeing here, more than anything else, is Google investing heavily in its cloud business.

Google’s Cloud unit, led by its CEO Thomas Kurian, includes all of its cloud infrastructure and platform services, as well as Google Workspace (which you probably still refer to as G Suite). And that’s exactly where Google is making a lot of investments right now. Data centers, after all, don’t come cheap and Google Cloud launched four new regions in 2020 and started work on others. That’s on top of its investment in its core services and a number of acquisitions.

Image Credits: Google

“Our strong fourth quarter performance, with revenues of $56.9 billion, was driven by Search and YouTube, as consumer and business activity recovered from earlier in the year,” Ruth Porat, CFO of Google and Alphabet, said. “Google Cloud revenues were $13.1 billion for 2020, with significant ongoing momentum, and we remain focused on delivering value across the growth opportunities we see.”

For now, though, Google’s core business, which saw a strong rebound in its advertising business in the last quarter, is subsidizing its cloud expansion.

Meanwhile, over in Seattle, AWS today reported revenue of $12.74 billion in the last quarter alone and operating income of $3.56 billion. For 2020, AWS’s operating income was $13.5 billion.

#alphabet, #amazon-web-services, #artificial-intelligence, #aws, #ceo, #cfo, #cloud-computing, #cloud-infrastructure, #companies, #computing, #diane-greene, #earnings, #google, #google-cloud, #google-cloud-platform, #ruth-porat, #seattle, #thomas-kurian, #world-wide-web

Cloud infrastructure startup CloudNatix gets $4.5 million seed round led by DNX Ventures

CloudNatix founder and chief executive officer Rohit Seth

CloudNatix, a startup that provides infrastructure for businesses with multiple cloud and on-premise operations, announced it has raised $4.5 million in seed funding. The round was led by DNX Ventures, an investment firm that focuses on United States and Japanese B2B startups, with participation from Cota Capital. Existing investors Incubate Fund, Vela Partners and 468 Capital also contributed.

The company also added DNX Ventures managing partner Hiro Rio Maeda to its board of directors.

CloudNatix was founded in 2018 by chief executive officer Rohit Seth, who previously held lead engineering roles at Google. The company’s platform helps businesses reduce IT costs by analyzing their infrastructure spending and then using automation to make IT operations across multiple clouds more efficient. The company’s typical customer spends between $500,000 and $50 million on infrastructure each year and uses at least one cloud service provider in addition to on-premise networks.

Built on open-source software like Kubernetes and Prometheus, CloudNatix works with all major cloud providers and on-premise networks. For DevOps teams, it helps configure and manage infrastructure that runs both legacy and modern cloud-native applications, and enables them to transition more easily from on-premise networks to cloud services.

CloudNatix competes most directly with VMware and Red Hat OpenShift. But both of those services are limited to their base platforms, while CloudNatix’s advantage is that it is agnostic to base platforms and cloud service providers, Seth told TechCrunch.

The company’s seed round will be used to scale its engineering, customer support and sales teams.

 

#cloud-computing, #cloud-infrastructure, #cloudnatix, #enterprise, #fundings-exits, #startups, #tc

An argument against cloud-based applications

In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that has to be meticulously explained with an impertinent scoff at our own age. We live within our smartphones, within our apps.

While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.

This line we straddle has been drawn with recklessness and calculation by big tech companies over the years as we’ve come to terms with what app manufacturers, large technology companies, and app stores demand of us.

Our private data into the cloud

According to Symantec, 89% of our Android apps and 39% of our iOS apps require access to private information. This risky use sends our data to cloud servers, to both amplify the performance of the application (think about the data needed for fitness apps) and store data for advertising demographics.

While large data companies would argue that data is not held for long, or not used in a nefarious manner, when we use the apps on our phones, we create an undeniable data trail. Companies generally keep data on the move, and servers around the world are constantly keeping data flowing, further away from its source.

Once we accept the terms and conditions we rarely read, our private data is no longer such. It is in the cloud, a term which has eluded concrete understanding throughout the years.

A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while argued against ad nauseam over the years, is generally considered to be a secure and cost-effective option for many businesses.

Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.

Cloudy with a chance of confusion

To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smart phone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.

Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.
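
The pattern the author is describing, encrypting data with one key and then encrypting that key with another, is often called envelope encryption. A minimal sketch in Python with the cryptography library shows the regress; the plaintext and key handling here are purely illustrative:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# A data key encrypts the actual data...
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"my private note")

# ...and a wrapping key encrypts the data key.
wrapping_key = Fernet.generate_key()
wrapped_data_key = Fernet(wrapping_key).encrypt(data_key)

# The wrapping key itself still has to live somewhere (on disk, in a config
# file, or in a key-management service), which is exactly the regress the
# author points at: at some level, one key ends up not being encrypted.
print(len(ciphertext), len(wrapped_data_key))
```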

The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.

App developers have to find a method of service delivery that does not require storage of personal data. There are two sides to this. The first is creating algorithms that can function on a local basis, rather than centralized and mixed with other data sets. The second is a shift in the general attitude of the industry, one in which free services are provided for the cost of your personal data (which ultimately is used to foster marketing opportunities).

Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.

Clearing the clouds of future privacy

What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, to our camera and so on. Everything within our phone is connected, it’s why Facebook seems to know everything about us, down to what’s in our bank account.

This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.

The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.

The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.

#big-data, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #column, #opinion, #privacy, #security

Stacklet raises $18M for its cloud governance platform

Stacklet, a startup that is commercializing the Cloud Custodian open-source cloud governance project, today announced that it has raised an $18 million Series A funding round. The round was led by Addition, with participation from Foundation Capital and new individual investor Liam Randall, who is joining the company as VP of business development. Addition and Foundation Capital also invested in Stacklet’s seed round, which the company announced last August. This new round brings the company’s total funding to $22 million.

Stacklet helps enterprises manage their data governance stance across different clouds, accounts, policies and regions, with a focus on security, cost optimization and regulatory compliance. The service offers its users a set of pre-defined policy packs that encode best practices for access to cloud resources, though users can obviously also specify their own rules. In addition, Stacklet offers a number of analytics functions around policy health and resource auditing, as well as a real-time inventory and change management logs for a company’s cloud assets.
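
Cloud Custodian policies, which Stacklet’s policy packs build on, are declarative documents that pair a resource type with filters and actions. The sketch below shows the general shape of such a policy as a simplified illustration; it is not one of Stacklet’s actual packs:

```python
import yaml  # pip install pyyaml

# A simplified, illustrative policy in the Cloud Custodian style: find EC2
# instances missing an Owner tag and stop them. Real policy packs encode far
# more nuanced rules for security, cost and compliance.
policy = {
    "policies": [
        {
            "name": "ec2-missing-owner-tag",
            "resource": "aws.ec2",
            "filters": [{"tag:Owner": "absent"}],
            "actions": ["stop"],
        }
    ]
}

print(yaml.safe_dump(policy, sort_keys=False))  # what the YAML policy file would look like
```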

The company was co-founded by Travis Stanfield (CEO) and Kapil Thangavelu (CTO). Both bring a lot of industry expertise to the table. Stanfield spent time as an engineer at Microsoft and leading DealerTrack Technologies, while Thangavelu worked at Canonical and most recently in Amazon’s AWSOpen team. Thangavelu is also one of the co-creators of the Cloud Custodian project, which was first incubated at Capital One, where the two co-founders met during their time there, and is now a sandbox project under the Cloud Native Computing Foundation’s umbrella.

“When I joined Capital One, they had made the executive decision to go all-in on cloud and close their data centers,” Thangavelu told me. “I got to join on the ground floor of that movement and Custodian was born as a side project, looking at some of the governance and security needs that large regulated enterprises have as they move into the cloud.”

As companies have sped up their move to the cloud during the pandemic, the need for products like Stacklet’s has also increased. The company isn’t naming most of its customers, but one of them is FICO, among a number of other larger enterprises. Stacklet isn’t purely focused on the enterprise, though. “Once the cloud infrastructure becomes — for a particular organization — large enough that it’s not knowable in a single person’s head, we can deliver value for you at that time and certainly, whether it’s through the open source or through Stacklet, we will have a story there.” The Cloud Custodian open-source project is already seeing serious use among large enterprises, though, and Stacklet obviously benefits from that as well.

“In just 8 months, Travis and Kapil have gone from an idea to a functioning team with 15 employees, signed early Fortune 2000 design partners and are well on their way to building the Stacklet commercial platform,” Foundation Capital’s Sid Trivedi said. “They’ve done all this while sheltered in place at home during a once-in-a-lifetime global pandemic. This is the type of velocity that investors look for from an early-stage company.”

Looking ahead, the team plans to use the new funding to continue to develop the product, which should be generally available later this year, expand both its engineering and its go-to-market teams, and continue to grow the open-source community around Cloud Custodian.

#cloud, #cloud-computing, #cloud-custodian, #cloud-infrastructure, #cloud-native-computing-foundation, #computing, #engineer, #enterprise, #foundation-capital, #kapil-thangavelu, #microsoft, #recent-funding, #stacklet, #startups, #tc

Vantage makes managing AWS easier

Vantage, a new service that makes managing AWS resources and their associated spend easier, is coming out of stealth today. The service offers its users an alternative to the complex AWS console with support for most of the standard AWS services, including EC2 instances, S3 buckets, VPCs, ECS and Fargate and Route 53 hosted zones.

The company’s founder, Ben Schaechter, previously worked at AWS and DigitalOcean (and before that, he worked on Crunchbase, too). Yet while DigitalOcean showed him how to build a developer experience for individuals and small businesses, he argues that the underlying services and hardware simply weren’t as robust as those of the hyperclouds. AWS, on the other hand, offers everything a developer could want (and likely more), but the user experience leaves a lot to be desired.

Image Credits: Vantage

“The idea was really born out of ‘what if we could take the user experience of DigitalOcean and apply it to the three public cloud providers, AWS, GCP and Azure,’” Schaechter told me. “We decided to start just with AWS because the experience there is the roughest and it’s the largest player in the market. And I really think that we can provide a lot of value there before we do GCP and Azure.”

The focus for Vantage is on the developer experience and cost transparency. Schaechter noted that some of its users describe it as being akin to a “Mint for AWS.” To get started, you give Vantage a set of read permissions to your AWS services and the tool will automatically profile everything in your account. The service refreshes this list once per hour, but users can also refresh their lists manually.
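
The profiling itself boils down to ordinary read-only AWS API calls. The sketch below illustrates the general approach with boto3; it is an assumption about how such an inventory pass might look, not Vantage’s actual code:

```python
import boto3  # pip install boto3; assumes read-only AWS credentials are configured

# List EC2 instances in the default region.
ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2:", instance["InstanceId"], instance["State"]["Name"])

# List S3 buckets (these are account-wide).
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("S3:", bucket["Name"])
```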

Given that it’s often hard enough to know which AWS services you are actually using, that alone is a useful feature. “That’s the number one use case,” he said. “What are we paying for and what do we have?”

At the core of Vantage is what the team calls “views,” which allows you to see which resources you are using. What is interesting here is that this is quite a flexible system and allows you to build custom views to see which resources you are using for a given application across regions, for example. Those may include Lambda, storage buckets, your subnet, code pipeline and more.

On the cost-tracking side, Vantage currently only offers point-in-time costs, but Schaechter tells me that the team plans to add historical trends as well to give users a better view of their cloud spend.

Schaechter and his co-founder bootstrapped the company and he noted that before he wants to raise any money for the service, he wants to see people paying for it. Currently, Vantage offers a free plan, as well as paid “pro” and “business” plans with additional functionality.

Image Credits: Vantage 

#amazon-web-services, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #developer, #digitalocean, #gcp, #tc, #web-hosting, #world-wide-web

Roboflow raises $2.1M for its end-to-end computer vision platform

Roboflow, a startup that aims to simplify the process of building computer vision models, today announced that it has raised a $2.1 million seed round co-led by Lachy Groom and Craft Ventures. Additional investors include Segment co-founder Calvin French-Owen, Lob CEO Leore Avidar, Firebase co-founder James Tamplin and early Dropbox engineer Aston Motes, among others. The company is a graduate of this year’s Y Combinator summer class.

Co-founded by Joseph Nelson (CEO) and Brad Dwyer (CTO), Roboflow is the result of the team members’ previous work on AR and AI apps, including Magic Sudoku from 2017. After respectively exiting their last companies, the two co-founders teamed up again to launch a new AR project, this time with a focus on board games. In 2019, the team actually participated in the TC Disrupt hackathon to add chess support to that app — but in the process, the team also realized that it was spending a lot of time trying to solve the same problems that everybody else in the computer vision field was facing.

Image Credits: Roboflow

“In building both those [AR] products, we realized most of our time wasn’t spent on the board game part of it, it was spent on the image management, the annotation management, the understanding of ‘do we have enough images of white queens, for example? Do we have enough images from this angle or this angle? Are the rooms brighter or darker?’ This data mining of understanding in visual imagery is really underdeveloped. We had built a bunch of — at the time — internal tooling to make this easier for us,” Nelson explained. “And in the process of building this company, of trying to make software features for real-world objects, [we realized] that developers didn’t need inspiration. They needed tooling.”

So shortly after participating in the hackathon, the founders started putting together Roboflow and launched the first version a year ago, in January 2020. And while the service started out as a platform for managing large image data sets, it has since grown into an end-to-end solution for handling image management, analysis, pre-processing and augmentation, all the way up to building image recognition models and putting them into production. As Nelson noted, the team didn’t set out to build an end-to-end solution, but its users kept pushing it to add more features.
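
To make the pre-processing and augmentation part concrete, here is a generic sketch of the kind of work such a platform automates: resizing every image in a data set to a fixed model input size and generating flipped variants. It uses Pillow rather than Roboflow’s own tooling, and the directory names and target size are made up for illustration.

    # Generic illustration of data set pre-processing and augmentation,
    # not Roboflow's API. Paths and sizes are placeholders.
    from pathlib import Path
    from PIL import Image

    SRC = Path("raw_images")        # hypothetical input directory
    DST = Path("processed_images")  # hypothetical output directory
    DST.mkdir(exist_ok=True)
    TARGET_SIZE = (416, 416)        # a common detection-model input size

    for path in SRC.glob("*.jpg"):
        img = Image.open(path).convert("RGB")
        resized = img.resize(TARGET_SIZE)
        resized.save(DST / path.name)

        # Simple augmentation: a horizontally flipped copy doubles the data set.
        flipped = resized.transpose(Image.FLIP_LEFT_RIGHT)
        flipped.save(DST / f"flip_{path.name}")

In a real pipeline, the corresponding bounding-box annotations would have to be transformed alongside the images, which is exactly the kind of bookkeeping Nelson describes above.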

Image Credits: Roboflow

So far, about 20,000 developers have used the service, with use cases ranging from accelerating cancer research to smart city applications. The thesis here, Nelson said, is that computer vision is going to be useful for every single industry. But not every company has the in-house expertise to set up the infrastructure for building models and putting them into production, so Roboflow aims to provide an easy-to-use platform that individual developers and (over time) large enterprise teams can use to quickly iterate on their ideas.

Roboflow plans to use the new funding to expand its team, which currently consists of five members, both on the engineering and go-to-market side.

The Roboflow raccoon. Image Credits: Roboflow

“As small cameras become cheaper and cheaper, we’re starting to see an explosion of video and image data everywhere,” Segment co-founder and Roboflow investor French-Owen noted. “Historically, it’s been hard for anyone but the biggest tech companies to harness this data, and actually turn it into a valuable product. Roboflow is building the pipelines for the rest of us. They’re helping engineers take the data that tells a thousand words, and giving them the power to turn that data into recommendations and insights.”

#arkansas, #cloud-computing, #computing, #craft-ventures, #data-mining, #dropbox, #firebase, #france, #hackathon, #lachy-groom, #lob, #recent-funding, #roboflow, #startups, #tc, #technology, #y-combinator

0

Russia Used Microsoft Resellers in Hacking

Evidence from the security firm CrowdStrike suggests that companies that sell software on behalf of Microsoft were used to break into Microsoft’s Office 365 customers.

#cloud-computing, #computer-security, #computers-and-the-internet, #crowdstrike-inc, #defense-department, #fireeye-inc, #microsoft-corp, #russia, #software, #solarwinds, #us-federal-government-data-breach-2020

0

Google expands its cloud with new regions in Chile, Germany and Saudi Arabia

It’s been a busy year of expansion for the large cloud providers, with AWS, Azure and Google aggressively expanding their data center presence around the world. To cap off the year, Google Cloud today announced a new set of cloud regions, which will go live in the coming months and years. These new regions, which will all have three availability zones, will be in Chile, Germany and Saudi Arabia. That’s on top of the regions in Indonesia, South Korea and the U.S. (Las Vegas and Salt Lake City) that went live this year — and the upcoming regions in France, Italy, Qatar and Spain the company also announced over the course of the last twelve months.

Image Credits: Google

In total, Google currently operates 24 regions with 73 availability zones, not counting those it has announced but that aren’t live yet. While Microsoft Azure is well ahead of the competition in terms of the total number of regions (though some still lack availability zones), Google is now starting to pull even with AWS, which currently offers 24 regions with a total of 77 availability zones. Indeed, with its 12 announced regions, Google Cloud may actually soon pull ahead of AWS, which is currently working on six new regions.

The battleground may soon shift away from these large data centers, though, with a new focus on edge zones close to urban centers. These are smaller than the full-blown data centers the large clouds currently operate, but they allow businesses to host their services even closer to their customers.

All of this is a clear sign of how much Google has invested in its cloud strategy in recent years. For the longest time, after all, Google Cloud Platform lagged well behind its competitors. As recently as three years ago, Google Cloud offered only 13 regions, for example. And that’s on top of the company’s heavy investment in submarine cables and edge locations.

#amazon-web-services, #aws, #chile, #cloud-computing, #cloud-infrastructure, #france, #germany, #google, #google-cloud-platform, #indonesia, #italy, #microsoft, #nuodb, #qatar, #salt-lake-city, #saudi-arabia, #south-korea, #spain, #tc, #united-states, #web-hosting, #web-services

0

Google grants $3 million to the CNCF to help it run the Kubernetes infrastructure

Back in 2018, Google announced that it would provide $9 million in Google Cloud Platform credits — divided over three years — to the Cloud Native Computing Foundation (CNCF) to help it run the development and distribution infrastructure for the Kubernetes project. Previously, Google owned and managed those resources for the community. Today, the two organizations announced that Google is adding on to this grant with another $3 million annual donation to the CNCF to “help ensure the long-term health, quality and stability of Kubernetes and its ecosystem.”

As Google notes, the funds will go to the testing and infrastructure of the Kubernetes project, which currently sees over 2,300 monthly pull requests that trigger about 400,000 integration test runs, all of which use about 300,000 core hours on GCP.
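
As a rough back-of-the-envelope reading of those figures, derived purely from the numbers cited above:

    # Purely illustrative arithmetic based on the figures quoted above.
    monthly_prs = 2_300
    monthly_test_runs = 400_000
    monthly_core_hours = 300_000

    print(monthly_test_runs / monthly_prs)         # ~174 integration test runs per PR
    print(monthly_core_hours / monthly_test_runs)  # 0.75 core hours per test run
    print(monthly_core_hours * 12)                 # ~3.6 million core hours per year on GCP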

“I’m really happy that we’re able to continue to make this investment,” Aparna Sinha, a director of product management at Google and the chairperson of the CNCF governing board, told me. “We know that it is extremely important for the long-term health, quality and stability of Kubernetes and its ecosystem and we’re delighted to be partnering with the Cloud Native Computing Foundation on an ongoing basis. At the end of the day, the real goal of this is to make sure that developers can develop freely and that Kubernetes, which is of course so important to everyone, continues to be an excellent, solid, stable standard for doing that.”

Sinha also noted that Google contributes a lot of code to the project, with 128,000 code contributions in the last twelve months alone. But on top of these technical contributions, the team is also making in-kind contributions through community engagement and mentoring, for example, in addition to the kind of financial contributions the company is announcing today.

“The Kubernetes project has been growing so fast — the releases are just one after the other,” said Priyanka Sharma, the General Manager of the CNCF. “And there are big changes, all of this has to run somewhere. […] This specific contribution of the $3 million, that’s where that comes in. So the Kubernetes project can be stress-free, [knowing] they have enough credits to actually run for a full year. And that security is critical because you don’t want Kubernetes to be wondering where will this run next month. This gives the developers and the contributors to the project the confidence to focus on feature sets, to build better, to make Kubernetes ever-evolving.”

It’s worth noting that while both Google and the CNCF are putting their best foot forward here, there have been some questions around Google’s management of the Istio service mesh project, which was incubated by Google and IBM a few years ago. At some point in 2017, there was a proposal to bring it under the CNCF umbrella, but that never happened. This year, Istio became one of the founding projects of Open Usage Commons, though that group is mostly concerned with trademarks, not with project governance. And while all of this may seem like a lot of inside baseball — and it is — it has had some members of the open-source community questioning Google’s commitment to organizations like the CNCF.

“Google contributes to a lot of open-source projects. […] There’s a lot of them, many are with open-source foundations under the Linux Foundation, many of them are otherwise,” Sinha said when I asked her about this. “There’s nothing new, or anything to report about anything else. In particular, this discussion — and our focus very much with the CNCF here is on Kubernetes, which I think — out of everything that we do — is by far the biggest contribution or biggest amount of time and biggest amount of commitment relative to anything else.”

#aparna-sinha, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #cncf, #computing, #developer, #free-software, #google, #google-cloud-platform, #kubernetes, #priyanka-sharma, #product-management, #tc, #web-services

0

Alibaba’s Software Can Find Uighur Faces, It Told China Clients

The website for the tech titan’s cloud business described facial recognition software that could detect members of a minority group whose persecution has drawn international condemnation.

#alibaba-group-holding-ltd, #china, #cloud-computing, #facial-recognition-software, #huawei-technologies-co-ltd, #racial-profiling, #surveillance-of-citizens-by-government, #uighurs-chinese-ethnic-group, #video-recordings-downloads-and-streaming, #xinjiang-china

0

Supabase raises $6M for its open-source Firebase alternative

Supabase, a YC-incubated startup that offers developers an open-source alternative to Google’s Firebase and similar platforms, today announced that it has raised a $6 million funding round led by Coatue, with participation from YC, Mozilla and a group of about 20 angel investors.

Currently, Supabase includes support for PostgreSQL databases and authentication tools, with a storage and serverless solution coming soon. It currently provides all the usual tools for working with databases — and listening to database changes — as well as a web-based UI for managing them. The team is quick to note that while the comparison with Google’s Firebase is inevitable, it is not meant to be a 1-to-1 replacement for it. And unlike Firebase, which uses a NoSQL database, Supabase is using PostgreSQL.
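
For a sense of what listening to database changes means at the Postgres level, here is a minimal sketch using the database’s built-in LISTEN/NOTIFY mechanism with psycopg2. Supabase layers its own realtime tooling on top of Postgres, so this shows the underlying concept rather than Supabase’s implementation; the connection string and channel name are placeholders.

    # Minimal LISTEN/NOTIFY sketch with psycopg2 -- the generic Postgres
    # change-notification primitive, not Supabase's realtime service itself.
    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=app user=app")  # placeholder connection string
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN table_changes;")  # hypothetical channel name

    while True:
        # Block until the connection has something to read, then drain notifications.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue  # timed out, loop again
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print("change:", notify.channel, notify.payload)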

Indeed, the team relies heavily on existing open-source projects and contributes to them where it can. One of Supabase’s full-time employees maintains the PostgREST tool for building APIs on top of the database, for example.
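
PostgREST turns tables and views into REST endpoints directly, so querying the database can look like an ordinary HTTP call. Here is a hedged example against a hypothetical local PostgREST instance with a made-up “todos” table:

    # Querying a table through PostgREST's generated REST API.
    # The URL and the "todos" table are hypothetical examples.
    import requests

    resp = requests.get(
        "http://localhost:3000/todos",
        params={"done": "eq.false", "select": "id,task", "order": "id.asc"},
    )
    resp.raise_for_status()
    for row in resp.json():
        print(row["id"], row["task"])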

“We’re not trying to build another system,” Supabase co-founder and CEO Paul Copplestone told me. “We just believe that already there are well-trusted, scalable enterprise open-source products out there and they just don’t have this usability component. So actually right now, Supabase is an amalgamation of six tools, soon to be seven. Some of them we built ourselves. If we go to market and can’t find anything that we think is going to be scalable — or really solve the problems — then we’ll build it and we’ll open-source it. But otherwise, we’ll use existing tools.”

Image Credits: Supabase

The traditional route to market for open-source tools is to create a tool and then launch a hosted version — maybe with some additional features — to monetize the work. Supabase took a slightly different route and launched a hosted version right away.

If somebody wants to host the service themselves, the code is available, but running your own PaaS is obviously a major challenge, which is also why the team went with this approach. What you get with Firebase, he noted, is that it’s a few clicks to set everything up. Supabase wanted to be able to offer the same kind of experience. “That’s one thing that self-hosting just cannot offer,” he said. “You can’t really get the same wow factor that you can if we offered a hosted platform where you literally [have] one click and then a couple of minutes later, you’ve got everything set up.”

He also noted that he wanted to make sure the company could support the growing stable of tools it was building, and that commercializing them through its hosted database service was the easiest way to do so.

Like other Y Combinator startups, Supabase closed its funding round after the accelerator’s demo day in August. The team had considered doing a SAFE round, but it found the right group of institutional investors offering founder-friendly terms and decided to go ahead with this round instead.

“It’s going to cost us a lot to compete with the generous free tier that Firebase offers,” Copplestone said. “And it’s databases, right? So it’s not like you can just keep them stateless and shut them down if you’re not really using them. [This funding round] gives us a long, generous runway and more importantly, for the developers who come in and build on top of us, [they can] take as long as they want and then start monetizing later on themselves.”

The company plans to use the new funding to continue to invest in its various tools and hire to support its growth.

“Supabase’s value proposition of building in a weekend and scaling so quickly hit home immediately,” said Caryn Marooney, general partner at Coatue and Facebook’s former VP of Global Communications. “We are proud to work with this team, and we are excited by their laser focus on developers and their commitment to speed and reliability.”

#caryn-marooney, #cloud-computing, #coatue, #computing, #database, #developer, #firebase, #google-cloud, #nosql, #platform-as-a-service, #postgresql, #recent-funding, #serverless-computing, #startups, #supabase, #tc

0

Google, Intel, Zoom and others launch a new alliance to get enterprises to use more Chrome

A group of industry heavyweights, including Google, Box, Citrix, Dell, Imprivata, Intel, Okta, RingCentral, Slack, VMware and Zoom, today announced the launch of the Modern Computing Alliance (moderncomputing.com).

The mission for this new alliance is to “drive ‘silicon-to-cloud’ innovation for the benefit of enterprise customers — fueling a differentiated modern computing platform and providing additional choice for integrated business solutions.”

Whoever wrote this mission statement was clearly trying to see how many words they could use without actually saying something.

Here is what the alliance is really about: even though the word Chrome never appears on its homepage and Google’s partners never quite get to mentioning it either, it’s all about helping enterprises adopt Chrome and Chrome OS. “The focus of the alliance is to drive innovation and interoperability in the Google Chrome ecosystem, increasing options for enterprise customers and helping to address some of the biggest tech challenges facing companies today,” a Google spokesperson told me.

I’m not sure why it’s not called the Chrome Enterprise Alliance, but Modern Computing Alliance may just have more of a ring to it. This also explains why Microsoft isn’t part of it, though this is only the initial slate of members and others may follow at some point in the future.

Led by Google, the alliance focuses on bringing modern web apps to the enterprise, with an emphasis on performance, security, identity management and productivity. And all of that, of course, is meant to run well on Chrome and Chrome OS and be interoperable.

“The technology industry is moving towards an open, heterogeneous ecosystem that allows freedom of choice while integrating across the stack. This reality presents both a challenge and an opportunity,” Google’s Chrome OS VP John Solomon writes today.

As enterprises move to the cloud, building better web applications and maybe even Progressive Web Applications that work just as well as native solutions is obviously a noble goal and it’s nice to see these companies work together. Given the pandemic, all of this has taken on a new urgency now, too. The plan is for the alliance to release products — though it’s unclear what form these will take — in the first half of 2021. Hopefully, these will play nicely with any browser. A lot of these ‘alliances’ fizzle out quite quickly, so we’ll keep an eye on what happens here.

Bonus: the industry has a long history of alliances like these. Here’s a fun 1991 story about a CPU alliance between Intel, IBM, MIPS and others.

#chrome, #chrome-os, #citrix, #citrix-systems, #cloud-computing, #computing, #dell, #google, #google-chrome, #ibm, #identity-management, #intel, #microsoft, #mips, #okta, #operating-systems, #os, #ringcentral, #software, #spokesperson, #tc, #vmware, #web-applications, #web-apps, #web-browsers, #zoom

0