The OpenStack Foundation becomes the Open Infrastructure Foundation

This has been a long time coming, but the OpenStack Foundation today announced that it is changing its name to the ‘Open Infrastructure Foundation,’ starting in 2021.

The announcement, which the foundation made at its virtual developer conference, doesn’t exactly come as a surprise. Over the course of the last few years, the organization started adding new projects that went well beyond the core OpenStack project and renamed its conference to the ‘Open Infrastructure Summit.’ The organization actually filed for the ‘Open Infrastructure Foundation’ trademark back in April.

Image Credits: OpenStack Foundation

After years of hype, the open-source OpenStack project hit a bit of a wall in 2016, as the market started to consolidate. The project itself, which helps enterprises run their private cloud, found its niche in the telecom space, though, and continues to thrive as one of the world’s most active open-source projects. Indeed, I regularly hear from OpenStack vendors that they are now seeing record sales numbers — despite the lack of hype. With the project being stable, though, the Foundation started casting a wider net and added additional projects like the popular Kata Containers runtime and CI/CD platform Zuul.

“We are officially transitioning and becoming the Open Infrastructure Foundation,” longtime OpenStack Foundation executive director Jonathan Bryce told me. “That is something that I think is an awesome step that’s built on the success that our community has spawned both within projects like OpenStack, but also as a movement […], which is [about] how do you give people choice and control as they build out digital infrastructure? And that is, I think, an awesome mission to have. And that’s what we are recognizing and acknowledging and setting up for another decade of doing that together with our great community.”

In many ways, it’s been more of a surprise that the organization waited as long as it did. As the foundation’s COO Mark Collier told me, the team waited because it wanted to make sure that it did this right.

“We really just wanted to make sure that all the stuff we learned when we were building the OpenStack community and with the community — that started with a simple idea of ‘open source should be part of cloud, for infrastructure.’ That idea has just spawned so much more open source than we could have imagined. Of course, OpenStack itself has gotten bigger and more diverse than we could have imagined,” Collier said.

As part of today’s announcement, the group is also adding four new members at Platinum tier, its highest membership level: Ant Group, the Alibaba affiliate behind Alipay, embedded systems specialist Wind River, China’s Fiberhome (which was previously a Gold member) and Facebook Connectivity. To become a Platinum member, companies have to contribute $350,000 per year to the foundation and must have at least 2 full-time employees contributing to its projects.

“If you look at those companies that we have as Platinum members, it’s a pretty broad set of organizations,” Bryce noted. “AT&T, the largest carrier in the world. And then you also have a company Ant, who’s the largest payment processor in the world and a massive financial services company overall — over to Ericsson, that does telco, Wind River, that does defense and manufacturing. And I think that speaks to that everybody needs infrastructure. If we build a community — and we successfully structure these communities to write software with a goal of getting all of that software out into production, I think that creates so much value for so many people: for an ecosystem of vendors and for a great group of users and a lot of developers love working in open source because we work with smart people from all over the world.”

The OpenStack Foundation’s existing members are also on board and Bryce and Collier hinted at several new members who will join soon but didn’t quite get everything in place for today’s announcement.

We can probably expect the new foundation to start adding new projects next year, but it’s worth noting that the OpenStack project continues apace. The latest of the project’s bi-annual releases, dubbed ‘Victoria,’ launched last week, with additional Kubernetes integrations, improved support for various accelerators and more. Nothing will really change for the project now that the foundation is changing its name — though it may end up benefitting from a reenergized and more diverse community that will build out projects at its periphery.

#alibaba, #alipay, #ant-group, #att, #china, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #ericsson, #facebook, #manufacturing, #mirantis, #openstack, #openstack-foundation, #payment-processor, #wind-river


Temporal raises $18.75M for its microservices orchestration platform

Temporal, a Seattle-based startup that is building an open-source, stateful microservices orchestration platform, today announced that it has raised an $18.75 million Series A round led by Sequoia Ventures. Existing investors Addition Ventures and Amplify Partners also joined, together with new investor Madrona Venture Group. With this, the company has now raised a total of $25.5 million.

Founded by Maxim Fateev (CEO) and Samar Abbas (CTO), who created the open-source Cadence orchestration engine during their time at Uber, Temporal aims to make it easier for developers and operators to run microservices in production. Current users include the likes of Box and Snap.

“Before microservices, coding applications was much simpler,” Temporal’s Fateev told me. “Resources were always located in the same place — the monolith server with a single DB — which meant developers didn’t have to codify a bunch of guessing about where things were. Microservices, on the other hand, are highly distributed, which means developers need to coordinate changes across a number of servers in different physical locations.”

Those servers could go down at any time, so engineers often spend a lot of time building custom reliability code to make calls to these services. As Fateev argues, that’s table stakes and doesn’t help these developers create something that builds real business value. Temporal gives these developers access to a set of what the team calls ‘reliability primitives’ that handle these use cases. “This means developers spend far more time writing differentiated code for their business and end up with a more reliable application than they could have built themselves,” said Fateev.
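To make the contrast concrete, below is a minimal sketch of the kind of hand-rolled reliability code Fateev is describing: retrying a flaky downstream call with exponential backoff. It is generic Python, not Temporal's SDK, and the wrapped billing call is hypothetical.

```python
import random
import time

def call_with_retries(fn, max_attempts=5, base_delay=0.5):
    """Hand-rolled reliability boilerplate: retry a flaky downstream call
    with exponential backoff and jitter. Temporal's 'reliability primitives'
    aim to absorb this kind of code so developers don't write it themselves."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # back off exponentially, with jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# hypothetical usage: wrap a call to some downstream billing service
# result = call_with_retries(lambda: billing_client.charge(order_id))
```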

Temporal’s target user is virtually any developer who works with microservices — and wants them to be reliable. Because of this, a user interface isn’t the main focus here, though the tool does offer a read-only web-based UI for administering and monitoring the system. The company also doesn’t have any plans to create a no-code/low-code workflow builder, Fateev tells me. However, since it is open source, quite a few Temporal users build their own solutions on top of it.

The company itself plans to offer a cloud-based Temporal-as-a-Service offering soon. Interestingly, Fateev tells me that the team isn’t looking at offering enterprise support or licensing in the near future, though. “After spending a lot of time thinking it over, we decided a hosted offering was best for the open-source community and long term growth of the business,” he said.

Unsurprisingly, the company plans to use the new funding to improve its existing tool and build out this cloud service, with plans to launch it into general availability next year. At the same time, the team plans to stay true to its open-source roots and host events and provide more resources to its community.

“Temporal enables Snapchat to focus on building the business logic of a robust asynchronous API system without requiring a complex state management infrastructure,” said Steven Sun, Snap Tech Lead, Staff Software Engineer. “This has improved the efficiency of launching our services for the Snapchat community.”

#amplify-partners, #ceo, #cloud-computing, #cloud-infrastructure, #cto, #developer, #enterprise, #madrona-venture-group, #microservices, #seattle, #snap, #snap-inc, #snapchat, #uber


Edge computing startup Edgify secures $6.5M Seed from Octopus, Mangrove and a semiconductor giant

Edgify, which builds AI for edge computing, has secured a $6.5M seed funding round backed by Octopus Ventures, Mangrove Capital Partners and an unnamed semiconductor giant. The name was not released, but TechCrunch understands it may be Intel Corp. or Qualcomm Inc.

Edgify’s technology allows ‘edge devices’ (devices at the edge of the internet) to interpret vast amounts of data, train an AI model locally and then share that learning across a network of similar devices. This then trains all the other devices in anything from computer vision and NLP to voice recognition or any other form of AI.

The technology can be applied to MRI machines, connected cars, checkout lanes, mobile devices and anything else that has a CPU, GPU or NPU. Edgify’s technology is already being used in supermarkets, for instance.
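What Edgify describes resembles federated learning: each device trains on its own data and only model updates are shared and merged, never the raw data. The sketch below shows generic federated averaging as an illustration of that idea; it is not Edgify's algorithm, and the layer shapes and three-device round are invented.

```python
import numpy as np

def federated_average(device_weights):
    """Merge locally trained model weights from several edge devices by
    averaging each layer across devices - the basic mechanism for sharing
    'learning' across a fleet without moving raw data to the cloud."""
    return [np.mean(layers, axis=0) for layers in zip(*device_weights)]

# hypothetical round: three checkout-lane devices each return two weight layers
device_weights = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = federated_average(device_weights)
print([layer.shape for layer in global_model])  # [(4, 2), (2,)]
```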

Ofri Ben-Porat, CEO and co-founder of Edgify, commented in a statement: “Edgify allows companies, from any industry, to train complete deep learning and machine learning models, directly on their own edge devices. This mitigates the need for any data transfer to the Cloud and also grants them close to perfect accuracy every time, and without the need to retrain centrally.” 

Mangrove partner Hans-Jürgen Schmitz, who will join Edgify’s board, commented: “We expect a surge in AI adoption across multiple industries with significant long-term potential for Edgify in medical and manufacturing, just to name a few.”

Simon King, Partner and Deep Tech Investor at Octopus Ventures, added: “As the interconnected world we live in produces more and more data, AI at the edge is becoming increasingly important to process large volumes of information.”

So-called ‘edge computing’ is seen as one of the frontiers of deep tech right now.

#articles, #artificial-intelligence, #cloud-computing, #computing, #cybernetics, #deep-learning, #edge-computing, #emerging-technologies, #europe, #internet-of-things, #machine-learning, #mangrove-capital-partners, #manufacturing, #mobile-devices, #mri, #octopus-ventures, #science-and-technology, #semiconductor, #tc, #voice-recognition


Kayhan Space wants to be the air traffic control service for satellites in space

Kayhan Space, the Boulder, Colo. and Atlanta-based company launched from Techstars’ virtual space-focused accelerator, wants nothing more than to be the air traffic control service for satellites in space.

Founded by two childhood friends, Araz Feyzi and Siamak Hesar, who grew up in Iran and immigrated to the U.S. for college, Kayhan is tackling one of the toughest problems that the space industry will confront in the coming years — how to manage the exponentially increasing traffic that will soon crowd outer space.

There are currently around 8,000 satellites in orbit around the earth, but over the next several years, Amazon will launch 3,236 satellites for its Kuiper Network, while SpaceX filed paperwork last year to launch up to 30,000 satellites. That’s… a lot of metal flying around.

And somebody needs to make sure that those satellites don’t crash into each other, because the resulting space junk would bring a whole other set of problems.

In some ways, Feyzi and Hesar are a perfect pair to solve the problem.

Hesar, the company’s co-founder and chief executive, has spent years studying space travel, receiving a master’s degree from the University of Southern California in aeronautics, and a doctorate in astronautical engineering from the University of Colorado, Boulder. He interned at NASA’s Jet Propulsion Laboratory, and spent three years at Colorado-based satellite situational awareness and systems control technology developers like SpaceNav and Blue Canyon Technologies.

Meanwhile Feyzi is a serial entrepreneur who co-founded a company in the Atlanta area called Syfer, which developed technologies to secure internet-enabled consumer devices. Using Hesar’s proprietary algorithms based on research from his doctoral days at UC Boulder and Feyzi’s expertise in cloud computing, the company has developed a system that can predict and alert the operators of satellite networks when there’s the potential for a collision and suggest alternative paths to avoid an accident.
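A toy version of the screening step helps illustrate the idea: given predicted position tracks for two satellites, flag the closest approach if it falls under an alert threshold. This is a generic sketch, not Kayhan's proprietary algorithms; the tracks and the 5 km threshold are made up.

```python
import numpy as np

def screen_conjunction(track_a, track_b, alert_km=5.0):
    """Toy conjunction screen: track_a and track_b are (N, 3) arrays of
    predicted positions in km at matching time steps. Returns the step index
    and miss distance of the closest approach if it violates the threshold."""
    distances = np.linalg.norm(track_a - track_b, axis=1)
    i = int(np.argmin(distances))
    return (i, float(distances[i])) if distances[i] < alert_km else None

# hypothetical propagated tracks for two satellites over 1,000 time steps
track_a = np.cumsum(np.random.randn(1000, 3), axis=0) + [7000.0, 0.0, 0.0]
track_b = np.cumsum(np.random.randn(1000, 3), axis=0) + [7000.0, 5.0, 0.0]
print(screen_conjunction(track_a, track_b))
```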

It’s a problem that the two founders say can’t be solved by automation on satellites alone, thanks to the complexity and multidimensional nature of the work. “Imagine that a US commercial satellite is on a collision course with a Russian military satellite,” Feyzi said. “Who needs to maneuver? We make sure the satellite operator has all the information available to them [including] here’s what we know about the collision about to happen here and here are the recommendations and options to avoid it.”

Satellites today aren’t equipped to visualize their surroundings and autonomy won’t solve a problem that includes geopolitical complexities and dumb space debris all creating a morass that requires human intervention to navigate, the founders said.

“Today it’s too complex to resolve and because of the different nations and lack of standards and policy … today you need human input,” Hesar said.

And in the future, if satellites are equipped with sensors to make collision avoidance more autonomous, then Kayhan Space already has the algorithms that can provide that service. “If you think of the system and the sensors and the decision-making and [execution controls] actually performing that action… we are that,” Hesar said. “We have the algorithm whether it uses the ground-based sensor or the space-based sensor.”

Over the next eight years, the space situational awareness market is expected to reach $3.9 billion, and there are very few companies equipped to provide the kind of traffic control systems that satellite network operators will need, the founders said.

Their argument was compelling enough to gain admission to the Techstars Allied Space Accelerator, an early-stage investment and mentoring program developed by Techstars and the U.S. Air Force, the Netherlands Ministry of Defence, the Norwegian Ministry of Defence and the Norwegian Space Agency. And, as first reported by Hypepotamus, the company has now raised $600,000 in pre-seed funding from investors including Atlanta-based pre-seed investment firm Overline to grow its business.

And the company realizes that money and technology can’t solve the problem alone.

“We believe that technology alone can help but can’t solve this problem. We need the US to take the lead [on policy] globally,” said Feyzi. “Unlike airspace… which is controlled by countries. Space is space.” Hesar agreed. “There needs to be a focused effort on this problem.”

 

#amazon, #articles, #atlanta, #blue-canyon-technologies, #cloud-computing, #colorado, #iran, #metal, #outer-space, #pollution, #satellite, #serial-entrepreneur, #space-debris, #space-travel, #spaceflight, #spacex, #tc, #techstars, #u-s-air-force, #united-states


Kong launches Kong Konnect, its cloud-native connectivity platform

At its (virtual) Kong Summit 2020, API platform Kong today announced the launch of Kong Konnect, its managed end-to-end cloud-native connectivity platform. The idea here is to give businesses a single service that allows them to manage the connectivity between their APIs and microservices, and to help developers and operators manage their workflows across Kong’s API Gateway, Kubernetes Ingress and Kuma Service Mesh runtimes.

“It’s a universal control plane delivery cloud that’s consumption-based, where you can manage and orchestrate API gateway runtime, service mesh runtime, and Kubernetes Ingress controller runtime — and even Insomnia for design — all from one platform,” Kong CEO and co-founder Augusto ‘Aghi’ Marietti told me.

The new service is now in private beta and will become generally available in early 2021.

Image Credits: Kong

At the core of the platform is Kong’s new ServiceHub, which provides a single pane of glass for managing a company’s services across the organization (and makes them accessible across teams, too).

As Marietti noted, organizations can choose which runtime they want to use and purchase only those capabilities of the service that they currently need. The platform also includes built-in monitoring tools and supports any cloud, Kubernetes provider or on-premises environment, as long as they are Kubernetes-based.

The idea here, too, is to make all these tools accessible to developers and not just architects and operators. “I think that’s a key advantage, too,” Marietti said. “We are lowering the barrier by making a connectivity technology easier to be used by the 50 million developers — not just by the architects that were doing big grand plans at a large company.”

To do this, Konnect will be available as a self-service platform, reducing the friction of adopting the service.

Image Credits: Kong

This is also part of the company’s grander plan to go beyond its core API management services. Those services aren’t going away, but they are now part of the larger Kong platform. With its open-source Kong API Gateway, the company built the pathway to get to this point, but that’s a stable product now, and Kong is clearly expanding beyond it with this cloud connectivity play, which takes the company’s existing runtimes and combines them into a more comprehensive service.

“We have upgraded the vision of really becoming an end-to-end cloud connectivity company,” Marietti said. “Whether that’s API management or Kubernetes Ingress, […] or Kuma Service Mesh. It’s about connectivity problems. And so the company uplifted that solution to the enterprise.”

 

#api, #augusto-marietti, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #computing, #controller, #developer, #enterprise, #free-software, #kong, #kubernetes, #microservices, #openshift, #web-services


Pixie Labs raises $9.15M Series A round for its Kubernetes observability platform

Pixie, a startup that provides developers with tools to get observability into their Kubernetes-native applications, today announced that it has raised a $9.15 million Series A round led by Benchmark, with participation from GV. In addition, the company also today said that its service is now available as a public beta.

The company was co-founded by Zain Asgar (CEO), a former Google engineer working on Google AI and adjunct professor at Stanford, and Ishan Mukherjee (CPO), who led Apple’s Siri Knowledge Graph product team and also previously worked on Amazon’s Robotics efforts. Asgar had originally joined Benchmark to work on developer tools for machine learning. Over time, the idea changed to using machine learning to power tools to help developers manage large-scale deployments instead.

“We saw data systems, this move to the edge, and we felt like this old cloud 1.0 model of manually collecting data and shipping it to databases in the cloud seems pretty inefficient,” Mukherjee explained. “And the other part was: I was on call. I got gray hair and all that stuff. We felt like we could build this new generation of developer tools and get to Michael Jordan’s vision of intelligent augmentation, which is giving creatives tools where they can be a lot more productive.”

Image Credits: Pixie

The team argues that most competing monitoring and observability systems focus on operators and IT teams — and often involve a long manual setup process. But Pixie wants to automate most of this manual process and build a tool that developers want to use.

Pixie runs inside a developer’s Kubernetes platform and developers get instant and automatic visibility into their production environments. With Pixie, which the team is making available as a freemium SaaS product, there is no instrumentation to install. Instead, the team uses relatively new Linux kernel techniques like eBPF to collect data right at the source.

“One of the really cool things about this is that we can deploy Pixie in about a minute and you’ll instantly get data,” said Asgar. “Our goal here is that this really helps you when there are cases where you don’t want your business logic to be full of monitoring code, especially if you forget something — when you have an outage.”

Image Credits: Pixie

At the core of the developer experience is what the company calls “Pixie scripts.” Using a Python-like language (PxL), developers can codify their debugging workflows. The company’s system already features a number of scripts written by the team itself and the community at large. But as Asgar noted, not every user will write scripts. “The way scripts work, it’s supposed to capture human knowledge in that problem. We don’t expect the average user — or even the way above average developer — ever to touch a script or write one. They’re just going to use it in a specific scenario,” he explained.
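PxL itself is Python-like, but rather than guess at its exact syntax, the snippet below is plain Python that captures the flavor of a codified debugging workflow: group recent HTTP spans by service and surface the ones whose p99 latency crosses a threshold. The span format and threshold are hypothetical.

```python
from statistics import quantiles

def slow_services(http_spans, p99_threshold_ms=250.0):
    """Codified debugging workflow in the spirit of a Pixie script: group
    recently collected HTTP spans by service and report any service whose
    99th-percentile latency exceeds the threshold."""
    by_service = {}
    for span in http_spans:
        by_service.setdefault(span["service"], []).append(span["latency_ms"])
    report = {}
    for service, latencies in by_service.items():
        p99 = quantiles(latencies, n=100)[98]  # 99th percentile cut point
        if p99 > p99_threshold_ms:
            report[service] = round(p99, 1)
    return report

# hypothetical spans collected from the cluster over the last five minutes
spans = [{"service": "checkout", "latency_ms": 120.0 + 3 * i} for i in range(200)]
print(slow_services(spans))
```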

Looking ahead, the team plans to make these scripts and the scripting language more robust and usable to allow developers to go from passively monitoring their systems to building scripts that can actively take actions on their clusters based on the monitoring data the system collects.

“Zain and Ishan’s provocative idea was to move software monitoring to the source,” said Eric Vishria, General Partner at Benchmark. “Pixie enables engineering teams to fundamentally rethink their monitoring strategy as it presents a vision of the future where we detect anomalous behavior and make operational decisions inside the infrastructure layer itself. This allows companies of all sizes to monitor their digital experiences in a more responsive, cost-effective and scalable manner.”

#artificial-intelligence, #benchmark, #ceo, #cloud-computing, #cloud-infrastructure, #computing, #engineer, #eric-vishria, #free-software, #general-partner, #google, #kubernetes, #linux, #machine-learning, #michael-jordan, #pixie, #stanford, #tc


Macrometa, an edge computing service for app developers, lands $7M seed round led by DNX

As people continue to work and study from home because of the COVID-19 pandemic, interest in edge computing has increased. Macrometa, a Palo Alto-based startup that provides edge computing infrastructure for app developers, announced today it has closed a $7 million seed round.

The funding was led by DNX Ventures, an investment fund that focuses on early-stage B2B startups. Other participants included returning investors Benhamou Global Ventures, Partech Partners, Fusion Fund, Sway Ventures, Velar Capital and Shasta Ventures.

While cloud computing relies on servers and data centers owned by providers like Amazon, IBM, Microsoft and Google, edge computing is geographically distributed, with computing done closer to data sources, allowing for faster performance.

Founded in 2018 by chief executive Chetan Venkatesh and chief architect Durga Gokina, Macrometa’s globally distributed data service, called Global Data Network, combines a distributed NoSQL database and a low-latency stream data processing engine. It allows developers to run their cloud apps and APIs across 175 edge regions around the world. To reduce delays, app requests are sent to the region closest to the user. Macrometa claims that requests can be processed in less than 50 milliseconds globally, making it 50 to 100 times faster than cloud platforms like DynamoDB, MongoDB or Firebase. One of the ways Macrometa differentiates itself from competitors is that it enables developers to work with data stored across a global network of cloud providers, like Google Cloud and Amazon Web Services, instead of a single provider.
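The latency claim rests on serving each request from whichever edge region is closest to the caller. The sketch below illustrates that routing idea with a handful of made-up regions and a great-circle distance heuristic; it is not Macrometa's implementation.

```python
import math

# hypothetical subset of edge regions with rough latitude/longitude coordinates
REGIONS = {
    "us-west": (37.77, -122.42),
    "eu-central": (50.11, 8.68),
    "ap-south": (19.08, 72.88),
}

def closest_region(user_lat, user_lon):
    """Pick the edge region nearest to the caller, the basic idea behind
    answering a request from the closest of ~175 locations to keep latency low."""
    def haversine_km(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))
    return min(REGIONS, key=lambda name: haversine_km((user_lat, user_lon), REGIONS[name]))

print(closest_region(48.85, 2.35))  # a caller in Paris is routed to eu-central
```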

As more telecoms roll out 5G networks, demand for globally distributed, serverless data computing services like Macrometa is expected to increase, especially to support enterprise software. Other edge computing-related startups that have recently raised funding include Latent AI, SiMa.ai and Pensando.

A spokesperson for Macrometa said the seed round was oversubscribed because the pandemic has increased investor interest in cloud and edge companies like Snowflake, which recently held its initial public offering.

Macrometa also announced today that it has added DNX managing partner Q Motiwala, former Auth0 and xnor.ai chief executive Jon Gelsey and Armorblox chief technology officer Rob Fry to its board of directors.

In a statement about the funding, Motiwala said, “As we look at the next five to ten years of cloud evolution, it’s clear to us that enterprise developers need a platform like Macrometa to go beyond the constraints, scaling limitations and high-cost economics that current cloud architecture impose. What Macrometa is doing for edge computing, is what Amazon Web Services did for the cloud a decade ago.”

#app-developers, #cloud-computing, #developers, #edge-computing, #enterprise, #fundings-exits, #macrometa, #startups, #tc


Google Services Go Down in Some Parts of U.S.

People experienced outages of services like Gmail, YouTube and Google Meet.

#cloud-computing, #computer-network-outages, #computers-and-the-internet, #google-inc, #united-states, #video-recordings-downloads-and-streaming, #youtube-com


Microsoft challenges Twilio with the launch of Azure Communication Services

Microsoft today announced the launch of Azure Communication Services, a new set of features in its cloud that enable developers to add voice and video calling, chat and text messages to their apps, as well as old-school telephony.

The company describes the new set of services as the “first fully managed communication platform offering from a major cloud provider,” and that seems right, given that Google and AWS offer some of these features, including AWS’s notification service, for example, but not as part of a cohesive communication service. Indeed, it seems Azure Communication Services is more of a competitor to the core features of Twilio or the up-and-coming MessageBird.

Over the course of the last few years, Microsoft has built up a lot of experience in this area, in large part thanks to the success of its Teams service. Unsurprisingly, that’s something Microsoft is also playing up in its announcement.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure. Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support 5B+ meeting minutes daily,” writes Scott Van Vliet, Corporate Vice President for Intelligent Communication at the company.

Microsoft also stresses that it offers a set of additional smart services that developers can tap into to build out their communication services, including its translation tools, for example. The company also notes that its services are encrypted to meet HIPAA and GDPR standards.

Like similar services, developers access the various capabilities through a set of new APIs and SDKs.

As for the core services, the capabilities here are pretty much what you’d expect. There’s voice and video calling (and the ability to shift between them). There’s support for chat, and starting in October, users will also be able to send text messages. Microsoft says developers will be able to send these to users anywhere, positioning it as a global service.

Provisioning phone numbers is part of the service, too, and developers will be able to provision numbers for inbound and outbound calls, port existing numbers, request new ones and — most importantly for contact-center users — integrate them with existing on-premises equipment and carrier networks.
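For a sense of what the developer-facing side looks like, the snippet below assumes the Python azure-communication-sms package and its SmsClient; treat it as a sketch rather than verified API usage, and note that the connection string and phone numbers are placeholders.

```python
# Sketch only: assumes the azure-communication-sms package's SmsClient.
from azure.communication.sms import SmsClient

client = SmsClient.from_connection_string("<communication-service-connection-string>")

# send a text message from a number provisioned through the service
result = client.send(
    from_="+14255550100",   # placeholder provisioned number
    to=["+14255550123"],    # placeholder recipient
    message="Your appointment is confirmed for 3pm tomorrow.",
)
```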

“Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market,” writes Van Vliet. “We see rich communication experiences – enabled by voice, video, chat, and SMS – continuing to be an integral part in how businesses connect with their customers across devices and platforms.”

#amazon-web-services, #aws, #cloud-computing, #cloud-infrastructure, #computing, #google, #microsoft, #microsoft-azure, #tc, #telephony, #twilio


Microsoft Azure launches new availability zones in Canada and Australia

Microsoft Azure offers developers access to more data center regions than its competitors, but it was late to the game of offering different availability zones in those regions for high-availability use cases. After a few high-profile issues a couple of years ago, it accelerated its roadmap for building availability zones. Currently, 12 of Microsoft’s regions feature availability zones, and as the company announced at its Ignite conference, the Canada Central and Australia regions will now feature availability zones as well.

In addition, the company today promised that it would launch availability zones in each country it operates data centers in within the next 24 months.

The idea of an availability zone is to offer users access to data centers that are in the same geographic region but are physically separate, with each featuring its own power, networking and connectivity infrastructure. That way, in case one of those data centers goes offline for whatever reason, there is still another one in the same area that can take over.

In its early days, Microsoft Azure took a slightly different approach and focused on regions without availability zones, arguing that geographic expansion was more important than offering zones. Google took a somewhat similar approach, but it now offers three availability zones for virtually all of its regions (and four in Iowa). The general idea here was that developers could always choose multiple regions for high-availability applications, but that still introduces additional latencies, for example.

#australia, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #data-center, #data-management, #google, #iowa, #microsoft, #microsoft-ignite-2020, #microsoft-azure


Microsoft brings data services to its Arc multi-cloud management service

Microsoft today launched a major update to its Arc multi-cloud service that allows Azure customers to run and manage workloads across clouds — including those of Microsoft’s competitors — and their own on-premises data centers. First announced at Microsoft Ignite in 2019, Arc was always meant to not just help users manage their servers but to also allow them to run data services like Azure SQL and Azure Database for PostgreSQL close to where their data sits.

Today, the company is making good on this promise with the preview launch of Azure Arc enabled data services with support for, as expected, Azure SQL and Azure Database for PostgreSQL.

In addition, Microsoft is making the core feature of Arc, Arc enabled servers, generally available. These are the tools at the core of the service that allow enterprises to use the standard Azure Portal to manage and monitor their Windows and Linux servers across their multi-cloud and edge environments.

Image Credits: Microsoft

“We’ve always known that enterprises are looking to unlock the agility of the cloud — they love the app model, they love the business model — while balancing a need to maintain certain applications and workloads on premises,” Rohan Kumar, Microsoft’s corporate VP for Azure Data, said. “A lot of customers actually have a multi-cloud strategy. In some cases, they need to keep the data specifically for regulatory compliance. And in many cases, they want to maximize their existing investments. They’ve spent a lot of CapEx.”

As Kumar stressed, Microsoft wants to meet customers where they are, without forcing them to adopt a container architecture, for example, or replace their specialized engineered appliances to use Arc.

“Hybrid is really [about] providing that flexible choice to our customers, meeting them where they are, and not prescribing a solution,” he said.

He admitted that this approach makes engineering the solution more difficult, but the team decided that the baseline should be a container endpoint and nothing more. And for the most part, Microsoft packaged up the tools its own engineers were already using to run Azure services on the company’s own infrastructure to manage these services in a multi-cloud environment.

“In hindsight, it was a little challenging at the beginning, because, you can imagine, when we initially built them, we didn’t imagine that we’ll be packaging them like this. But it’s a very modern design point,” Kumar said. But the result is that supporting customers is now relatively easy because it’s so similar to what the team does in Azure, too.

Kumar noted that one of the selling points for the Azure Data Services is also that the version of Azure SQL is essentially evergreen, allowing them to stop worrying about SQL Server licensing and end-of-life support questions.

#arc, #azure-arc, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #database, #enterprise, #microsoft, #microsoft-ignite-2020, #microsoft-azure, #serverless-computing, #sql, #tc


Microsoft launches Premonition, its hardware and software platform for detecting biological threats

At its Ignite conference, Microsoft today announced that Premonition, a robotics and sensor platform for monitoring and sampling disease carriers like mosquitos and a cloud-based software stack for analyzing samples, will soon be in private preview.

The idea here, as Microsoft describes it, is to set up a system that can essentially function as a weather monitoring system, but for disease outbreaks. The company first demonstrated the project in 2015, but it has come quite a long way since.

Premonition sounds like a pretty wild project, but Microsoft says it’s based on five years of R&D in this area. The company says it is partnering with the National Science Foundation’s Convergence Accelerator Program and academic partners like Johns Hopkins University, Vanderbilt University, the University of Pittsburgh and the University of Washington’s Institute for Health Metrics and Evaluation to test the tools it’s developing here. In addition, it is also working with pharmaceutical giant Bayer to “develop a deeper understanding of vector-borne diseases and the role of autonomous sensor networks for biothreat detection.”

Currently, it seems, the focus is on diseases transmitted by mosquitos, and Microsoft actually set up a ‘Premonition Proving Ground’ on its Redmond campus to help researchers test their robots, train their machine learning models and analyze the data they collect. In this Arthropod Containment Level 2 facility, the company can raise and analyze mosquitos. But the idea is to go well beyond this and monitor the entire biome.

So far, Microsoft says, the Premonition system has scanned more than 80 trillion base-pairs of genomic material for biological threats.

“About five years ago, we saw that robotics, AI and cloud computing were reaching a tipping point where we could monitor the biome in entirely new ways, at entirely new scales,” Ethan Jackson, the senior director of Premonition, said in a video the company released today. “It was really the 2014 Ebola outbreak that led to this realization. How did one of the rarest viruses on the planet jump from animal to people to cause this outbreak? What signals are we missing that might have allowed us to predict it?”

Image Credits: Microsoft

Two years later, in 2016, when Zika emerged, the team had already built a small fleet of smart robotic traps that could autonomously identify and capture mosquitos. The system identifies the mosquito and can then make a split-second decision whether to capture it or let it fly. In a single night, Jackson said, the trap has already been able to identify up to 10,000 mosquitos.

In the U.S., the first place where Microsoft deployed these systems was Harris County, Texas.

Image Credits: Microsoft

“Everything we do now in terms of mosquito treatment is reactive – we see a lot of mosquitoes, we go spray a lot of mosquitoes,” said Douglas E. Norris, an entomologist and Johns Hopkins University professor of molecular microbiology and immunology, who was part of this project. “Imagine if you had a forecasting system that shows, in a few days you’re going to have a lot of mosquitoes based on all this data and these models – then you could go out and treat them earlier before they’re biting, spray, hit them early so you don’t get those big mosquito blooms which then might result in disease transmission.”

This is, by all means, a very ambitious project. Why is Microsoft announcing it now, at its Ignite conference? Unsurprisingly, the whole system relies on the Microsoft Azure cloud to provide the storage and compute power to run — and it’s a nice way for Microsoft to show off its AI systems, too.

#artificial-intelligence, #bayer, #cloud-computing, #computing, #internet-of-things, #johns-hopkins-university, #machine-learning, #microsoft, #microsoft-ignite-2020, #mosquito, #national-science-foundation, #science, #tc, #texas, #united-states, #vanderbilt-university, #zika


Microsoft launches Azure Orbital to connect satellites to its cloud

At its (virtual) Ignite conference, Microsoft today announced the launch of Azure Orbital, a new service that is meant to give satellite operators a complete platform to communicate with their satellites and process data from them — including the ground stations to receive those signals.

The company is specifically positioning the service as a solution for working with geospatial data, and it is already partnering with Amergint, Kratos, KSAT, KubOS, Viasat and US Electrodynamics to bring the service to market.

Image Credits: Microsoft

“Microsoft is well-positioned to support customer needs in gathering, transporting, and processing of geospatial data,” Yves Pitsch, Principal Product Manager, Azure Networking, writes in today’s blog post. “With our intelligent cloud and edge strategy currently extending over sixty announced cloud regions, advanced analytics, and AI capabilities coupled with one of the fastest and most resilient networks in the world – security and innovation are at the core of everything we do.”

Image Credits: Microsoft

The promise here is that satellite operators will be able to run not just the data analysis on Microsoft’s cloud but all of their digital ground operations. That includes the ability to schedule contacts with their spacecraft over Microsoft’s owned and operated ground stations (using X, S and UHF frequencies). That data can then immediately flow into Azure’s various solutions for storage, analysis and machine learning.

With AWS Ground Station, Amazon already offers a similar ground station-as-a-service product that also includes a global network of antennas and direct access to the AWS cloud. AWS went one step further, though, and recently launched a dedicated business unit for aerospace and satellite solutions.

#aerospace, #amazon, #amazon-web-services, #artificial-intelligence, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #machine-learning, #microsoft, #microsoft-ignite-2020, #viasat


Pure Storage acquires data service platform Portworx for $370M

Pure Storage, the public enterprise data storage company, today announced that it has acquired Portworx, a well-funded startup that provides a cloud-native storage and data-management platform based on Kubernetes, for $370 million in cash. This marks Pure Storage’s largest acquisition to date and shows how important this market for multi-cloud data services has become.

Current Portworx enterprise customers include the likes of Carrefour, Comcast, GE Digital, Kroger, Lufthansa, and T-Mobile. At the core of the service is its ability to help users migrate their data and create backups. It creates a storage layer that allows developers to then access that data, no matter where it resides.

Pure Storage will use Portworx’s technology to expand its hybrid and multi-cloud services and provide Kubernetes-based data services across clouds.

Image Credits: Portworx

“I’m tremendously proud of what we’ve built at Portworx: an unparalleled data services platform for customers running mission-critical applications in hybrid and multi-cloud environments,” said Portworx CEO Murli Thirumale. “The traction and growth we see in our business daily shows that containers and Kubernetes are fundamental to the next-generation application architecture and thus competitiveness. We are excited for the accelerated growth and customer impact we will be able to achieve as a part of Pure.”

When the company raised its Series C round last year, Thirumale told me that Portworx had expanded its customer base by over 100 percent and its bookings increased by 376 percent from 2018 to 2019.

“As forward-thinking enterprises adopt cloud native strategies to advance their business, we are thrilled to have the Portworx team and their groundbreaking technology joining us at Pure to expand our success in delivering multi-cloud data services for Kubernetes,” said Charles Giancarlo, Chairman and CEO of Pure Storage. “This acquisition marks a significant milestone in expanding our Modern Data Experience to cover traditional and cloud native applications alike.”

#carrefour, #ceo, #cloud, #cloud-computing, #cloud-infrastructure, #comcast, #computing, #enterprise, #exit, #kroger, #kubernetes, #lufthansa, #mirantis, #netapp, #portworx, #pure-storage, #series-c, #startups, #storage, #t-mobile


Latent AI makes edge AI workloads more efficient

Latent AI, a startup that was spun out of SRI International, makes it easier to run AI workloads at the edge by dynamically managing workloads as necessary.

Using its proprietary compression and compilation process, Latent AI promises to compress library files by 10x and run them with 5x lower latency than other systems, all while using less power thanks to its new adaptive AI technology, which the company is launching as part of its appearance in the TechCrunch Disrupt Battlefield competition today.

Founded by CEO Jags Kandasamy and CTO Sek Chai, the company has already raised a $6.5 million seed round led by Steve Jurvetson of Future Ventures and followed by Autotech Ventures.

Before starting Latent AI, Kandasamy sold his previous startup OtoSense to Analog Devices (in addition to managing HPE’s Mid-Market Security business before that). OtoSense used data from sound and vibration sensors for predictive maintenance use cases. Before its sale, the company worked with the likes of Delta Airlines and Airbus.

Image Credits: Latent AI

In some ways, Latent AI picks up some of this work and marries it with IP from SRI International.

“With OtoSense, I had already done some edge work,” Kandasamy said. “We had moved the audio recognition part out of the cloud. We did the learning in the cloud, but the recognition was done in the edge device and we had to convert quickly and get it down. Our bill in the first few months made us move that way. You couldn’t be streaming data over LTE or 3G for too long.”

At SRI, Chai worked on a project that looked at how to best manage power for flying objects where, if you have a single source of power, the system could intelligently allocate resources for either powering the flight or running the onboard compute workloads, mostly for surveillance, and then switch between them as needed. Most of the time, in a surveillance use case, nothing happens. And while that’s the case, you don’t need to compute every frame you see.

“We took that and we made it into a tool and a platform so that you can apply it to all sorts of use cases, from voice to vision to segmentation to time series stuff,” Kandasamy explained.

What’s important to note here is that the company offers the various components of what it calls the Latent AI Efficient Inference Platform (LEIP) as standalone modules or as a fully integrated system. The compressor and compiler are the first two of these and what the company is launching today is LEIP Adapt, the part of the system that manages the dynamic AI workloads Kandasamy described above.

Image Credits: Latent AI

In practical terms, the use case for LEIP Adapt is that your battery-powered smart doorbell, for example, can run in a low-powered mode for a long time, waiting for something to happen. Then, when somebody arrives at your door, the camera wakes up to run a larger model — maybe even on the doorbell’s base station that is plugged into power — to do image recognition. And if a whole group of people arrives at once (which isn’t likely right now, but maybe next year, after the pandemic is under control), the system can offload the workload to the cloud as needed.
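A minimal sketch of that tiered dispatch, written against the doorbell example rather than Latent AI's actual system, might look like the following; all of the model callables and the busy-scene threshold are hypothetical stand-ins.

```python
def classify_frame(frame, detect_motion, detect_faces, identify_local, identify_cloud,
                   busy_threshold=4):
    """Tiered inference in the spirit of the doorbell example: a cheap
    always-on detector gates a larger local model, and the cloud is used
    only when the scene is too busy for the device to handle."""
    if not detect_motion(frame):        # low-power idle path
        return None
    faces = detect_faces(frame)         # wake the larger local model
    if len(faces) <= busy_threshold:
        return [identify_local(face) for face in faces]
    return identify_cloud(faces)        # offload the heavy case to the cloud

# toy usage with stand-in models
print(classify_frame(
    frame="doorbell-frame",
    detect_motion=lambda f: True,
    detect_faces=lambda f: ["face-1", "face-2"],
    identify_local=lambda face: f"resident:{face}",
    identify_cloud=lambda faces: ["unknown"] * len(faces),
))
```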

Kandasamy tells me that the interest in the technology has been “tremendous.” Given his previous experience and the network of SRI International, it’s maybe no surprise that Latent AI is getting a lot of interest from the automotive industry, but Kandasamy also noted that the company is working with consumer companies, including a camera and a hearing aid maker.

The company is also working with a major telco company that is looking at Latent AI as part of its AI orchestration platform and a large CDN provider to help them run AI workloads on a JavaScript backend.

#5g, #airbus, #analog-devices, #articles, #artificial-intelligence, #autotech-ventures, #battlefield, #cloud-computing, #cto, #delta-airlines, #disrupt-2020, #edge-computing, #enterprise, #future-ventures, #javascript, #sri-international, #startups, #steve-jurvetson, #tc


Coming This Fall: Return of the Video Game Console Wars

Gamers are awaiting Sony’s PlayStation 5 and Microsoft’s Xbox Series X, though supply might be limited because of the pandemic.

#cloud-computing, #computer-and-video-games, #computers-and-the-internet, #e-sports, #google-stadia, #microsoft-corp, #nintendo-co-ltd, #playstation-video-game-system, #sony-corporation, #xbox-video-game-system


Unity launches its Cloud Content Delivery service for game developers

Unity, the company behind the popular real-time 3D engine, today officially launched its Cloud Content Delivery service. This new service, which is engine-agnostic, combines a content delivery network and backend-as-a-service platform to help developers distribute and update their games. The idea here is to offer Unity developers — and those using other game engines — a live game service option that helps them get the right content to their players at the right time.

As Unity’s Felix The noted, most game developers currently use a standard CDN provider, but that means they must also develop their own last-mile delivery service in order to be able to make their install and update process more dynamic and configurable. Or, as most gamers can attest, the developers simply opt to ship the game as a large binary and with every update, the user has to download that massive file again.

“That can mean the adoption of your new game content or any content will trail a little bit behind because you are reliant on people doing the updates necessary,” The said.

And while the Cloud Delivery Service can be used across platforms, the team is mostly focusing on mobile for now. “We are big fans of focusing on a certain segment when we start and then we can decide how we want to expand. There is a lot of need in the mobile space right now — more so than the rest,” The said. To account for this, the Cloud Content Delivery service allows developers to specify which binary to send to which device, for example.

Having a CDN is one thing, but that last-mile delivery, as The calls it, is where Unity believes it can solve a real pain point for developers.

“CDNs, you get content. Period,” The said. “But in this case, if you want to, as a game developer, test a build — is this QA ready? Is this something that is still being QAed? The build that you want to assign to be downloaded from our Cloud Content Delivery will be different. You want to soft launch new downloadable content for Canada before you release it in the U.S.? You would use our system to configure that. It’s really purpose-built with video games in mind.”

The team decided to keep pricing simple. All developers pay for is the egress pricing, plus a very small fee for storage. There is no regional pricing either, and the first 50GB of bandwidth usage is free. Unity charges $0.08 per GB for the next 50TB, with additional pricing tiers for those who use more than 50TB ($0.06/GB) and 500TB ($0.03/GB).
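As a worked example of those published tiers (and ignoring the small storage fee), the sketch below computes the egress bill for a given amount of bandwidth, treating 1TB as 1,000GB for simplicity.

```python
def egress_cost_usd(gb_used):
    """Bandwidth cost under the tiers as described: first 50GB free,
    $0.08/GB up to 50TB, $0.06/GB up to 500TB, $0.03/GB beyond that."""
    tiers = [                 # (upper bound in GB, price per GB)
        (50, 0.00),
        (50_000, 0.08),       # 50TB, treating 1TB as 1,000GB
        (500_000, 0.06),      # 500TB
        (float("inf"), 0.03),
    ]
    cost, lower = 0.0, 0
    for upper, price in tiers:
        if gb_used > lower:
            cost += (min(gb_used, upper) - lower) * price
        lower = upper
    return round(cost, 2)

print(egress_cost_usd(10_000))  # 10TB of egress comes out to $796.00
```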

“Our intention is that people will look at it and don’t worry about ‘what does this mean? I need a pricing calculator. I need to simulate what’s it going to cost me,’ but really just focus on the fact that they need to make great content,” The explained.

It’s worth highlighting that the delivery service is engine-agnostic. Unity, of course, would like you to use it for games written with the help of the Unity engine, but it’s not a requirement. The argues that this is part of the company’s overall philosophy.

“Our mission has always been centered around democratizing development and making sure that people — regardless of their choices — will have access to success,” he said. “And in terms of operating your game, the decision of a gaming engine typically has been made well before operating your game ever comes into the picture. […] Developer success is at the heart of what we want to focus on.”

#canada, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #content-delivery-network, #developer, #distributed-computing, #game-engine, #gaming, #streaming, #tc, #united-states, #unity, #unity-technologies


DNX Ventures launches $315 million fund for US and Japanese B2B startups

DNX Ventures, an investment firm that focuses on early-stage B2B startups in Japan and the United States, announced today that it has closed a new $315 million fund. This is DNX’s third flagship fund; along with supplementary annexed funds, this brings its total managed so far to $567 million.

Founded in 2011, with offices in San Mateo, California and Tokyo, Japan, DNX has invested in more than 100 startups to date, and has 13 exits under its belt. The firm, a member of the Draper Venture Network, focuses on cloud and enterprise software, cybersecurity, edge computing, sales and marketing automation, finance and retail. The companies it invests in are usually raising “seed plus” or Series A funding and DNX’s typical check size ranges from $1 million to $5 million, depending on the startup’s stage, managing director Q Motiwala told TechCrunch.

DNX isn’t disclosing the names of its third fund’s limited partners, but Motiwala said it includes more than 30 LPs, including financial institutions, banks and large conglomerates. DNX began working on the fund last year, before the COVID-19 pandemic hit. Motiwala says DNX is optimistic about the outlook for B2B startups, because past macroeconomic crises, including the 2008 global financial crisis and the 2001 dot-com bust, showed founders continue innovating as they figure out how to make their businesses more efficient while building urgently needed solutions.

For example, DNX has always focused on sectors like cloud computing, cybersecurity, edge computing and robotics, but the COVID-19 pandemic has made those technologies even more relevant. The massive upsurge in remote work, for instance, means that companies need to adapt their tech infrastructure, while robots like the ones developed by Diligent Robotics, a DNX portfolio company, can help hospitals cope with nursing shortages.

“Our overall theme has always been the digitization of traditional industries like construction, transportation or healthcare, and we’ve always been interested in how to make the reach to the customer much better, so sales and marketing automation, for example,” said Motiwala. “Then the last piece of this is, how do you make society or businesses function better through automation, and those might take things like robotics and other technology.”

The differences and similarities between U.S. and Japanese B2B startups

A graphic featuring DNX Ventures’ team members (Image Credits: DNX Ventures) 

One of the reasons DNX was founded nine years ago was that “Japan has very strong spending on enterprise,” Motiwala said. The firm launched with offices in the U.S. and Japan and has continued to focus on B2B while growing the size of its funds. The firm’s debut fund was $40 million and its second one, announced in 2016, was more than $170 million. Motiwala said the $315 million DNX raised for its third fund was more than the firm expected.

U.S. B2B startups tend to think about global expansion at an earlier stage than their Japanese counterparts, but that has started to change, he said, and many Japanese B2B companies launch with an eye on expanding into different countries. Instead of the U.S. or Europe, however, they tend to focus on Southeast Asian countries like Indonesia, Malaysia and Singapore, or Taiwan. Another difference is that U.S. startups make heavier initial investments in their technology or IP, while in Japan, companies focus on getting to revenue and breaking even earlier. Motiwala said this might be because the Japanese venture capital ecosystem is smaller than in the U.S., but that attitude is also changing.

Examples of DNX portfolio companies that have successfully entered new countries include Cylance, a U.S. company that develops antivirus software using machine learning and predictive math modeling to protect devices from malware. DNX helped Cylance establish operations in Europe and Japan. On the Japan side, software testing company Shift, an investment from DNX’s first fund, has done “phenomenally well” in Southeast Asia, Motiwala said.

In terms of going global, DNX doesn’t push its portfolio companies, but encourages them to expand when the timing is right, especially if a U.S. startup wants to enter Japan, or vice versa. “We like to use the fact that we have teams in both regions. What we’ve seen more is the U.S. companies entering channel partnerships for Japanese distribution,” Motiwala said. “It has been more difficult to show the same thing to Japanese companies, but at the same time what we’ve realized is that instead of saying they should come into the U.S., they’ve done amazing stuff going into the Philippines or Singapore.”

#asia, #cloud-computing, #cybersecurity, #dnx-ventures, #edge-computing, #enterprise, #fundings-exits, #japan, #tc, #venture-capital


Google Cloud launches its Business Application Platform based on Apigee and AppSheet

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service that is meant to make it easier for developers to secure and manage their APIs across Google’s cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you’d expect, including authentication, key validation and rate limiting.
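Rate limiting of the sort the gateway applies per API key is commonly implemented as a token bucket; the sketch below is a generic illustration of that technique, not Google's implementation, with made-up rate and burst values.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter of the kind API gateways apply per
    key: allow bursts up to `capacity` and refill at `rate` tokens per second."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.updated = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 requests/second, bursts of 10
print([bucket.allow() for _ in range(12)])  # the last two rapid calls are rejected
```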

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools for automating processes inside the service today, thanks to the early access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won’t have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, “provides contextual suggestions based on natural language inputs.”

“We are confident the new category of business application platforms will help empower both technical and line of business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications,” Google notes in today’s announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.

#alpha, #api, #api-management, #apigee, #appsheet, #cloud, #cloud-applications, #cloud-computing, #computing, #developer, #enterprise, #envoy, #google, #google-cloud, #google-cloud-platform, #mysql, #salesforce, #serverless-computing, #tc
