Run:AI raises $30M Series B for its AI compute platform

Run:AI, a Tel Aviv-based company that helps businesses orchestrate and optimize their AI compute infrastructure, today announced that it has raised a $30 million Series B round. The new round was led by Insight Partners, with participation from existing investors TLV Partners and S Capital. This brings the company’s total funding to date to $43 million.

At the core of Run:AI’s platform is the ability to virtualize and orchestrate AI workloads on top of its Kubernetes-based scheduler. GPUs have traditionally been hard to virtualize, so even as demand for training AI models has increased, many physical GPUs often sat idle for long periods because it was difficult to dynamically allocate them between projects.

Image Credits: Run.AI

The promise behind Run:AI’s platform is that it allows its users to abstract away all of the AI infrastructure and pool all of their GPU resources — no matter whether in the cloud or on-premises. This also makes it easier for businesses to share these resources between users and teams. In the process, IT teams also get better insights into how their compute resources are being used.
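The pooling idea described above can be sketched in a few lines. This is not Run:AI’s actual API or scheduler — the class, method names, and allocation policy below are hypothetical — just a toy illustration of how drawing from one shared GPU pool lets idle capacity flow between teams instead of sitting stranded:

```python
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    """Toy model of a shared GPU pool: jobs draw from one pool
    of GPUs, regardless of which cluster the GPUs live in."""
    total_gpus: int
    allocated: dict = field(default_factory=dict)

    def free_gpus(self) -> int:
        return self.total_gpus - sum(self.allocated.values())

    def submit(self, job_id: str, gpus_needed: int) -> bool:
        # Allocate only if enough idle GPUs remain in the pool.
        if gpus_needed <= self.free_gpus():
            self.allocated[job_id] = gpus_needed
            return True
        return False  # job waits until capacity frees up

    def release(self, job_id: str) -> None:
        # Finished jobs return their GPUs to the shared pool.
        self.allocated.pop(job_id, None)

pool = GPUPool(total_gpus=8)
pool.submit("team-a-training", 6)   # True: 6 of 8 GPUs allocated
pool.submit("team-b-inference", 4)  # False: only 2 GPUs idle
pool.release("team-a-training")
pool.submit("team-b-inference", 4)  # True: capacity freed up
```

A real scheduler adds queueing, priorities, and fractional GPU sharing on top of this, but the core gain is the same: utilization rises because no GPU is locked to a single team.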

“Every enterprise is either already rearchitecting themselves to be built around learning systems powered by AI, or they should be,” said Lonne Jaffe, managing director at Insight Partners and now a board member at Run:AI. “Just as virtualization and then container technology transformed CPU-based workloads over the last decades, Run:AI is bringing orchestration and virtualization technology to AI chipsets such as GPUs, dramatically accelerating both AI training and inference. The system also future-proofs deep learning workloads, allowing them to inherit the power of the latest hardware with less rework. In Run:AI, we’ve found disruptive technology, an experienced team and a SaaS-based market strategy that will help enterprises deploy the AI they’ll need to stay competitive.”

Run:AI says that it is currently working with customers in a wide variety of industries, including automotive, finance, defense, manufacturing and healthcare. These customers, the company says, are seeing their GPU utilization increase from 25% to 75% on average.

“The new funds enable Run:AI to grow the company in two important areas: first, to triple the size of our development team this year,” the company’s CEO Omri Geller told me. “We have an aggressive roadmap for building out the truly innovative parts of our product vision — particularly around virtualizing AI workloads — and a bigger team will help speed up development in this area. Second, a round this size enables us to quickly expand sales and marketing to additional industries and markets.”


MIT aims to speed up robot movements to match robot thoughts using custom chips

MIT researchers are looking to address the significant gap between how quickly robots can process information (relatively slowly) and how fast they can move (very quickly, thanks to modern hardware advances), and they’re using something called “robomorphic computing” to do it. The method, designed by MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) graduate Dr. Sabrina Neuman, results in custom computer chips that can offer hardware acceleration as a means to faster response times.

Custom-built chips tailored to a very specific purpose are not new – if you’re using a modern iPhone, you have one in that device right now. But they have become more popular as companies and technologists look to do more local computing on devices with more conservative power and computing constraints, rather than round-tripping data to large data centers via network connections.

In this case, the method involves creating hyper-specific chips designed around a robot’s physical layout and its intended use. By taking into account a robot’s requirements for perceiving its surroundings, mapping and understanding its position within them, and planning motion based on that mapping and its required actions, researchers can design processing chips that greatly increase the efficiency of that last stage by supplementing software algorithms with hardware acceleration.

The classic example of hardware acceleration that most people encounter on a regular basis is a graphics processing unit, or GPU. A GPU is essentially a processor designed specifically for handling graphical computing operations – like display rendering and video playback. GPUs are popular because almost all modern computers run graphics-intensive applications, but custom chips for a range of different functions have become much more common lately thanks to the advent of more customizable and efficient small-run chip fabrication techniques.

Here’s a description of how Neuman’s system works specifically in the case of optimizing a hardware chip design for robot control, per MIT News:

The system creates a customized hardware design to best serve a particular robot’s computing needs. The user inputs the parameters of a robot, like its limb layout and how its various joints can move. Neuman’s system translates these physical properties into mathematical matrices. These matrices are “sparse,” meaning they contain many zero values that roughly correspond to movements that are impossible given a robot’s particular anatomy. (Similarly, your arm’s movements are limited because it can only bend at certain joints — it’s not an infinitely pliable spaghetti noodle.)

The system then designs a hardware architecture specialized to run calculations only on the non-zero values in the matrices. The resulting chip design is therefore tailored to maximize efficiency for the robot’s computing needs. And that customization paid off in testing.
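The idea of doing work only on the non-zero entries can be shown with a small sketch. The function names and the example matrix below are illustrative, not from Neuman’s system; they simply demonstrate the sparsity trick the hardware exploits, where zeros (motions impossible for a given robot’s anatomy) are skipped entirely:

```python
def to_sparse(matrix):
    """Store only the non-zero entries of a matrix as
    (row, col, value) triples; zero entries are dropped."""
    return [(i, j, v)
            for i, row in enumerate(matrix)
            for j, v in enumerate(row)
            if v != 0]

def sparse_matvec(sparse, x, n_rows):
    """Multiply a sparse matrix by a vector, doing work only
    for the stored non-zero entries -- the software analogue of
    hardware that wires in computation only where the robot
    can actually move."""
    y = [0.0] * n_rows
    for i, j, v in sparse:
        y[i] += v * x[j]
    return y

# A joint-limit style matrix: most entries are zero.
M = [[2, 0, 0],
     [0, 0, 3],
     [0, 0, 0]]
S = to_sparse(M)  # only 2 of 9 entries are stored
y = sparse_matvec(S, [1.0, 1.0, 1.0], n_rows=3)  # [2.0, 3.0, 0.0]
```

In software this saves loop iterations; baked into silicon, the same structure means the chip never spends transistors or cycles on motions the robot cannot make.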

Neuman’s team used a field-programmable gate array (FPGA), which is sort of a midpoint between a fully custom chip and an off-the-shelf CPU, and it achieved significantly better performance than the latter. That means that if you were to custom-manufacture a chip from scratch, you could expect even more significant performance improvements.

Making robots react faster to their environments isn’t just about increasing manufacturing speed and efficiency – though it will do that. It’s also about making robots even safer to work with in situations where people are working directly alongside and in collaboration with them. That remains a significant barrier to more widespread use of robotics in everyday life, meaning this research could help unlock the sci-fi future of humans and robots living in integrated harmony.
