Databricks raises $1.6B at $38B valuation as it blasts past $600M ARR

Databricks this morning confirmed earlier reports that it was raising new capital at a higher valuation. The data- and AI-focused company has secured a $1.6 billion round at a $38 billion valuation, it said. Bloomberg first reported last week that Databricks was pursuing new capital at that price.

The Series H was led by Counterpoint Global, a Morgan Stanley fund. Other new investors included Baillie Gifford, UC Investments and ClearBridge. A grip of prior investors also kicked in cash to the round.

The new funding brings Databricks’ total private funding raised to $3.5 billion. Notably, its latest raise comes just seven months after the late-stage startup raised $1 billion at a $28 billion valuation. Its new valuation represents paper value creation in excess of $1 billion per month.

The company, which makes open source and commercial products for processing structured and unstructured data in one location, views its market as a new technology category. Databricks calls the technology a data “lakehouse,” a mashup of data lake and data warehouse.

Databricks CEO and co-founder Ali Ghodsi believes that its new capital will help his company secure market leadership.

For context, since the 1980s, large companies have stored massive amounts of structured data in data warehouses. More recently, companies like Snowflake and Databricks have provided a similar solution for unstructured data called a data lake.

In Ghodsi’s view, combining structured and unstructured data in a single place with the ability for customers to execute data science and business-intelligence work without moving the underlying data is a critical change in the larger data market.

“[Data lakehouses are] a new category, and we think there’s going to be lots of vendors in this data category. So it’s a land grab. We want to quickly race to build it and complete the picture,” he said in an interview with TechCrunch.

Ghodsi also pointed out that he is going up against well-capitalized competitors and that he wants the funds in order to compete hard with them.

“And you know, it’s not like we’re up against some tiny startups that are getting seed funding to build this. It’s all kinds of [large, established] vendors,” he said. That includes Snowflake, Amazon, Google and others who want to secure a piece of the new market category that Databricks sees emerging.

The company’s performance indicates that it’s onto something.


Databricks has reached the $600 million annual recurring revenue (ARR) milestone, it disclosed as part of its funding announcement. For context on how quickly it is growing at scale, it closed 2020 at $425 million ARR.

Per the company, its new ARR figure represents 75% growth, measured on a year-over-year basis.

That’s quick for a company of its size; per the Bessemer Cloud Index, top-quartile public software companies are growing at around 44% year over year. Those companies are worth around 22x their forward revenues.

At its new valuation, Databricks is worth 63x its current ARR. So Databricks isn’t cheap, but at its current pace it should be able to grow to a size that makes its most recent private valuation easily tenable by the time it goes public, provided that it doesn’t set a new, higher bar for its future performance by raising again before then.
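Those multiples are simple ratios; here is a quick back-of-the-envelope check using the round figures cited in this article (treated as exact purely for illustration):

```python
# Sanity-checking the multiples cited above with the article's round
# figures (assumed exact for illustration), all in millions of dollars.
valuation_m = 38_000      # $38B Series H valuation
current_arr_m = 600       # newly disclosed ARR
prior_arr_m = 425         # ARR at the close of 2020

arr_multiple = valuation_m / current_arr_m
print(f"valuation / current ARR: {arr_multiple:.0f}x")  # ~63x, as cited

# Growth since year-end 2020 covers roughly eight months, so it comes in
# below the trailing-twelve-month 75% figure the company cites.
growth_since_2020 = current_arr_m / prior_arr_m - 1
print(f"growth since end of 2020: {growth_since_2020:.0%}")  # ~41%
```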

Ghodsi declined to share timing around a possible IPO, and it isn’t clear whether the company will pursue a traditional IPO or if it will continue to raise private funds so that it can direct list when it chooses to float. Regardless, Databricks is now sufficiently valuable that it can only exit to one of a handful of mega-cap technology giants or go public.

Why hasn’t the company gone public? Ghodsi is enjoying a rare position in the startup market: He has access to effectively unlimited capital. Databricks had to expand its latest round by another $100 million; it was originally set to close at $1.5 billion. It doesn’t lack for investor interest, allowing its CEO to bring aboard the sort of shareholders he wants for his company’s post-IPO life — while enjoying limited dilution.

This also enables him to hire aggressively, possibly buy some smaller companies to fill in holes in Databricks’ product roadmap, and grow outside of the glare of Wall Street expectations from a position of capital advantage. It’s the startup equivalent of having one’s cake and eating it too.

But staying private longer isn’t without risks. If the larger market for software companies were rapidly devalued, Databricks could find itself too expensive to go public at its final private valuation. However, given the long bull market we’ve seen in software shares in recent years, and the confidence Ghodsi has in his potential market, that doesn’t seem likely.

There’s still much about Databricks’ financial position that we don’t yet know — its gross margin profile, for example. TechCrunch is also incredibly curious what all its fundraising and ensuing spending have done to near-term Databricks operating cash flow results, as well as how its gross-margin-adjusted CAC payback period has evolved since the onset of COVID-19. If we ever get an S-1, we might find out.

For now, winsome private markets are giving Ghodsi and crew space to operate an effectively public company without the annoyances that come with actually being public. Want the same thing for your company? Easy: Just reach $600 million ARR while growing 75% year over year.

#ali-ghodsi, #artificial-intelligence, #cloud, #data-lake, #data-warehouse, #database, #databricks, #enterprise, #fundings-exits, #ml, #startups

Sean Gallagher and an AI expert break down our crazy machine-learning adventure



We’ve spent the past few weeks burning copious amounts of AWS compute time trying to invent an algorithm to parse Ars’ front-page story headlines to predict which ones will win an A/B test—and we learned a lot. One of the lessons is that we—and by “we,” I mainly mean “me,” since this odyssey was more or less my idea—should probably have picked a less, shall we say, ambitious project for our initial outing into the machine-learning wilderness. Now, a little older and a little wiser, it’s time to reflect on the project and discuss what went right, what went somewhat less than right, and how we’d do this differently next time.

Our readers had tons of incredibly useful comments, too, especially as we got into the meaty part of the project—comments that we’d love to get into as we discuss the way things shook out. The vagaries of the edit cycle meant that the stories were being posted quite a bit after they were written, so we didn’t have a chance to incorporate a lot of reader feedback as we went, but it’s pretty clear that Ars has some top-shelf AI/ML experts reading our stories (and probably groaning out loud every time we went down a bit of a blind alley). This is a great opportunity for you to jump into the conversation and help us understand how we can improve for next time—or, even better, to help us pick smarter projects if we do an experiment like this again!

Our chat kicks off on Wednesday, July 28, at 1:00 pm Eastern Time (that’s 10:00 am Pacific Time and 17:00 UTC). Our three-person panel will consist of Ars Infosec Editor Emeritus Sean Gallagher and me, along with Amazon Senior Principal Technical Evangelist (and AWS expert) Julien Simon. If you’d like to register so that you can ask questions, use this link here; if you just want to watch, the discussion will be streamed on the Ars Twitter account and archived as an embedded video on this story’s page. Register and join in or check back here after the event to watch!


#ai, #ai-ml, #amazon, #artificial-intelligence, #aws, #biz-it, #headlines, #livechat, #machine-learning, #ml, #natural-language-processing, #nlp

Ars AI headline experiment finale—we came, we saw, we used a lot of compute time


(Image credit: Aurich Lawson | Getty Images)

We may have bitten off more than we could chew, folks.

An Amazon engineer told me that when he heard what I was trying to do with Ars headlines, the first thing he thought was that we had chosen a deceptively hard problem. He warned that I needed to be careful about properly setting my expectations. If this was a real business problem… well, the best thing he could do was suggest reframing the problem from “good or bad headline” to something less concrete.

That statement was the most family-friendly and concise way of framing the outcome of my four-week, part-time crash course in machine learning. As of this moment, my PyTorch kernels aren’t so much torches as they are dumpster fires. The accuracy has improved slightly, thanks to professional intervention, but I am nowhere near deploying a working solution. Today, as I am allegedly on vacation visiting my parents for the first time in over a year, I sat on a couch in their living room working on this project and accidentally launched a model training job locally on the Dell laptop I brought—with a 2.4 GHz Intel Core i3 7100U CPU—instead of in the SageMaker copy of the same Jupyter notebook. The Dell locked up so hard I had to pull the battery out to reboot it.


#ai, #ai-ml, #artificial-intelligence, #aws, #biz-it, #features, #is-our-machine-learning, #machine-learning, #ml, #natural-language-processing, #nlp, #sagemaker

Our AI headline experiment continues: Did we break the machine?


(Image credit: Aurich Lawson | Getty Images)

We’re in phase three of our machine-learning project now—that is, we’ve gotten past denial and anger, and we’re now sliding into bargaining and depression. I’ve been tasked with using Ars Technica’s trove of data from five years of headline tests, which pit two headline ideas against each other in an “A/B” test to let readers determine which one to use for an article. The goal is to try to build a machine-learning algorithm that can predict the success of any given headline. And as of my last check-in, it was… not going according to plan.

I had also spent a few dollars on Amazon Web Services compute time to discover this. Experimentation can be a little pricey. (Hint: If you’re on a budget, don’t use the “Autopilot” mode.)

We’d tried a few approaches to parsing our collection of 11,000 headlines from 5,500 headline tests—half winners, half losers. First, we had taken the whole corpus in comma-separated value form and tried a “Hail Mary” (or, as I see it in retrospect, a “Leeroy Jenkins”) with the Autopilot tool in AWS’ SageMaker Studio. This came back with an accuracy result in validation of 53 percent. This turns out to be not that bad, in retrospect, because when I used a model specifically built for natural-language processing—AWS’ BlazingText—the result was 49 percent accuracy, or even worse than a coin toss. (If much of this sounds like nonsense, by the way, I recommend revisiting Part 2, where I go over these tools in much more detail.)
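Because the headline set is perfectly balanced (half winners, half losers), 50 percent accuracy is the chance baseline those numbers should be read against. A toy sketch with invented labels makes the point:

```python
import random

# Toy illustration with invented labels: on a perfectly balanced set of
# winner/loser headlines, constant and random guessers both land near 50%,
# the bar against which the 53% (Autopilot) and 49% (BlazingText)
# validation results should be judged.
random.seed(0)
labels = [0, 1] * 2_750  # 5,500 tests -> half losers (0), half winners (1)

def accuracy(preds, truth):
    return sum(p == y for p, y in zip(preds, truth)) / len(truth)

always_winner = [1] * len(labels)                   # constant baseline
coin_toss = [random.randint(0, 1) for _ in labels]  # random baseline

print(accuracy(always_winner, labels))        # exactly 0.5 on a balanced set
print(round(accuracy(coin_toss, labels), 3))  # close to 0.5
```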


#ai, #ai-ml, #amazon-sagemaker, #artificial-intelligence, #aws, #biz-it, #feature, #features, #machine-learning, #ml, #natural-language-processing, #nlp, #tokenization

Is our machine learning? Ars takes a dip into artificial intelligence



Every day, some little piece of logic constructed by very specific bits of artificial intelligence technology makes decisions that affect how you experience the world. It could be the ads that get served up to you on social media or shopping sites, or the facial recognition that unlocks your phone, or the directions you take to get to wherever you’re going. These discrete, unseen decisions are being made largely by algorithms created by machine learning (ML), a segment of artificial intelligence technology that is trained to identify correlations between sets of data and their outcomes. We’ve been hearing in movies and TV for years that computers control the world, but we’ve finally reached the point where the machines are making real autonomous decisions about stuff. Welcome to the future, I guess.

In my days as a staffer at Ars, I wrote no small amount about artificial intelligence and machine learning. I talked with data scientists who were building predictive analytic systems based on terabytes of telemetry from complex systems, and I babbled with developers trying to build systems that can defend networks against attacks—or, in certain circumstances, actually stage those attacks. I’ve also poked at the edges of the technology myself, using code and hardware to plug various things into AI programming interfaces (sometimes with horror-inducing results, as demonstrated by Bearlexa).

Many of the problems to which ML can be applied are tasks whose conditions are obvious to humans. That’s because we’re trained to notice those problems through observation—which cat is more floofy or at what time of day traffic gets the most congested. Other ML-appropriate problems could be solved by humans as well given enough raw data—if humans had a perfect memory, perfect eyesight, and an innate grasp of statistical modeling, that is.


#ai, #ai-ml, #artificial-intelligence, #biz-it, #feature, #feature-report, #features, #machine-learning, #ml

Achieving digital transformation through RPA and process mining

Understanding what you will change is the most important factor in achieving a long-lasting and successful robotic process automation transformation. There are three pillars that will be most impacted by the change: people, process and digital workers (also referred to as robots). The interaction of these three pillars executes workflows and tasks and, if integrated cohesively, determines the success of an enterprise-wide digital transformation.

Robots are not coming to replace us, they are coming to take over the repetitive, mundane and monotonous tasks that we’ve never been fond of. They are here to transform the work we do by allowing us to focus on innovation and impactful work. RPA ties decisions and actions together. It is the skeletal structure of a digital process that carries information from point A to point B. However, the decision-making capability to understand and decide what comes next will be fueled by RPA’s integration with AI.


We are seeing software vendors adopt vertical technology capabilities and offer a wide range of tools to address the three pillars mentioned above. These include powerhouses like UiPath, which recently went public; Microsoft, with its Softomotive acquisition; and Celonis, which recently became a unicorn with a $1 billion Series D round. RPA firms call it “intelligent automation,” whereas Celonis targets the execution management system. Both are aiming to be a one-stop shop for all things related to process.

We have seen investments in various product categories for each stage of the intelligent automation journey: process and task mining for process discovery; centralized business process repositories for CoEs and executives to manage the pipeline and measure cost versus benefit; and artificial intelligence solutions for intelligent document processing.

For your transformation journey to be successful, you need to develop a deep understanding of your goals, people and the process.

Define goals and measurements of success

From a strategic standpoint, success measures for automating, optimizing and redesigning work should not be solely centered around metrics like decreasing fully loaded costs or FTE reduction, but should put people at the center. To measure improved customer and employee experiences, give special attention to metrics like decreases in throughput time or rework rate; identify vendors that deliver late; and find missed invoice payments or loan requests that are likely to be paid back late. These provide more targeted success measures for specific business units.

The returns realized with an automation program are not limited to metrics like time or cost savings. The overall performance of an automation program can be more thoroughly measured with the sum of successes of the improved CX/EX metrics in different business units. For each business process you will be redesigning, optimizing or automating, set a definitive problem statement and try to find the right solution to solve it. Do not try to fit predetermined solutions into the problems. Start with the problem and goal first.

Understand the people first

To accomplish enterprise digital transformation via RPA, executives should put people at the heart of their program. Understanding the skill sets and talents of the workforce within the company can yield better knowledge of how well each employee can contribute to the automation economy within the organization. A workforce that is continuously retrained and upskilled learns how to automate and flexibly complete tasks together with robots and is better equipped to achieve transformation at scale.

#api, #artificial-intelligence, #automation, #business-process-management, #cloud-elements, #column, #ec-column, #ec-enterprise-applications, #enterprise, #microsoft, #ml, #process-mining, #robot-process-automation, #uipath, #workflow

For successful AI projects, celebrate your graveyard and be prepared to fail fast

AI teams invest a lot of rigor in defining new project guidelines. But the same is not true for killing existing projects. In the absence of clear guidelines, teams let infeasible projects drag on for months.

They put up a dog and pony show during project review meetings for fear of becoming the messengers of bad news. By streamlining the process to fail fast on infeasible projects, teams can significantly increase their overall success with AI initiatives.


AI projects are different from traditional software projects. They have a lot more unknowns: availability of right datasets, model training to meet required accuracy threshold, fairness and robustness of recommendations in production, and many more.

In order to fail fast, AI initiatives should be managed as a conversion funnel analogous to marketing and sales funnels. Projects start at the top of the five-stage funnel and can drop off at any stage, either to be temporarily put on ice or permanently suspended and added to the AI graveyard. Each stage of the AI funnel defines a clear set of unknowns to be validated with a list of time-bound success criteria.

The AI project funnel has five stages:

Image Credits: Sandeep Uttamchandani

1. Problem definition: “If we build it, will they come?”

This is the top of the funnel. AI projects require significant investments not just during initial development but ongoing monitoring and refinement. This makes it important to verify that the problem being solved is truly worth solving with respect to potential business value compared to the effort to build. Even if the problem is worth solving, AI may not be required. There might be easier human-encoded heuristics to solve the problem.

Developing the AI solution is only half the battle. The other half is how the solution will actually be used and integrated. For instance, in developing an AI solution for predicting customer churn, there needs to be a clear understanding of incorporating attrition predictions in the customer support team workflow. A perfectly powerful AI project will fail to deliver business value without this level of integration clarity.

To successfully exit this stage, the following statements need to be true:

  • The AI project will produce tangible business value if delivered successfully.
  • There are no cheaper alternatives that can address the problem with the required accuracy threshold.
  • There is a clear path to incorporate the AI recommendations within the existing flow to make an impact.

In my experience, the early stages of a project have a higher ratio of aspiration to ground reality. Killing an ill-formed project early can keep teams from building “solutions in search of problems.”

2. Data availability: “We have the data to build it.”

At this stage of the funnel, we have verified the problem is worth solving. We now need to confirm the data availability to build the perception, learning and reasoning capabilities required in the AI project. Data needs vary based on the type of AI project — the requirements for a project building classification intelligence will be different from one providing recommendations or ranking.

Data availability broadly translates to having the right quality, quantity and features. Right quality refers to the fact that the data samples are an accurate reflection of the phenomenon we are trying to model and satisfy properties such as being independent and identically distributed. Common quality checks involve uncovering data collection errors, inconsistent semantics and errors in labeled samples.

The right quantity refers to the amount of data that needs to be available. A common misconception is that a significant amount of data is required for training machine learning models. This is not always true. Using pre-built transfer learning models, it is possible to get started with very little data. Also, more data does not always mean useful data. For instance, historic data spanning 10 years may not be a true reflection of current customer behavior. Finally, the right features need to be available to build the model. This is typically iterative and involves ML model design.
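As a rough illustration of the quality checks described above, here is a minimal sketch over invented samples; the data and checks are hypothetical, not drawn from any real pipeline:

```python
from collections import Counter

# Hypothetical labeled samples (text, label), invented for illustration.
samples = [
    ("great product", "positive"),
    ("great product", "negative"),     # same text, conflicting label
    ("terrible support", "negative"),
    ("terrible support", "negative"),  # exact duplicate
    ("okay overall", "neutral"),
]

# Check 1: exact duplicates (possible collection errors).
dupes = [s for s, n in Counter(samples).items() if n > 1]

# Check 2: identical inputs with conflicting labels (labeling errors).
by_text = {}
for text, label in samples:
    by_text.setdefault(text, set()).add(label)
conflicts = [t for t, found in by_text.items() if len(found) > 1]

# Check 3: class balance (heavy skew can hint at sampling bias).
balance = Counter(label for _, label in samples)

print(dupes)      # [('terrible support', 'negative')]
print(conflicts)  # ['great product']
print(balance)
```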

To successfully exit this stage, the following statements need to be true:

#artificial-intelligence, #developer, #ec-column, #ec-how-to, #machine-learning, #ml, #startups, #tc

GitLab acquires UnReview as it looks to bring more ML tools to its platform

DevOps platform GitLab today announced that it has acquired UnReview, a machine learning-based tool that helps software teams recommend the best reviewers for when developers want to check in their latest code. GitLab, which is looking to bring more of these machine learning capabilities to its platform, will integrate UnReview’s capabilities into its own code review workflow. The two companies did not disclose the price of the acquisition.

“Last year we decided that the future of DevOps includes ML/AI, both within the DevOps lifecycle as well as the growth of adoption of ML/AI with our customers,” David DeSanto, GitLab’s senior director, Product Management – Dev & Sec, told me. He noted that when GitLab recently surveyed its customers, 75% of the teams said they are already using AI/ML. The company started by adding a bot to the platform that can automatically label issues, which then led to the team meeting with UnReview and, finally, acquiring it.

Image Credits: GitLab

“Our primary focus for the second half of this year in bringing on UnReview is to help automate the selection of code reviewers. It’s a very interesting problem to solve, even we at GitLab occasionally end up picking the wrong reviewers based off of what people know,” DeSanto noted.

GitLab launched its original code review components last year. As Wayne Haber, GitLab’s director of Engineering, noted, that was still a very manual process. Even with the new system, teams still retain full control over which reviewers will be assigned to a merge request, but the tool will automatically — and transparently — rank potential reviewers based on who the system believes is best suited to this task.
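Neither GitLab nor UnReview has published how the ranking works; purely to illustrate the general idea, a reviewer ranker could score candidates by how often they have previously reviewed the files a merge request touches. All names and data below are invented:

```python
from collections import Counter

# Hypothetical review history as (reviewer, file) pairs -- invented for
# illustration; this is not UnReview's or GitLab's actual algorithm.
past_reviews = [
    ("alice", "src/auth.py"), ("alice", "src/auth.py"),
    ("bob", "src/ui.py"), ("carol", "src/auth.py"), ("bob", "src/auth.py"),
]

def rank_reviewers(changed_files, history):
    """Rank reviewers by how many past reviews touched the changed files."""
    scores = Counter()
    for reviewer, path in history:
        if path in changed_files:
            scores[reviewer] += 1
    return [reviewer for reviewer, _ in scores.most_common()]

print(rank_reviewers({"src/auth.py"}, past_reviews))  # 'alice' ranks first
```

A production system would of course weigh far more signals (recency, load, expertise areas); this only sketches the ranking shape described above.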

“I am grateful for the opportunity to share my passion for data science and machine learning with GitLab and its community,” said Alexander Chueshev, UnReview’s founder (and now a senior full stack engineer at GitLab). “I look forward to enhancing the user experience by playing a role in integrating UnReview into the GitLab platform and extending machine learning and artificial intelligence into additional DevOps stages in the future.”

DeSanto noted that GitLab now has quite a bit of experience in acquiring companies and integrating them into its stack. “We’re always looking to acquire strong teams and strong concepts that can help accelerate our roadmap or strategy or help the platform in general,” he said. “And you can see it over the last couple of years of acquisitions. When we were looking at extending what we did in security, we acquired two leaders in the security space to help build that portfolio out. And that’s fully integrated today. […] In the case of this, UnReview is doing something that we thought we may need to do in the future. They had already built it, they were able to show the value of it, and it became a good partnership between the two companies, which then led to this acquisition.”

One interesting wrinkle here is that GitLab offers both a hosted SaaS service and allows users to run their own on-premises systems as well. Running an ML service like UnReview on-premises isn’t necessarily something that most businesses are equipped to do, so at first, UnReview will be integrated with the SaaS service. The team is still looking at how to best bring it to its self-hosted user base, including a hybrid model.

#artificial-intelligence, #cloud, #continuous-integration, #developer, #devops, #engineer, #free-software, #git, #gitlab, #go, #ma, #machine-learning, #ml, #software-engineering, #tc, #unreview, #version-control

Iterative raises $20M for its MLOps platform

Iterative, an open-source startup that is building an enterprise AI platform to help companies operationalize their models, today announced that it has raised a $20 million Series A round led by 468 Capital and Mesosphere co-founder Florian Leibert. Previous investors True Ventures and Afore Capital also participated in this round, which brings the company’s total funding to $25 million.

The core idea behind Iterative is to provide data scientists and data engineers with a platform that closely resembles a modern GitOps-driven development stack.

After spending time in academia, Iterative co-founder and CEO Dmitry Petrov joined Microsoft as a data scientist on the Bing team in 2013. He noted that the industry has changed quite a bit since then. While early on, the questions were about how to build machine learning models, today the problem is how to build predictable processes around machine learning, especially in large organizations with sizable teams. “How can we make the team productive not the person? This is a new challenge for the entire industry,” he said.

Big companies (like Microsoft) were able to build their own proprietary tooling and processes to build their AI operations, Petrov noted, but that’s not an option for smaller companies.

Currently, Iterative’s stack consists of a few components that sit on top of tools like GitLab and GitHub: DVC, for running experiments and versioning data and models; CML, the company’s CI/CD platform for machine learning; and the company’s newest product, Studio, its SaaS platform for enabling collaboration between teams. Instead of reinventing the wheel, Iterative essentially provides data scientists who already use GitHub or GitLab to collaborate on their source code with a tool like DVC Studio that extends this collaboration to data and metrics, too.
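The versioning piece rests on a simple idea: content-addressed storage, where large files live in a cache keyed by their hash and only the small pointer is committed to Git alongside the code. The sketch below illustrates that general concept only; it is not DVC's implementation or API:

```python
import hashlib

# Minimal sketch of content-addressed data versioning -- the general idea
# behind tools like DVC, not DVC's actual implementation or API.
cache = {}  # stand-in for an on-disk or remote object store

def track(data: bytes) -> str:
    """Store data in the cache and return the hash pointer to commit."""
    key = hashlib.sha256(data).hexdigest()
    cache[key] = data
    return key

def checkout(pointer: str) -> bytes:
    """Restore the exact bytes a past experiment used."""
    return cache[pointer]

v1 = track(b"label,text\n1,hello\n")
v2 = track(b"label,text\n1,hello\n0,bye\n")
assert checkout(v1) != checkout(v2)  # both dataset versions stay recoverable
```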

Image Credits: Iterative

“DVC Studio enables machine learning developers to run hundreds of experiments with full transparency, giving other developers in the organization the ability to collaborate fully in the process,” said Dmitry Petrov, CEO and founder of Iterative. “The funding today will help us bring more innovative products and services into our ecosystem.”

Petrov stressed that he wants to build an ecosystem of tools, not a monolithic platform. When the company closed this funding round about three months ago, Iterative had about 30 employees, many of whom were previously active in the open-source communities around its projects. Today, that number is already closer to 60.

“Data, ML and AI are becoming an essential part of the industry and IT infrastructure,” said Leibert, general partner at 468 Capital. “Companies with great open source adoption and bottom-up market strategy, like Iterative, are going to define the standards for AI tools and processes around building ML models.”

#afore-capital, #artificial-intelligence, #cloud, #cybernetics, #data-scientist, #developer, #enterprise, #free-software, #funding, #fundings-exits, #git, #github, #gitlab, #learning, #machine-learning, #microsoft, #ml, #recent-funding, #saas, #software-engineering, #startups, #true-ventures, #version-control

Brazil’s Divibank raises millions to become the Clearbanc of LatAm

Divibank, a financing platform offering LatAm businesses access to growth capital, has closed on a $3.6 million round of seed funding led by San Francisco-based Better Tomorrow Ventures (BTV).

São Paulo-based Divibank was founded in March 2020, right as the COVID-19 pandemic was starting. The company has built a data-driven financing platform aimed at giving businesses access to non-dilutive capital to finance their growth via revenue-share financing.

“We are changing the way entrepreneurs scale their online businesses by providing quick and affordable capital to startups and SMEs in Latin America,” said co-founder and CEO Jaime Taboada. In particular, Divibank is targeting e-commerce and SaaS companies although it also counts edtechs, fintechs and marketplaces among its clients.

The company is now also offering marketing analytics software for its clients so they can “get more value out of the capital they receive.”

A slew of other investors participated in the round, including existing backer MAYA Capital and new investors such as Village Global, Clocktower Ventures, Magma Partners, Gilgamesh Ventures, Rally Cap Ventures and Alumni Ventures Group. A group of high-profile angel investors also put money in the round, including Rappi founder and president Sebastian Mejia, Tayo Oviosu (founder/CEO of Paga, who participated via Kairos Angels), Ramp founder and CTO Karim Atiyeh and Bread founders Josh Abramowitz and Daniel Simon.

In just over a year, Divibank has seen some impressive growth (albeit from a small base). In the past six months alone, the company said, it has signed on over 50 new clients; increased its total loan issuance volume 7x; grown revenue 5x; and expanded its customer base 11x and its employee base 4x. Customers include Dr. Jones, CapaCard and Foodz, among others.

“Traditional banks and financial institutions do not know how to evaluate internet businesses, so they generally do not offer loans to these companies. If they do, it is generally a long and tedious process at a very high cost,” Taboada said. “With our revenue-share offering, the entrepreneur does not have to pledge his home, drown in credit card debts or even give up his equity to invest in marketing and growth.”
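To make the revenue-share mechanics concrete, here is a hypothetical repayment schedule; the advance, fee cap and revenue-share percentage below are invented for illustration and are not Divibank's actual terms:

```python
# Hypothetical revenue-share financing: the company remits a fixed share of
# monthly revenue until a capped total is repaid. All figures are invented.
advance = 100_000   # capital provided up front
cap = 110_000       # total to repay (advance plus a flat fee)
share = 0.10        # fraction of monthly revenue remitted

monthly_revenue = [80_000, 120_000, 150_000, 200_000, 250_000, 300_000]

repaid, months = 0.0, 0
for revenue in monthly_revenue:
    payment = min(revenue * share, cap - repaid)  # never overshoot the cap
    repaid += payment
    months += 1
    if repaid >= cap:
        break

print(months, repaid)  # repayment tracks revenue: faster growth, faster payoff
```

The structural contrast with a term loan is that payments flex with revenue, so a slow month does not create a fixed-obligation crunch.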

For now, Divibank is focused on Brazil, considering the country is huge and has more than 11 million SMEs “with many growth opportunities to explore,” according to Taboada. It’s looking to expand to the rest of LatAm and other emerging markets in the future, but no timeline has yet been set.

As in many other sectors, the COVID-19 pandemic served as a tailwind to Divibank’s business, considering it accelerated the digitalization of everything globally.

“We founded Divibank the same week as the lockdown started in Brazil, and we saw many industries that didn’t traditionally advertise online migrate to Google and Facebook Ads rapidly,” Taboada told TechCrunch. “This obviously helped our thesis a lot, as many of our clients had actually recently gone from only selling offline to selling mostly online. And there’s no better way to attract new clients online than with digital ads.”

Divibank will use its new capital to accelerate its product roadmap, scale its go-to-market strategy and ramp up hiring. Specifically, it will invest more aggressively in engineering/tech, sales, marketing, credit risk and operations. Today the team consists of eight employees in Brazil, and that number will likely grow to more than 25 or 30 in the coming 12 months, according to Taboada.

The startup is also developing what it describes as “value additive” software, aimed at helping clients better manage their digital ads campaigns and “optimize their investment returns.”

Looking ahead, Divibank is working on a few additional financial products for its clients, targeting the more than $205 billion e-commerce and SaaS markets in Latin America with offerings such as inventory financing and recurring revenue securitizations. Specifically, it plans to continue developing its banking tech platform by “automating the whole credit process,” developing its analytics platform and building its data science/ML capabilities to improve its credit model.

Jake Gibson, general partner at Better Tomorrow Ventures, noted that his firm is also an investor in Clearbanc, which likewise provides non-dilutive financing for founders. That company’s “20-minute term sheet” product, perhaps its best known in tech, lets e-commerce companies raise non-dilutive marketing growth capital of between $10,000 and $10 million based on their revenue and ad spend.

“We are very bullish on the idea that not every company should be funded with venture dollars, and that lack of funding options can keep too many would-be entrepreneurs out of the market,” he said. “Combine that with the growth of e-commerce in Brazil and LatAm, and expected acceleration fueled by COVID, and the opportunity to build something meaningful seemed obvious.”

Also, since there aren’t a lot of similar offerings in the region, Better Tomorrow views the space that Divibank is addressing as a “massive untapped market.”

Besides Clearbanc, Divibank is also similar to another U.S.-based fintech, Pipe, in that both companies aim to give clients with SaaS, subscription and other recurring-revenue models new types of financing that can help them grow without dilution.

“Like the e-commerce market, we see the SaaS, and the recurring revenues markets in general, growing rapidly,” Taboada said.


Materials Zone raises $6M for its materials discovery platform

Materials Zone, a Tel Aviv-based startup that uses AI to speed up materials research, today announced that it has raised a $6 million seed funding round led by Insight Partners, with participation from crowdfunding platform OurCrowd.

The company’s platform consists of a number of different tools, but at the core is a database that takes in data from scientific instruments, manufacturing facilities, lab equipment, external databases, published articles, Excel sheets and more, and then parses it and standardizes it. Simply having this database, the company argues, is a boon for researchers, who can then also visualize it as needed.

Image Credits: Materials Zone

“In order to develop new technologies and physical products, companies must first understand the materials that comprise those products, as well as those materials’ properties,” said Materials Zone founder and CEO Dr. Assaf Anderson. “Understanding the science of materials has therefore become a driving force behind innovation. However, the data behind materials R&D and production has traditionally been poorly managed, unstructured, and underutilized, often leading to redundant experiments, limited capacity to build on past experience, and an inability to effectively collaborate, which inevitably wastes countless dollars and man-hours.”
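Anderson’s complaint about unstructured, poorly managed R&D data is, at bottom, a schema-unification problem: the same measurement arrives from different instruments under different names and units. A minimal sketch of the idea — the field aliases and unit conversion below are invented for illustration, not Materials Zone’s actual schema:

```python
# Toy sketch of standardizing heterogeneous instrument records into one
# canonical schema. Aliases and units are illustrative inventions.
FIELD_ALIASES = {
    "temp": "temperature_c",
    "temperature": "temperature_c",
    "Temp (F)": "temperature_c",
    "thickness_nm": "film_thickness_nm",
    "thickness": "film_thickness_nm",
}

def normalize_record(raw: dict) -> dict:
    """Map vendor-specific field names onto a canonical schema,
    converting Fahrenheit readings to Celsius along the way."""
    record = {}
    for key, value in raw.items():
        canonical = FIELD_ALIASES.get(key)
        if canonical is None:
            continue  # unknown fields are dropped, not guessed at
        if key == "Temp (F)":
            value = (value - 32) * 5 / 9
        record[canonical] = round(float(value), 2)
    return record

# Two instruments reporting the same experiment in different shapes:
lab_a = {"temp": 25.0, "thickness_nm": 140}
lab_b = {"Temp (F)": 77.0, "thickness": 140.0}
assert normalize_record(lab_a) == normalize_record(lab_b)
```

Once records share one schema, the visualization and analytics layers the company describes can operate on them uniformly.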


Before founding Materials Zone, Anderson headed the Combinatorial Materials lab at Bar-Ilan University’s Institute for Nanotechnology and Advanced Materials.

Assaf Anderson, PhD, founder/CEO of Materials Zone. Image Credits: Materials Zone

“As a materials scientist, I have experienced R&D challenges firsthand, thereby gaining an understanding of how R&D can be improved,” Anderson said. “We developed our platform with our years of experience in mind, leveraging innovative AI/ML technologies to create a unique solution for these problems.”

He noted that developing a new transparent photovoltaic window, for example, would take thousands of experiments to find the right core materials and their parameters. The promise of Materials Zone is that it can make this process faster and cheaper by aggregating and standardizing all of this data and then offering data and workflow management tools to work with it. Meanwhile, the company’s analytical and machine learning tools can help researchers interpret this data.



5 emerging use cases for productivity infrastructure in 2021

When the world flipped upside down last year, nearly every company in every industry was forced to implement a remote workforce in just a matter of days — they had to scramble to ensure employees had the right tools in place and customers felt little to no impact. While companies initially adopted solutions for employee safety, rapid response and short-term air cover, they are now shifting their focus to long-term, strategic investments that empower growth and streamline operations.

As a result, categories that make up productivity infrastructure — cloud communications services, API platforms, low-code development tools, business process automation and AI software development kits — grew exponentially in 2020. This growth was boosted by an increasing number of companies prioritizing tools that support communication, collaboration, transparency and a seamless end-to-end workflow.

Productivity infrastructure is on the rise and will continue to be front and center as companies evaluate what their future of work entails and how to maintain productivity, rapid software development and innovation with distributed teams.

According to McKinsey & Company, the pandemic accelerated the share of digitally enabled products by seven years, and “the digitization of customer and supply-chain interactions and of internal operations by three to four years.” As demand continues to grow, companies are taking advantage of the benefits productivity infrastructure brings to their organization both internally and externally, especially as many determine the future of their work.

Automate workflows and mitigate risk

Developers rely on platforms throughout the software development process to connect data, process it, increase their go-to-market velocity and stay ahead of the competition with new and existing products. They have enormous amounts of end-user data on hand, and productivity infrastructure can remove barriers to access, integrate and leverage this data to automate the workflow.

Access to rich interaction data combined with pre-trained ML models, automated workflows and configurable front-end components enables developers to drastically shorten development cycles. Through enhanced data protection and compliance, productivity infrastructure safeguards critical data and mitigates risk while reducing time to ROI.

As the post-pandemic workplace begins to take shape, how can productivity infrastructure support enterprises where they are now and where they need to go next?


Hustle Fund backs Fintor, which wants to make it easier to invest in real estate

Farshad Yousefi and Masoud Jalali used to drive through Palo Alto neighborhoods and marvel at the outrageous home prices. But the drives sparked an idea. They were not in a financial position to purchase a home in those neighborhoods (to be clear, not many people are), either for investment or to live in. But what if they could invest in homes in up-and-coming cities throughout the U.S.?

Then they realized that even that might be a challenge considering that with all their student debt, affording a down payment would be impossible.

“There was nothing available out there besides a crowdfunding platform, which when we first signed up, took away $1,000 from our account that we didn’t have, and then our capital would be locked up for 3 to 10 years,” recalls Yousefi.

So the pair started doing research and spoke to 1,000 individuals under the age of 35. Eight out of 10 said they would like to invest in real estate but were deterred by all the barriers to entry.

“There is clearly a large demand for access to real estate,” Yousefi said. “And we wanted to give people a way to invest in it like they can in stocks, via a mobile app.”

And so the idea for Fintor was born.

Yousefi and Jalali founded the company in 2020 with the goal of purchasing homes via an LLC, and turning each into shares through a SEC-approved broker dealer. Individuals can then buy shares of the homes via Fintor’s platform. Its next step is to sign agreements with individual real estate investors or bigger real estate development firms to list their properties on the platform and give people the opportunity to buy shares.
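The mechanics described above — an LLC-held home divided into fixed-price shares that individuals buy through the platform — amount to fairly simple bookkeeping. A toy sketch: the $5 minimum comes from the article, but the class, numbers and rules are otherwise invented, not Fintor’s actual broker-dealer logic:

```python
from dataclasses import dataclass, field

@dataclass
class FractionalProperty:
    """A property held by an LLC and split into fixed-price shares.
    Structure and numbers are illustrative, not Fintor's implementation."""
    value: int                      # purchase price in dollars
    share_price: int = 5            # minimum investment per the article
    owners: dict = field(default_factory=dict)

    @property
    def total_shares(self) -> int:
        return self.value // self.share_price

    @property
    def shares_sold(self) -> int:
        return sum(self.owners.values())

    def buy(self, investor: str, dollars: int) -> int:
        """Convert a dollar amount into whole shares, if supply remains."""
        shares = dollars // self.share_price
        if shares > self.total_shares - self.shares_sold:
            raise ValueError("not enough shares left")
        self.owners[investor] = self.owners.get(investor, 0) + shares
        return shares

home = FractionalProperty(value=80_000)  # low end of Fintor's target range
home.buy("alice", 25)                    # $25 buys 5 shares
home.buy("bob", 100)                     # $100 buys 20 shares
print(home.total_shares, home.shares_sold)
```

The real product adds the regulatory layer — an SEC-registered broker-dealer issuing the securities — that a sketch like this glosses over.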

And now Fintor has raised $2.5 million in seed money to continue building out its fractional real estate investing platform. The startup aims to “fractionalize” houses and other residential property, giving people in the U.S. access to investment opportunities “starting with as little as $5.” The company attracted the interest of investors such as 500 Startups, Hustle Fund, Graphene Ventures, Houston-based real estate investor Manny Khoshbin, Mana Ventures and other angel investors such as Cindy Bi, Skyler Fernandes, VU Venture Partners, Minal Hasan, Andrew Zalasin, Alluxo CEO and Founder Safa Mahzari, SquareFoot CEO and founder Jonathan Wasserstrum and Teachable CEO and founder Ankur Nagpal.

Image Credits: Fintor

Fintor is eyeing markets such as Kansas City, South Carolina and Houston, Texas, where it already has some properties. It’s looking for homes in the $80,000 to $350,000 price range, and millennials and Gen Zers are its target demographic.

“Fintor can give the same return as the stock market, but at half the risk,” Yousefi said. “As two [Iranian] immigrants, we’ve seen how much this country has to offer and how real estate sits at the top of everything, yet is so inaccessible.”

The pair had originally set out to raise just $1 million but the round was quickly “way oversubscribed,” according to Yousefi, and they ended up raising $2.5 million at triple the original valuation.

Jalali said the company will use machine learning technology to filter and rate properties as it scales its business model.

“We’ll use ML to categorize neighborhoods and to come up with the price of properties to offer to potential sellers,” he added. “Our ultimate goal is to create indexes so that people can invest in multiple properties in a given city. That creates diversification right away.”

Elizabeth Yin, co-founder and general partner of Hustle Fund, believes that Fintor is solving a generational problem with real estate.

“Retail investors have almost no access to great real estate investments today and the best opportunities are reserved for the select few,” she told TechCrunch. “Not to mention that in addition to access, retail investors often need a lot of capital in order to have a diversified portfolio or be accredited to join funds.”

Fintor’s approach of securitizing real estate assets will give millions of investors who are not accredited access they would otherwise not have had, Yin added.

“Simultaneously, it provides increased liquidity to property owners, while improving the user experience for both parties,” she said. “Effectively this becomes a new asset class, because it’s entirely turnkey and is fractionalized, which opens up many new pockets of investors.”

NLP Cloud helps devs add language processing smarts to their apps

While visual ‘no code’ tools are helping businesses get more out of computing without the need for armies of in-house techies to configure software on behalf of other staff, access to the most powerful tech tools — at the ‘deep tech’ AI coal face — still requires some expert help (and/or costly in-house expertise).

This is where bootstrapped French startup NLP Cloud is plying a trade in MLOps/AIOps — or ‘compute platform as a service’ (as it runs the queries on its own servers) — with a focus on natural language processing (NLP), as its name suggests.

Developments in artificial intelligence have, in recent years, led to impressive advances in the field of NLP — a technology that can help businesses scale their capacity to intelligently grapple with all sorts of communications by automating tasks like named entity recognition, sentiment analysis, text classification, summarization, question answering and part-of-speech tagging, freeing up (human) staff to focus on more complex/nuanced work. (Although it’s worth emphasizing that the bulk of NLP research has focused on the English language — meaning that’s where this tech is most mature; so associated AI advances are not universally distributed.)

Production-ready (pre-trained) NLP models for English are readily available ‘out of the box’, and dedicated open source frameworks offer help with training models. But businesses wanting to tap into NLP still need the DevOps resources and chops to implement NLP models. NLP Cloud is catering to businesses that don’t feel up to the implementation challenge themselves — offering a “production-ready NLP API” with the promise of “no DevOps required”.

Its API is based on Hugging Face and spaCy open source models. Customers can either use ready-to-use pre-trained models (it selects the “best” open source models; it does not build its own) or upload custom models developed internally by their own data scientists — which it says is a point of differentiation vs. SaaS services such as Google Natural Language (which uses Google’s ML models), Amazon Comprehend and MonkeyLearn. NLP Cloud says it wants to democratize NLP by helping developers and data scientists deliver these projects “in no time and at a fair price”. (It has a tiered pricing model based on requests per minute, which starts at $39 per month and ranges up to $1,199 per month, at the enterprise end, for one custom model running on a GPU. It also offers a free tier so users can test models at low request velocity without incurring a charge.)

“The idea came from the fact that, as a software engineer, I saw many AI projects fail because of the deployment to production phase,” says sole founder and CTO Julien Salinas. “Companies often focus on building accurate and fast AI models but today more and more excellent open-source models are available and are doing an excellent job… so the toughest challenge now is being able to efficiently use these models in production. It takes AI skills, DevOps skills, programming skills… which is why it’s a challenge for so many companies, and which is why I decided to launch NLP Cloud.”

The platform launched in January 2021 and now has around 500 users, including 30 who are paying for the service. The startup, which is based in Grenoble, in the French Alps, is a team of three for now, plus a couple of independent contractors. (Salinas says he plans to hire five people by the end of the year.)

“Most of our users are tech startups but we also start having a couple of bigger companies,” he tells TechCrunch. “The biggest demand I’m seeing is both from software engineers and data scientists. Sometimes it’s from teams who have data science skills but don’t have DevOps skills (or don’t want to spend time on this). Sometimes it’s from tech teams who want to leverage NLP out-of-the-box without hiring a whole data science team.”

“We have very diverse customers, from solo startup founders to bigger companies like BBVA, Mintel, Senuto… in all sorts of sectors (banking, public relations, market research),” he adds.

Use cases among its customers include lead generation from unstructured text (such as web pages) via named entity extraction, and sorting support tickets by urgency via sentiment analysis.

Content marketers are also using its platform for headline generation (via summarization). While text classification capabilities are being used for economic intelligence and financial data extraction, per Salinas.

He says his own experience as a CTO and software engineer working on NLP projects at a number of tech companies led him to spot an opportunity in the challenge of AI implementation.

“I realized that it was quite easy to build acceptable NLP models thanks to great open-source frameworks like spaCy and Hugging Face Transformers but then I found it quite hard to use these models in production,” he explains. “It takes programming skills in order to develop an API, strong DevOps skills in order to build a robust and fast infrastructure to serve NLP models (AI models in general consume a lot of resources), and also data science skills of course.

“I tried to look for ready-to-use cloud solutions in order to save weeks of work but I couldn’t find anything satisfactory. My intuition was that such a platform would help tech teams save a lot of time, sometimes months of work for the teams who don’t have strong DevOps profiles.”

“NLP has been around for decades but until recently it took whole teams of data scientists to build acceptable NLP models. For a couple of years, we’ve made amazing progress in terms of accuracy and speed of the NLP models. More and more experts who have been working in the NLP field for decades agree that NLP is becoming a ‘commodity’,” he goes on. “Frameworks like spaCy make it extremely simple for developers to leverage NLP models without having advanced data science knowledge. And Hugging Face’s open-source repository for NLP models is also a great step in this direction.

“But having these models run in production is still hard, and maybe even harder than before as these brand new models are very demanding in terms of resources.”

The models NLP Cloud offers are picked for performance — where “best” means “the best compromise between accuracy and speed”. Salinas also says the company pays mind to context, given NLP can be used for diverse use cases — hence proposing a number of models so as to be able to adapt to a given use.

“Initially we started with models dedicated to entities extraction only but most of our first customers also asked for other use cases too, so we started adding other models,” he notes, adding that they will continue to add more models from the two chosen frameworks — “in order to cover more use cases, and more languages”.

SpaCy and Hugging Face, meanwhile, were chosen as the source of the models offered via its API based on their track record as companies, the NLP libraries they offer and their focus on production-ready frameworks — a combination allowing it to offer a selection of models that are fast and accurate within the bounds of their respective trade-offs, according to Salinas.

“SpaCy is developed by Explosion, a solid company in Germany. This library has become one of the most used NLP libraries among companies who want to leverage NLP in production ‘for real’ (as opposed to academic research only). The reason is that it is very fast, has great accuracy in most scenarios, and is an ‘opinionated’ framework, which makes it very simple to use by non-data scientists (the tradeoff is that it gives fewer customization possibilities),” he says.

“Hugging Face is an even more solid company that recently raised $40M for a good reason: They created a disruptive NLP library called ‘transformers’ that improves a lot the accuracy of NLP models (the tradeoff is that it is very resource intensive though). It gives the opportunity to cover more use cases like sentiment analysis, classification, summarization… In addition to that, they created an open-source repository where it is easy to select the best model you need for your use case.”

While AI is advancing at a clip within certain tracks — such as NLP for English — there are still caveats and potential pitfalls attached to automating language processing and analysis, with the risk of getting stuff wrong, or worse. AI models trained on human-generated data have, for example, been shown to reflect the embedded biases and prejudices of the people who produced the underlying data.

Salinas agrees NLP can sometimes face “concerning bias issues”, such as racism and misogyny. But he expresses confidence in the models they’ve selected.

“Most of the time it seems [bias in NLP] is due to the underlying data used to train the models. It shows we should be more careful about the origin of this data,” he says. “In my opinion the best solution in order to mitigate this is that the community of NLP users should actively report something inappropriate when using a specific model so that this model can be paused and fixed.”

“Even if we doubt that such a bias exists in the models we’re proposing, we do encourage our users to report such problems to us so we can take measures,” he adds.



Aporia raises $5M for its AI observability platform

Machine learning (ML) models are only as good as the data you feed them. That’s true during training, but also once a model is put in production. In the real world, the data itself can change as new events occur and even small changes to how databases and APIs report and store data could have implications on how the models react. Since ML models will simply give you wrong predictions and not throw an error, it’s imperative that businesses monitor their data pipelines for these systems.

That’s where tools like Aporia come in. The Tel Aviv-based company today announced that it has raised a $5 million seed round for its monitoring platform for ML models. The investors are Vertex Ventures and TLV Partners.

Image Credits: Aporia

Aporia co-founder and CEO Liran Hason, after five years with the Israel Defense Forces, previously worked on the data science team at Adallom, a security company that was acquired by Microsoft in 2015. After the sale, he joined venture firm Vertex Ventures before starting Aporia in late 2019. But it was during his time at Adallom that he first encountered the problems Aporia is now trying to solve.

“I was responsible for the production architecture of the machine learning models,” he said of his time at the company. “So that’s actually where, for the first time, I got to experience the challenges of getting models to production and all the surprises that you get there.”

The idea behind Aporia, Hason explained, is to make it easier for enterprises to implement machine learning models and leverage the power of AI in a responsible manner.

“AI is a super powerful technology,” he said. “But unlike traditional software, it highly relies on the data. Another unique characteristic of AI, which is very interesting, is that when it fails, it fails silently. You get no exceptions, no errors. That becomes really, really tricky, especially when getting to production, because in training, the data scientists have full control of the data.”

But as Hason noted, a production system may depend on data from a third-party vendor and that vendor may one day change the data schema without telling anybody about it. At that point, a model — say for predicting whether a bank’s customer may default on a loan — can’t be trusted anymore, but it may take weeks or months before anybody notices.

Aporia constantly tracks the statistical behavior of the incoming data and when that drifts too far away from the training set, it will alert its users.
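Aporia hasn’t disclosed its exact statistics, but a common way to quantify “drifting too far from the training set” is the population stability index (PSI), which compares binned distributions of a feature at training time and in production. A self-contained sketch — the 0.2 alert threshold is a conventional rule of thumb, not Aporia’s:

```python
import math

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between two numeric samples.
    Bins are derived from the range of the expected (training) sample;
    values outside that range fall into the edge bins."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(bins - 1, max(0, int((x - lo) / width)))
            counts[idx] += 1
        return [c / len(sample) for c in counts]

    p, q = histogram(expected), histogram(actual)
    return sum((pi - qi) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def check_drift(train, live, threshold=0.2):
    """Return (score, alert) for a production batch vs. the training set."""
    score = psi(train, live)
    return score, score > threshold

# Training-time feature values vs. two production batches:
train = [x / 100 for x in range(1000)]          # uniform on [0, 10)
stable = [x / 100 for x in range(0, 1000, 2)]   # same distribution, fewer rows
shifted = [5 + x / 100 for x in range(1000)]    # mean shifted by +5

print(check_drift(train, stable))   # low PSI, no alert
print(check_drift(train, shifted))  # high PSI, alert fires
```

A production monitor would run a check like this per feature and per model output, which is where a configurable rules layer of the kind Aporia describes becomes useful.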

One thing that makes Aporia unique is that it gives its users an almost IFTTT- or Zapier-like graphical tool for setting up the logic of these monitors. It comes pre-configured with more than 50 combinations of monitors and provides full visibility into how they work behind the scenes. That, in turn, allows businesses to fine-tune the behavior of these monitors for their own specific business case and model.

Initially, the team thought it could build generic monitoring solutions. But it realized that this would not only be a very complex undertaking, but also that the data scientists who build the models already know exactly how those models should work and what they need from a monitoring solution.

“Monitoring production workloads is a well-established software engineering practice, and it’s past time for machine learning to be monitored at the same level,” said Rona Segev, founding partner at TLV Partners. “Aporia’s team has strong production-engineering experience, which makes their solution stand out as simple, secure and robust.”



5 machine learning essentials non-technical leaders need to understand

We’re living in a phenomenal moment for machine learning (ML), what Sonali Sambhus, head of developer and ML platform at Square, describes as “the democratization of ML.” It’s become the foundation of business and growth acceleration because of the incredible pace of change and development in this space.

But for engineering and team leaders without an ML background, this can also feel overwhelming and intimidating. I regularly meet smart, successful, highly competent and normally very confident leaders who struggle to navigate a constructive or effective conversation on ML — even though some of them lead teams that engineer it.

I’ve spent more than two decades in the ML space, including work at Apple to build the world’s largest online app and music store. As the senior director of engineering, anti-evil, at Reddit, I used ML to understand and combat the dark side of the web.

For this piece, I interviewed a select group of successful ML leaders including Sambhus; Lior Gavish, co-founder at Monte Carlo; and Yotam Hadass, VP of engineering at, for their insights. I’ve distilled our best practices and must-know components into five practical and easily applicable lessons.

1. ML recruiting strategy

Recruiting for ML comes with several challenges.

The first is that it can be difficult to differentiate machine learning roles from more traditional job profiles (such as data analysts, data engineers and data scientists) because there’s a heavy overlap between descriptions.

Secondly, finding the level of experience required can be challenging. Few people in the industry have substantial experience delivering production-grade ML (for instance, you’ll sometimes notice resumes that specify experience with ML models but then find their models are rule-based engines rather than real ML models).

When it comes to recruiting for ML, hire experts when you can, but also look into how training can help you meet your talent needs. Consider upskilling your current team of software engineers into data/ML engineers or hire promising candidates and provide them with an ML education.


Image Credits: Snehal Kundalkar

The other effective way to overcome these recruiting challenges is to define roles largely around:

  • Product: Look for candidates with a technical curiosity and a strong business/product sense. This framework is often more important than the ability to apply the most sophisticated models.
  • Data: Look for candidates that can help select models, design features, handle data modeling/vectorization and analyze results.
  • Platform/Infrastructure: Look for people who evaluate/integrate/build platforms to significantly accelerate the productivity of data and engineering teams; extract, transform, load (ETLs); warehouse infrastructures; and CI/CD frameworks for ML.


OctoML raises $28M Series B for its machine learning acceleration platform

OctoML, a Seattle-based startup that offers a machine learning acceleration platform built on top of the open source Apache TVM compiler framework, today announced that it has raised a $28 million Series B funding round led by Addition. Previous investors Madrona Venture Group and Amplify Partners also participated in this round, which brings the company’s total funding to $47 million. The company last raised in April 2020, when it announced its $15 million Series A round led by Amplify Partners.

The promise of OctoML is that developers can bring their models to its platform and the service will automatically optimize that model’s performance for any given cloud or edge device. The founding team created the TVM project, which underpins the platform.
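The core idea behind this kind of TVM-style optimization is a search: generate candidate implementations of the same computation, benchmark them on the target hardware, and keep the fastest. In miniature, with toy Python variants standing in for real compiler schedules, the loop looks something like this:

```python
import time

# Toy "autotuning" loop: two implementations of the same computation
# (a dot product), benchmarked on the current machine, fastest one kept.
# Real frameworks like TVM search over compiler schedules, not Python code.
def dot_loop(a, b):
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_zip(a, b):
    return sum(x * y for x, y in zip(a, b))

def benchmark(fn, a, b, repeats=50):
    start = time.perf_counter()
    for _ in range(repeats):
        fn(a, b)
    return time.perf_counter() - start

def autotune(candidates, a, b):
    """Return the candidate with the lowest measured runtime, after
    checking that all candidates agree on the result."""
    reference = candidates[0](a, b)
    assert all(abs(fn(a, b) - reference) < 1e-9 for fn in candidates)
    return min(candidates, key=lambda fn: benchmark(fn, a, b))

a = [float(i) for i in range(1_000)]
b = [float(i) for i in range(1_000)]
best = autotune([dot_loop, dot_zip], a, b)
print(best.__name__)
```

The hard part the Octomizer automates is doing this per operator, per target device, across a far larger search space than two hand-written variants.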

As Brazil-born OctoML co-founder and CEO Luis Ceze told me, since raising its Series A round, the company started onboarding some early adopters to its ‘Octomizer’ SaaS platform.

Image Credits: OctoML

“It’s still in early access, but we have close to 1,000 early access sign-ups on the waitlist,” Ceze said. “That was a pretty strong signal for us to end up taking this [funding]. The Series B was pre-emptive. We were planning on starting to raise money right about now. We had barely started spending our Series A money — we still had a lot of that left. But since we saw this growth and we had more paying customers than we anticipated, there were a lot of signals like, ‘hey, now we can accelerate the go-to-market machinery, build a customer success team and continue expanding the engineering team to build new features.’”

Ceze tells me that the team also saw strong growth signals in the overall community around the TVM project (with about 1,000 people attending its virtual conference last year). As for its customer base (and companies on its waitlist), Ceze says it represents a wide range of verticals that range from defense contractors to financial services and life science companies, automotive firms and startups in a variety of fields.

Recently, OctoML also launched support for the Apple M1 chip — and saw very good performance from that.

The company has also formed partnerships with industry heavyweights like Microsoft (which is also a customer), Qualcomm, AMD and Sony to build out the open-source components and optimize its service for an even wider range of models (and larger ones, too).

On the engineering side, Ceze tells me that the team is looking at not just optimizing and tuning models but also the training process. Training ML models can quickly become costly and any service that can speed up that process leads to direct savings for its users — which in turn makes OctoML an easier sell. The plan here, Ceze tells me, is to offer an end-to-end solution where people can optimize their ML training and the resulting models and then push their models out to their preferred platform. Right now, its users still have to take the artifact that the Octomizer creates and deploy that themselves, but deployment support is on OctoML’s roadmap.

“When we first met Luis and the OctoML team, we knew they were poised to transform the way ML teams deploy their machine learning models,” said Lee Fixel, founder of Addition. “They have the vision, the talent and the technology to drive ML transformation across every major enterprise. They launched Octomizer six months ago and it’s already becoming the go-to solution developers and data scientists use to maximize ML model performance. We look forward to supporting the company’s continued growth.”

#amd, #amplify, #amplify-partners, #artificial-intelligence, #brazil, #developer, #enterprise, #lee-fixel, #machine-learning, #madrona-venture-group, #microsoft, #ml, #octoml, #qualcomm, #recent-funding, #seattle, #series-a, #sony, #startups, #venture-capital

Adobe delivers native Photoshop for Apple Silicon Macs and a way to enlarge images without losing detail

Adobe has been moving quickly to update its imaging software to work natively on Apple’s new in-house processors for Macs, starting with the M1-based MacBook Pro and MacBook Air released late last year. After shipping native versions of Lightroom and Camera Raw, it’s now releasing an Apple Silicon-optimized version of Photoshop, which delivers big performance gains vs. the Intel version running on Apple’s Rosetta 2 software emulation layer.

How much better? Per internal testing, Adobe says that users should see improvements of up to 1.5x faster performance on a number of different features offered by Photoshop, vs. the same tasks being done on the emulated version. That’s just the start, however, since Adobe says it’s going to continue to coax additional performance improvements out of the software on Apple Silicon in collaboration with Apple over time. Some features are also still missing from the M1-friendly edition, including the ‘Invite to Edit Cloud Documents’ and ‘Preset Syncing’ options, but those will be ported over in future iterations as well.

In addition to the Apple Silicon version of Photoshop, Adobe is also releasing a new Super Resolution feature in the Camera Raw plugin (to be released for Lightroom later) that ships with the software. This is an image enlarging feature that uses machine learning trained on a massive image dataset to blow up pictures to larger sizes while still preserving details. Adobe has previously offered a super resolution option that combined multiple exposures to boost resolution, but this works from a single photo.

It’s the classic ‘Computer, enhance’ sci-fi feature made real, and it builds on work that Photoshop previously did to introduce its ‘Enhance details’ feature. If you’re not a strict Adobe loyalist, you might also be familiar with Pixelmator Pro’s ‘ML Super Resolution’ feature, which works in much the same way – albeit using a different ML model and training data set.


Adobe’s Super Resolution in action

The bottom line is that Adobe’s Super Resolution will output an image with twice the horizontal and twice the vertical resolution – meaning in total, it has 4x the number of pixels. It’ll do that while preserving detail and sharpness, which adds up to allowing you to make larger prints from images that previously wouldn’t stand up to that kind of enlargement. It’s also great for cropping in on photos in your collection to capture tighter shots of elements that previously would’ve been rendered blurry and disappointing as a result.
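
As a quick sanity check on that arithmetic (with made-up photo dimensions for illustration), doubling each axis quadruples the total pixel count:

```python
def super_resolution_size(width, height, factor=2):
    """Return output dimensions and the total-pixel multiplier for an upscale."""
    new_w, new_h = width * factor, height * factor
    multiplier = (new_w * new_h) / (width * height)  # equals factor ** 2
    return new_w, new_h, multiplier

# A 12-megapixel 4000 x 3000 photo becomes 8000 x 6000 — 48 megapixels.
print(super_resolution_size(4000, 3000))  # (8000, 6000, 4.0)
```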

This feature benefits greatly from GPUs that are optimized for machine learning workloads through frameworks like Core ML and Windows ML. That means that Apple’s M1 chip is a perfect fit, since it includes a dedicated ML processing region called the Neural Engine. Likewise, Nvidia’s RTX series of GPUs and their Tensor Cores are well-suited to the task.

Adobe also released some major updates for Photoshop for iPad, including version history for its Cloud Documents non-local storage. You can also now store versions of Cloud Documents offline and edit them locally on your device.

#adobe-creative-cloud, #adobe-lightroom, #adobe-photoshop, #apple, #apple-inc, #apps, #artificial-intelligence, #imaging, #intel, #m1, #machine-learning, #macintosh, #ml, #photoshop, #pixelmator, #software, #steve-jobs, #tc

Microsoft’s Azure Arc multi-cloud platform now supports machine learning workloads

With Azure Arc, Microsoft offers a service that allows its customers to run Azure in any Kubernetes environment, no matter where that container cluster is hosted. From Day One, Arc supported a wide range of use cases, but one feature that was sorely missing when it first launched was support for machine learning (ML). But one of the advantages of a tool like Arc is that it allows enterprises to run their workloads close to their data and today, that often means using that data to train ML models.

At its Ignite conference, Microsoft today announced that it is bringing exactly this capability to Azure Arc with the addition of Azure Machine Learning to the set of Arc-enabled data services.

“By extending machine learning capabilities to hybrid and multicloud environments, customers can run training models where the data lives while leveraging existing infrastructure investments. This reduces data movement and network latency, while meeting security and compliance requirements,” Azure GM Arpan Shah writes in today’s announcement.

This new capability is now available to Arc customers.

In addition to bringing this new machine learning capability to Arc, Microsoft also today announced that Azure Arc-enabled Kubernetes, which allows users to deploy standard Kubernetes configurations to their clusters anywhere, is now generally available.

Also new in this world of hybrid Azure services is support for Azure Kubernetes Service on Azure Stack HCI. That’s a mouthful, but Azure Stack HCI is Microsoft’s platform for running Azure on a set of standardized, hyperconverged hardware inside a customer’s datacenter. The idea pre-dates Azure Arc, but it remains a plausible alternative for enterprises that want to run Azure in their own data centers and has continued support from vendors like Dell, Lenovo, HPE, Fujitsu and DataON.

On the open-source side of Arc, Microsoft also today stressed that Arc is built to work with any Kubernetes distribution that is conformant to the Cloud Native Computing Foundation (CNCF) standard and that it has worked with Red Hat, Canonical, Rancher and now Nutanix to test and validate their Kubernetes implementations on Azure Arc.

#cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #computing, #dell, #fujitsu, #hpe, #kubernetes, #lenovo, #machine-learning, #microsoft, #microsoft-ignite-2021, #microsoft-azure, #ml, #nutanix, #red-hat, #redhat, #tc

NeuReality raises $8M for its novel AI inferencing platform

NeuReality, an Israeli AI hardware startup that is working on a novel approach to improving AI inferencing platforms by doing away with the current CPU-centric model, is coming out of stealth today and announcing an $8 million seed round. The group of investors includes Cardumen Capital, crowdfunding platform OurCrowd and Varana Capital. The company also today announced that Naveen Rao, the GM of Intel’s AI Products Group and former CEO of Nervana Systems (which Intel acquired), is joining the company’s board of directors.

The founding team, CEO Moshe Tanach, VP of operations Tzvika Shmueli and VP for very large-scale integration Yossi Kasus, has a background in AI but also networking, with Tanach spending time at Marvell and Intel, for example, Shmueli at Mellanox and Habana Labs and Kasus at Mellanox, too.

It’s the team’s networking and storage knowledge and seeing how that industry built its hardware that now informs how NeuReality is thinking about building its own AI platform. In an interview ahead of today’s announcement, Tanach wasn’t quite ready to delve into the details of NeuReality’s architecture, but the general idea here is to build a platform that will allow hyperscale clouds and other data center owners to offload their ML models to a far more performant architecture where the CPU doesn’t become a bottleneck.

“We kind of combined a lot of techniques that we brought from the storage and networking world,” Tanach explained. “Think about a traffic manager and what it does for Ethernet packets. And we applied it to AI. So we created a bottom-up approach that is built around the engine that you need. Where today, they’re using neural net processors — we have the next evolution of AI compute engines.”

As Tanach noted, the result of this should be a system that — in real-world use cases that include not just synthetic benchmarks of the accelerator but also the rest of the overall architecture — offers 15 times the performance per dollar for basic deep learning offloading and far more once you offload the entire pipeline to its platform.

NeuReality is still in its early days, and while the team has working prototypes now, based on a Xilinx FPGA, it expects to be able to offer its fully custom hardware solution early next year. As its customers, NeuReality is targeting the large cloud providers, but also data center and software solutions providers like WWT to help them provide specific vertical solutions for problems like fraud detection, as well as OEMs and ODMs.

Tanach tells me that the team’s work with Xilinx created the groundwork for its custom chip — though building that (likely on an advanced node) will cost money, so he’s already thinking about raising the next round of funding for that.

“We are already consuming huge amounts of AI in our day-to-day life and it will continue to grow exponentially over the next five years,” said Tanach. “In order to make AI accessible to every organization, we must build affordable infrastructure that will allow innovators to deploy AI-based applications that cure diseases, improve public safety and enhance education. NeuReality’s technology will support that growth while making the world smarter, cleaner and safer for everyone. The cost of the AI infrastructure and AIaaS will no longer be limiting factors.”


#artificial-intelligence, #cardumen-capital, #computing, #ethernet, #fpga, #funding, #fundings-exits, #habana-labs, #hardware-startup, #intel, #mellanox, #ml, #neureality, #nvidia, #ourcrowd, #recent-funding, #science-and-technology, #startups, #tc, #technology, #varana-capital, #xilinx

IPRally is building a knowledge graph-based search engine for patents

IPRally, a burgeoning startup out of Finland aiming to solve the patent search problem, has raised €2 million in seed funding.

The round was co-led by JOIN Capital and Spintop Ventures, with participation from existing pre-seed backer Icebreaker VC. It brings the total raised by the 2018-founded company to €2.35 million.

Co-founded by CEO Sakari Arvela, who has 15 years’ experience as a patent attorney, IPRally has built a knowledge graph to help machines better understand the technical details of patents and to enable humans to more efficiently trawl through existing patents. The premise is that a graph-based approach is more suited to patent search than simple keywords or freeform text search.

That’s because, argues Arvela, every patent publication can be distilled down to a simpler knowledge graph that “resonates” with the way IP professionals think and is infinitely more machine readable.

“We founded IPRally in April 2018, after one year of bootstrapping and proof-of-concepting with my co-founder and CTO Juho Kallio,” he tells me. “Before that, I had digested the graph approach myself for about two years and collected the courage to start the venture”.

Arvela says patent search is a hard problem to solve since it involves both deep understanding of technology and the capability to compare different technologies in detail.

“This is why this has been done almost entirely manually for as long as the patent system has existed. Even the most recent out-of-the-box machine learning models are way too inaccurate to solve the problem. This is why we have developed a specific ML model for the patent domain that reflects the way human professionals approach the search task and make the problem sensible for the computers too”.

That approach appears to be paying off, with IPRally already being used by customers such as Spotify and ABB, as well as intellectual property offices. Target customers are described as any corporation that actively protects its own R&D with patents and has to navigate the IPR landscape of competitors.

Meanwhile, IPRally is not without its own competition. Arvela cites industry giants like Clarivate and Questel that dominate the market with traditional keyword search engines.

In addition, there are a few other AI-based startups, like Amplified and IPScreener. “IPRally’s graph approach makes the searches much more accurate, allows detail-level computer analysis, and offers a non-black-box solution that is explainable for and controllable by the user,” he adds.

#europe, #finland, #fundings-exits, #iprally, #machine-learning, #ml, #patent, #patent-law, #patent-search, #startups, #tc

Tips for applying an intersectional framework to AI development

By now, most of us in tech know that the inherent bias we possess as humans creates an inherent bias in AI applications — applications that have become so sophisticated they’re able to shape the nature of our everyday lives and even influence our decision-making.

The more prevalent and powerful AI systems become, the sooner the industry must address questions like: What can we do to move away from using AI/ML models that demonstrate unfair bias?

How can we apply an intersectional framework to build AI for all people, knowing that different individuals are affected by and interact with AI in different ways based on the converging identities they hold?

Start with identifying the variety of voices that will interact with your model.

Intersectionality: What it means and why it matters

Before tackling the tough questions, it’s important to take a step back and define “intersectionality.” A term defined by Kimberlé Crenshaw, it’s a framework that empowers us to consider how someone’s distinct identities come together and shape the ways in which they experience and are perceived in the world.

This includes the resulting biases and privileges that are associated with each distinct identity. Many of us may hold more than one marginalized identity and, as a result, we’re familiar with the compounding effect that occurs when these identities are layered on top of one another.

At The Trevor Project, the world’s largest suicide prevention and crisis intervention organization for LGBTQ youth, our chief mission is to provide support to each and every LGBTQ young person who needs it, and we know that those who are transgender and nonbinary and/or Black, Indigenous, and people of color face unique stressors and challenges.

So, when our tech team set out to develop AI to serve and exist within this diverse community — namely to better assess suicide risk and deliver a consistently high quality of care — we had to be conscious of avoiding outcomes that would reinforce existing barriers to mental health resources like a lack of cultural competency or unfair biases like assuming someone’s gender based on the contact information presented.

Though our organization serves a particularly diverse population, underlying biases can exist in any context and negatively impact any group of people. As a result, all tech teams can and should aspire to build fair, intersectional AI models, because intersectionality is the key to fostering inclusive communities and building tools that serve people from all backgrounds more effectively.

Doing so starts with identifying the variety of voices that will interact with your model, in addition to the groups for which these various identities overlap. Defining the opportunity you’re solving is the first step because once you understand who is impacted by the problem, you can identify a solution. Next, map the end-to-end experience journey to learn the points where these people interact with the model. From there, there are strategies every organization, startup and enterprise can apply to weave intersectionality into every phase of AI development — from training to evaluation to feedback.

Datasets and training

The quality of a model’s output relies on the data on which it’s trained. Datasets can contain inherent bias due to the nature of their collection, measurement and annotation — all of which are rooted in human decision-making. For example, a 2019 study found that a healthcare risk-prediction algorithm demonstrated racial bias because it relied on a faulty dataset for determining need. As a result, eligible Black patients received lower risk scores in comparison to white patients, ultimately making them less likely to be selected for high-risk care management.

Fair systems are built by training a model on datasets that reflect the people who will be interacting with the model. It also means recognizing where there are gaps in your data for people who may be underserved. However, there’s a larger conversation to be had about the overall lack of data representing marginalized people — it’s a systemic problem that must be addressed as such, because sparsity of data can obscure both whether systems are fair and whether the needs of underrepresented groups are being met.

To start analyzing this for your organization, consider the size and source of your data to identify what biases, skews or mistakes are built-in and how the data can be improved going forward.

The problem of bias in datasets can also be addressed by amplifying or boosting specific intersectional data inputs, as your organization defines it. Doing this early on will inform your model’s training formula and help your system stay as objective as possible — otherwise, your training formula may be unintentionally optimized to produce irrelevant results.

At The Trevor Project, we may need to amplify signals from demographics that we know disproportionately find it hard to access mental health services, or for demographics that have small sample sizes of data compared to other groups. Without this crucial step, our model could produce outcomes irrelevant to our users.
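
One deliberately naive way to amplify signals from a small group is to oversample its records until group sizes match before training. The `group` field and counts below are hypothetical, and real-world rebalancing needs more care (duplicated rows can cause overfitting), but the mechanics look like this:

```python
import random
from collections import Counter

def oversample_to_balance(records, group_key, seed=0):
    """Duplicate records from smaller groups until every group matches the largest."""
    rng = random.Random(seed)
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(recs) for recs in by_group.values())
    balanced = []
    for recs in by_group.values():
        balanced.extend(recs)
        # Sample extra copies (with replacement) to close the gap to the largest group.
        balanced.extend(rng.choices(recs, k=target - len(recs)))
    return balanced

data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
counts = Counter(rec["group"] for rec in oversample_to_balance(data, "group"))
print(counts)  # Counter({'A': 90, 'B': 90})
```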

Evaluation

Model evaluation is an ongoing process that helps organizations respond to ever-changing environments. Evaluating fairness began with looking at a single dimension — like race or gender or ethnicity. The next step for the tech industry is figuring out how to best compare intersectional groupings to evaluate fairness across all identities.

To measure fairness, try defining intersectional groups that could be at a disadvantage and the ones that may have an advantage, and then examine whether certain metrics (for example, false-negative rates) vary among them. What do these inconsistencies tell you? How else can you further examine which groups are underrepresented in a system and why? These are the kinds of questions to ask at this phase of development.
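
A minimal sketch of that kind of check: compute the false-negative rate (the share of true positives the model missed) separately for each group and compare. The group names and numbers here are invented for illustration:

```python
def false_negative_rates(examples):
    """examples: iterable of (group, actual_positive, predicted_positive) tuples.
    Returns the false-negative rate per group: missed positives / actual positives."""
    positives, misses = {}, {}
    for group, actual, predicted in examples:
        if actual:
            positives[group] = positives.get(group, 0) + 1
            if not predicted:
                misses[group] = misses.get(group, 0) + 1
    return {g: misses.get(g, 0) / n for g, n in positives.items()}

# Hypothetical outcomes: the model misses 1 of 10 positives in group_x
# but 4 of 10 in group_y — an inconsistency worth investigating.
examples = (
    [("group_x", True, True)] * 9 + [("group_x", True, False)] * 1 +
    [("group_y", True, True)] * 6 + [("group_y", True, False)] * 4
)
print(false_negative_rates(examples))  # {'group_x': 0.1, 'group_y': 0.4}
```

The same pattern extends to intersectional groupings by making the group key a tuple of identity attributes rather than a single label.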

Developing and monitoring a model based on the demographics it serves from the start is the best way for organizations to achieve fairness and alleviate unfair bias. Based on the evaluation outcome, a next step might be to purposefully overserve statistically underrepresented groups to facilitate training a model that minimizes unfair bias. Since algorithms can lack impartiality due to societal conditions, designing for fairness from the outset helps ensure equal treatment of all groups of individuals.

Feedback and collaboration

Teams should also have a diverse group of people involved in developing and reviewing AI products — people who are diverse not only in identities, but also in skillset, exposure to the product, years of experience and more. Consult stakeholders and those who are impacted by the system for identifying problems and biases.

Lean on engineers when brainstorming solutions. For defining intersectional groupings, at The Trevor Project, we worked across the teams closest to our crisis-intervention programs and the people using them — like Research, Crisis Services and Technology. And reach back out to stakeholders and people interacting with the system to collect feedback upon launch.

Ultimately, there isn’t a “one-size-fits-all” approach to building intersectional AI. At The Trevor Project, our team has outlined a methodology based on what we do, what we know today and the specific communities we serve. This is not a static approach and we remain open to evolving as we learn more. While other organizations may take a different approach to build intersectional AI, we all have a moral responsibility to construct fairer AI systems, because AI has the power to highlight — and worse, magnify — the unfair biases that exist in society.

Depending on the use case and community in which an AI system exists, the magnification of certain biases can result in detrimental outcomes for groups of people who may already face marginalization. At the same time, AI also has the ability to improve quality of life for all people when developed through an intersectional framework. At The Trevor Project, we strongly encourage tech teams, domain experts and decision-makers to think deeply about codifying a set of guiding principles to initiate industry-wide change — and to ensure future AI models reflect the communities they serve.

#artificial-intelligence, #bias, #column, #cybernetics, #developer, #diversity, #ml, #risk

AI’s next act: Genius chips, programmable silicon and the future of computing

If only 10% of the world had enough power to run a cell phone, would mobile have changed the world in the way that it did?

It’s often said the future is already here — just not evenly distributed. That’s especially true in the world of artificial intelligence (AI) and machine learning (ML). Many powerful AI/ML applications already exist in the wild, but many also require enormous computational power — often at scales only available to the largest companies in existence or entire nation-states. Compute-heavy technologies are also hitting another roadblock: Moore’s law is plateauing and the processing capacity of legacy chip architectures is running up against the limits of physics.

If major breakthroughs in silicon architecture efficiency don’t happen, AI will suffer an unevenly distributed future and huge swaths of the population will miss out on the improvements AI could make to their lives.

The next evolutionary stage of technology depends on completing the transformation that will make silicon architecture as flexible, efficient and ultimately programmable as the software we know today. If we cannot take major steps to provide easy access to ML, we’ll lose immeasurable innovation by having only a few companies in control of all the technology that matters. So what needs to change, how fast is it changing and what will that mean for the future of technology?

An inevitable democratization of AI: A boon for startups and smaller businesses

If you work at one of the industrial giants (including those “outside” of tech), congratulations — but many of the problems with current AI/ML computing capabilities I present here may not seem relevant.

For those of you working with lesser caches of resources, whether financially or talent-wise, view the following predictions as the herald of a new era in which organizations of all sizes and balance sheets have access to the same tiers of powerful AI and ML-powered software. Just like cell phones democratized internet access, we see a movement in the industry today to put AI in the hands of more and more people.

Of course, this democratization must be fueled by significant technological advancement that actually makes AI more accessible — good intentions are not enough, regardless of the good work done by companies like Intel and Google. Here are a few technological changes we’ll see that will make that possible.

From dumb chip to smart chip to “genius” chip

For a long time, raw performance was the metric of importance for processors. Their design reflected this. As software rose in ubiquity, processors needed to be smarter: more efficient and more commoditized, so specialized processors like GPUs arose — “smart” chips, if you will.

Those purpose-built graphics processors, by happy coincidence, proved to be more useful than CPUs for deep learning functions, and thus the GPU became one of the key players in modern AI and ML. Knowing this history, the next evolutionary step becomes obvious: If we can purpose-build hardware for graphics applications, why not for specific deep learning, AI and ML?

There’s also a unique confluence of factors that makes the next few years pivotal for chipmaking and tech in general. First and second, we’re seeing a plateauing of Moore’s law (which predicts a doubling of transistors on integrated circuits every two years) and the end of Dennard scaling (which says performance-per-watt doubles at about the same rate). Together, that used to mean that for any new generation of technology, chips doubled in density and increased in processing power while drawing the same amount of power. But we’ve now reached the scale of nanometers, meaning we’re up against the limitations of physics.

Thirdly, compounding the physical challenge, the computing demands of next-gen AI and ML apps are beyond what we could have imagined. Training neural networks to within even a fraction of human-level image recognition accuracy, for example, is surprisingly hard and takes huge amounts of processing power. The most intense applications of machine learning are things like natural language processing (NLP), recommender systems that deal with billions or trillions of possibilities, or super high-resolution computer vision, as is used in the medical and astronomical fields.

Even if we could have predicted we’d have to create and train algorithmic brains to learn how to speak human language or identify objects in deep space, we still could not have guessed just how much training — and therefore processing power — they might need to become truly useful and “intelligent” models.

Of course, many organizations are performing these sorts of complex ML applications. But these sorts of companies are usually business or scientific leaders with access to huge amounts of raw computing power and the talent to understand and deploy them. All but the largest enterprises are locked out of the upper tiers of ML and AI.

That’s why the next generation of smart chips — call them “genius” chips — will be about efficiency and specialization. Chip architecture will be made to optimize for the software running on it and run altogether more efficiently. When using high-powered AI doesn’t take a whole server farm and becomes accessible to a much larger percentage of the industry, the ideal conditions for widespread disruption and innovation become real. Democratizing expensive, resource-intensive AI goes hand in hand with these soon-to-be-seen advances in chip architecture and software-centered hardware design.

A renewed focus on future-proofing innovation

The nature of AI creates a special challenge for the creators and users of AI hardware. The amount of change itself is huge: We’re living through the leap from humans writing code to software 2.0 — where engineers can train machine learning programs to eventually “run themselves.” The rate of change is also unprecedented: ML models can be obsolete in months or even weeks, and the very methods through which training happens are in constant evolution.

But creating new AI hardware products still requires designing, prototyping, calibrating, troubleshooting, production and distribution. It can take two years from concept to product-in-hand. Software has, of course, always outpaced hardware development, but now the differential in velocity is irreconcilable. We need to be more clever about the hardware we create for a future we increasingly cannot predict.

In fact, the generational way we think about technology is beginning to break down. When it comes to ML and AI, hardware must be built with the expectation that much of what we know today will be obsolete by the time we have the finished product. Flexibility and customization will be the key attributes of successful hardware in the age of AI, and I believe this will be a further win for the entire market.

Instead of sinking resources into the model du jour or a specific algorithm, companies looking to take advantage of these technologies will have more options for processing stacks that can evolve and change as the demands of ML and AI models evolve and change.

This will allow companies of all sizes and levels of AI savvy to stay creative and competitive for longer and prevent the stagnation that can occur when software is limited by hardware — all leading to more interesting and unexpected AI applications for more organizations.

Widespread adoption of real AI and ML technologies

I’ll be the first to admit to tech’s fascination with shiny objects. There was a day when big data was the solution to everything and IoT was to be the world’s savior. AI has been through the hype cycle in the same way (arguably multiple times). Today, you’d be hard pressed to find a tech company that doesn’t purport to use AI in some way, but chances are they are doing something very rudimentary that’s more akin to advanced analytics.

It’s my firm belief that the AI revolution we’ve all been so excited about simply has not happened yet. In the next two to three years however, as the hardware that enables “real” AI power makes its way into more and more hands, it will happen. As far as predicting the change and disruption that will come from widespread access to the upper echelons of powerful ML and AI — there are few ways to make confident predictions, but that is exactly the point!

Much like cellphones put so much power in the hands of regular people everywhere, with no barriers to entry either technical or financial (for the most part), so will the coming wave of software-defined hardware that is flexible, customizable and future-proof. The possibilities are truly endless, and it will mark an important turning point in technology. The ripple effects of AI democratization and commoditization will not stop with just technology companies, and so even more fields stand to be blown open as advanced, high-powered AI becomes accessible and affordable.

Much of the hype around AI — all the disruption it was supposed to bring and the leaps it was supposed to fuel — will begin in earnest in the next few years. The technology that will power it is being built as we speak or soon to be in the hands of the many people in the many industries who will use their newfound access as a springboard to some truly amazing advances. We’re especially excited to be a part of this future, and look forward to all the progress it will bring.

#artificial-general-intelligence, #artificial-intelligence, #column, #hardware, #machine-learning, #ml, #natural-language-processing, #neural-networks, #science

AWS launches Trainium, its new custom ML training chip

At its annual re:Invent developer conference, AWS today announced the launch of AWS Trainium, the company’s next-gen custom chip dedicated to training machine learning models. The company promises that it can offer higher performance than any of its competitors in the cloud, with support for TensorFlow, PyTorch and MXNet.

It will be available as EC2 instances and inside Amazon SageMaker, the company’s machine learning platform.

New instances based on these custom chips will launch next year.

The main arguments for these custom chips are speed and cost. AWS promises 30% higher throughput and 45% lower cost-per-inference compared to the standard AWS GPU instances.

In addition, AWS is partnering with Intel to launch Habana Gaudi-based EC2 instances for machine learning training. Coming next year, these instances promise to offer up to 40% better price/performance compared to the current set of GPU-based EC2 instances for machine learning. These chips will support TensorFlow and PyTorch.

These new chips will make their debut in the AWS cloud in the first half of 2021.

Both of these new offerings complement AWS Inferentia, which the company launched at last year’s re:Invent. Inferentia is the inferencing counterpart to these training chips and is likewise a custom design.

Trainium, it’s worth noting, will use the same SDK as Inferentia.

“While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” the AWS team writes. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training