A new formula may help black patients’ access to kidney care

For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.

On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.

The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.
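For reference, the standard equation at issue is the 2009 CKD-EPI creatinine formula; the coefficients below come from the published equation rather than from this article, so treat them as background:

```latex
% 2009 CKD-EPI creatinine equation (published form; shown as background)
\mathrm{eGFR} = 141
\times \min\!\left(\frac{S_{cr}}{\kappa},\,1\right)^{\alpha}
\times \max\!\left(\frac{S_{cr}}{\kappa},\,1\right)^{-1.209}
\times 0.993^{\mathrm{Age}}
\times 1.018\ [\text{if female}]
\times 1.159\ [\text{if Black}]
```

Here $S_{cr}$ is serum creatinine in mg/dL, $\kappa$ is 0.7 for women and 0.9 for men, and $\alpha$ is −0.329 for women and −0.411 for men. The fixed 1.159 multiplier is the race correction that made Black patients' estimated kidney function appear higher, and it is what the task force's recommended replacement formula drops.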

#ai, #algorithms, #bias, #dialysis, #health, #kidney-care, #medicine, #science, #transplants

Longtime VC, and happy Miami resident, David Blumberg has raised a new $225 million fund

Blumberg Capital, founded in 1991 by investor David Blumberg, has just closed its fifth early-stage venture fund with $225 million, a vehicle that Blumberg says was oversubscribed — he planned to raise $200 million — and that has already been used to invest in 16 startups around the world (the firm has small offices in San Francisco, New York, Tel Aviv, and Miami, where Blumberg moved his family last year).

We caught up with him earlier this week to talk shop and he sounded pretty ecstatic about the current market, which has evidently been good for returns, with Blumberg Capital’s biggest hits tied to Nutanix (it claims a 68x return), DoubleVerify (a 98x return at IPO in April, the firm says), Katapult (which went public via SPAC in July), Addepar (currently valued above $2 billion) and Braze (it submitted its S-1 in June).

We also talked a bit about his new life in Florida, which he was quick to note is “not a clone of Silicon Valley.” Lastly, he told us why he thinks we’re in a “golden era of applying intelligence to every business,” from mining to the business of athletic performance.

More from our conversation, edited lightly for length and clarity, follows:

TC: What are you funding right now?

DB: Our last 30 to 40 deals have basically been about big data that’s been analyzed by artificial intelligence of some sort, then riding in a better wrapper of software process automation on rails of internet and mobility. Okay, that’s a lot of buzzwords.

TC: Yes.

DB: What I’m saying is that this ability to take raw information data that’s either been sitting around and not analyzed, or from new sources of data like sensors or social media or many other places, then analyze it and take it to the problem of all these businesses that have been there forever, is beginning to make incremental improvements that may sound small [but add up].

TC: What’s a very recent example?

DB: One of our [unannounced] companies applies AI to mining — lithium mining and gold and copper — so miners don’t waste their time before finding the richest vein of deposit. We partner with mining owners and we bring extra data that they don’t have access to — some is proprietary, some is public — and because we’re experts at the AI modeling of it, we can apply it to their geography and geology, and as part of the business model, we take part of the mine in return.

TC: So your fund now owns not just equity but part of a mine?

DB: This is evidently done a lot in what’s called E&P, exploration and production in the oil and gas industry, and we’re just following a time-tested model, where some of the service providers put in value and take out a share. So as we see it, it aligns our interests and the better we do for them, the better they do.

TC: This fund is around the same size as your fourth fund, which closed with $207 million in 2017. How do you think about check sizes in this market?

DB: We write checks of $1 million to $6 million generally. We could go down a little bit for something in a seed round where we can’t get more of a slice, but we like to have large ownership up front. We found that to have a fund return at least 3x — and our funds seem to be returning much more than that — [we need to be math-minded about things].

We have 36 companies in our portfolio typically, and 20% of them fail, 20% of them are our superstars, and 60% are kind of medium. Of those superstars, six of them have to return $100 million each in a $200 million fund to make it a $600 million return, and to get six companies to [produce a] $100 million [for us] they have to reach a billion dollars in value, where we own 10% at the end.
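The back-of-the-envelope math in that answer can be sketched as follows (illustrative figures taken from the quote, not an actual Blumberg Capital model):

```python
# Illustrative sketch of the fund-return arithmetic Blumberg describes.
# All numbers come from the quote above, not from the firm's actual model.
fund_size = 200_000_000                       # $200M fund
target_multiple = 3                           # aim to return at least 3x
target_return = fund_size * target_multiple   # $600M back to investors

superstars = 6                                # winners expected to carry the fund
per_company_return = target_return / superstars   # $100M needed from each winner

final_ownership = 0.10                        # ~10% stake left after dilution
required_exit_value = per_company_return / final_ownership

print(required_exit_value)                    # → 1000000000.0, i.e. a $1B exit
```

This is why the ownership target matters so much: at 10% final ownership, each superstar must reach a billion-dollar valuation for the portfolio math to close.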

TC: You’re buying 10% and maintaining your pro rata, or is this after being diluted over numerous rounds?

DB: It’s more like we want 15% to 20% of a company and it gets [diluted] down to 10%. And it’s been working. Some of our funds are way above that number.

TC: Are all four of your earlier funds in the black?

DB: Yes. I love to say this: We have never, ever lost money for our fund investors.

TC: You were among a handful of VCs who were cited quite a lot last year for hightailing it out of the Bay Area for Miami. One year into the move, how is it going?

DB: It is not a clone of Silicon Valley. They are different and add value each in their own way. But Florida is a great place for our family to be and I find for our business, it’s going to be great as well. I can be on the phone to Israel and New York without any time zone-related problems. Some of our companies are moving here, including one from Israel recently, one from San Francisco, and one from Texas. A lot of our LPs are moving here or live here already. We can also get up and down to South America for distribution deals more easily.

If we need to get to California or New York, airplanes still work, too, so it hasn’t been a negative at all. I’m going to a JPMorgan event tonight for a bunch of tech founders where there should be 150 people.

TC: That sounds great, though how did you feel about summer in Miami?

DB: We were in France.

Pictured above, from left to right: Firm founder David Blumberg, managing director Yodfat Harel Buchris, COO Steve Gillan, and managing director Bruce Taragin.

#addepar, #ai, #artificial-intelligence, #blumberg-capital, #david-blumberg, #doubleverify, #israel, #miami, #nutanix, #tc, #venture-capital, #yotpo

The responsibilities of AI-first investors

Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies — such as image labeling — receive large (undisclosed) portions of their revenue from the defense industry.

Investors in AI-first technology companies that aren’t even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, prosecute their duties.

Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.

However, there are also some less positive examples — technology made by Israeli cyber-intelligence firm NSO was used to hack 37 smartphones belonging to journalists, human-rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of over 50,000 numbers based in countries that surveil their citizens and are known to have hired the services of the Israeli firm.

Investors in these companies may now be asked challenging questions by other founders, limited partners and governments about whether the technology is too powerful, enables too much or is applied too broadly. These are questions of degree, but are sometimes not even asked upon making an investment.

I’ve had the privilege of talking to a lot of people with lots of perspectives — CEOs of big companies, founders of (currently!) small companies and politicians — since publishing “The AI-First Company” and investing in such firms for the better part of a decade. I’ve been getting one important question over and over again: How do investors ensure that the startups in which they invest responsibly apply AI?

Let’s be frank: It’s easy for startup investors to hand-wave away such an important question by saying something like, “It’s so hard to tell when we invest.” Startups are nascent forms of something to come. However, AI-first startups are working with something powerful from day one: Tools that allow leverage far beyond our physical, intellectual and temporal reach.

AI not only gives people the ability to put their hands around heavier objects (robots) or get their heads around more data (analytics), it also gives them the ability to bend their minds around time (predictions). When people can make predictions and learn as they play out, they can learn fast. When people can learn fast, they can act fast.

Like any tool, one can use these tools for good or for bad. You can use a rock to build a house or you can throw it at someone. You can use gunpowder for beautiful fireworks or firing bullets.

Similarly, AI-based computer vision models can be used to figure out the moves of a dance group or a terrorist group. AI-powered drones can aim a camera at us while going off ski jumps, but they can also aim a gun at us.

This article covers the basics, metrics and politics of responsibly investing in AI-first companies.

The basics

Investors in and board members of AI-first companies must take at least partial responsibility for the decisions of the companies in which they invest.

Investors influence founders, whether they intend to or not. Founders constantly ask investors about what products to build, which customers to approach and which deals to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed because they may be a valuable source of capital.

#ai, #artificial-general-intelligence, #artificial-intelligence, #column, #cybernetics, #ec-column, #machine-learning, #nso, #palantir, #private-equity, #startup-company, #startups, #venture-capital

News aggregator SmartNews raises $230 million, valuing its business at $2 billion

SmartNews, a Tokyo-headquartered news aggregation website and app that’s grown in popularity despite hefty competition from built-in aggregators like Apple News, today announced it has closed on $230 million in Series F funding. The round brings SmartNews’ total raise to date to over $400 million and values the business at $2 billion — or as the company touts in its press release, a “double unicorn.” (Ha!)

The funding included new U.S. investors Princeville Capital and Woodline Partners, as well as JIC Venture Growth Investments, Green Co-Invest Investment, and Yamauchi-No.10 Family Office in Japan. Existing investors participating in this round included ACA Investments and SMBC Venture Capital.

Founded in 2012 in Japan, the company launched in the U.S. in 2014 and expanded its local news footprint early last year. While the app’s content team includes former journalists, machine learning picks which articles are shown to readers in order to personalize their experience. One of the app’s key differentiators, however, is how it works to pop users’ “filter bubbles” through its “News From All Sides” feature, which lets users access news from across a range of political perspectives.

It has also developed new products, like its Covid-19 vaccine dashboard and U.S. election dashboard, that provide critical information at a glance. With the additional funds, the company says it plans to develop more features for its U.S. audience — one of its largest, in addition to Japan — that will focus on consumer health and safety. These will roll out in the next few months and will include features for tracking wildfires and crime and safety reports. It also recently launched a hurricane tracker.

The aggregator’s business model is largely focused on advertising, as the company has said before that 80-85% of Americans aren’t paying to subscribe to news. But SmartNews’ belief is that these news consumers still have a right to access quality information.

In total, SmartNews has relationships with over 3,000 global publishing partners whose content is available through its service on the web and mobile devices.

To generate revenue, the company sells inline ads and video ads, sharing the revenue with publishers. Over 75% of its publishing partners also take advantage of its “SmartView” feature. This is the app’s quick-reading mode, an alternative to something like Google AMP: users can quickly load an article to read, even if they’re offline. The company promises publishers that these mobile-friendly stories, which are marked with a lightning bolt icon in the app, deliver higher engagement — and its algorithm rewards that type of content, bringing them more readers. SmartView partners include well-known brands like USA Today, ABC, HuffPost, and others. Currently, over 70% of all SmartNews’ pageviews come through SmartView.

SmartNews’ app has proven to be very sticky in terms of attracting and keeping users’ attention. The company tells us, citing App Annie data from July 2021, that its average time spent per user per month on U.S. mobile devices is higher than that of Google News and Apple News combined.

Image Credits: App Annie data provided by SmartNews

The company declined to share its monthly active users (MAUs), but had said in 2019 it had grown to 20 million in the U.S. and Japan. Today, it says its U.S. MAUs doubled over the last year.

According to data provided to us by Apptopia, the SmartNews app has seen around 85 million downloads since its October 2014 launch, and 14 million of those took place in the past 365 days. Japan is the largest market for installs, accounting for 59% of lifetime downloads, the firm noted.

“This latest round of funding further affirms the strength of our mission, and fuels our drive to expand our presence and launch features that specifically appeal to users and publishers in the United States,” said SmartNews co-founder and CEO Ken Suzuki. “Our investors both in the U.S. and globally acknowledge the tremendous growth potential and value of SmartNews’s efforts to democratize access to information and create an ecosystem that benefits consumers, publishers, and advertisers,” he added.

The company says the new funds will be used to invest in further U.S. growth and expanding the company’s team. Since its last fundraise in 2019, when it became a unicorn, the company has more than doubled its headcount to approximately 500 people globally. It now plans to double its headcount of 100 in the U.S., with additions across engineering, product, and leadership roles.

The Wall Street Journal reports SmartNews is exploring an IPO, but the company declined to comment on this.

The SmartNews app is available on iOS and Android across more than 150 countries worldwide.

#aca-investments, #aggregation, #ai, #android, #apple-news, #apps, #funding, #google, #google-news, #japan, #machine-learning, #media, #mobile, #mobile-applications, #mobile-devices, #mobile-software, #new-aggregator, #news, #news-aggregation, #news-reading, #recent-funding, #smartnews, #software, #startups, #tokyo, #united-states

NVIDIA’s latest tech makes AI voices more expressive and realistic

The voices on Amazon’s Alexa, Google Assistant and other AI assistants are far ahead of old-school GPS devices, but they still lack the rhythms, intonation and other qualities that make speech sound, well, human. NVIDIA has unveiled new research and tools that can capture those natural speech qualities by letting you train the AI system with your own voice, the company announced at the Interspeech 2021 conference.

To improve its AI voice synthesis, NVIDIA’s text-to-speech research team developed a model called RAD-TTS, a winning entry at an NAB broadcast convention competition to develop the most realistic avatar. The system allows an individual to train a text-to-speech model with their own voice, including the pacing, tonality, timbre and more.

Another RAD-TTS feature is voice conversion, which lets a user deliver one speaker’s words using another person’s voice. That interface gives fine, frame-level control over a synthesized voice’s pitch, duration and energy.

Using this technology, NVIDIA’s researchers created more conversational-sounding voice narration for its own I Am AI video series using synthesized rather than human voices. The aim was to get the narration to match the tone and style of the videos, something that hasn’t been done well in many AI-narrated videos to date. The results are still a bit robotic, but better than any AI narration I’ve ever heard.

“With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor — tweaking the synthesized speech to emphasize specific words, and modifying the pacing of the narration to better express the video’s tone,” NVIDIA wrote.

NVIDIA is distributing some of this research — optimized to run efficiently on NVIDIA GPUs, of course — to anyone who wants to try it via open source through the NVIDIA NeMo Python toolkit for GPU-accelerated conversational AI, available on the company’s NGC hub of containers and other software.

“Several of the models are trained with tens of thousands of hours of audio data on NVIDIA DGX systems. Developers can fine tune any model for their use cases, speeding up training using mixed-precision computing on NVIDIA Tensor Core GPUs,” the company wrote.

Editor’s note: This post originally appeared on Engadget.

#ai, #artificial-intelligence, #column, #nvidia, #speech-synthesis, #tc, #tceng, #voice-assistant

Peak raises $75M for a platform that helps non-tech companies build AI applications

As artificial intelligence continues to weave its way into more enterprise applications, a startup that has built a platform to help businesses, especially non-tech organizations, build more customized AI decision making tools for themselves has picked up some significant growth funding. Peak AI, a startup out of Manchester, England, that has built a “decision intelligence” platform, has raised $75 million, money that it will be using to continue building out its platform as well as to expand into new markets, and hire some 200 new people in the coming quarters.

The Series C is bringing a very big name investor on board. It is being led by SoftBank Vision Fund 2, with previous backers Oxx, MMC Ventures, Praetura Ventures, and Arete also participating. That group participated in Peak’s Series B of $21 million, which only closed in February of this year. The company has now raised $118 million; it is not disclosing its valuation.

(This latest funding round was rumored last week, although it was not confirmed at the time and the total amount was not accurate.)

Richard Potter, Peak’s CEO, said the rapid follow-on in funding was based on inbound interest, in part because of how the company has been doing.

Peak’s so-called Decision Intelligence platform is used by retailers, brands, manufacturers and others to help monitor stock levels, build personalized customer experiences, as well as other processes that can stand to have some degree of automation to work more efficiently, but also require sophistication to be able to measure different factors against each other to provide more intelligent insights. Its current customer list includes the likes of Nike, Pepsico, KFC, Molson Coors, Marshalls, Asos, and Speedy, and in the last 12 months revenues have more than doubled.

The opportunity that Peak is addressing goes a little like this: AI has become a cornerstone of many of the most advanced IT applications and business processes of our time, but if you are an organization — and specifically one not built around technology — your access to AI and how you might use it will come by way of applications built by others, not necessarily tailored to you, and the costs of building more tailored solutions can often be prohibitively high. Peak claims that those using its tools have seen revenues on average rise 5%; return on ad spend double; supply chain costs reduce by 5%; and inventory holdings (a big cost for companies) reduce by 12%.

Peak’s platform, I should point out, is not exactly a “no-code” approach to solving that problem — not yet at least: it’s aimed at data scientists and engineers at those organizations so that they can easily identify different processes in their operations where they might benefit from AI tools, and to build those out with relatively little heavy lifting.

Different market factors have also played a role. Covid-19, for example, and the boost we have seen both in “digital transformation” across businesses and in making e-commerce processes more efficient — to cater to rising consumer demand and more strained supply chains — have left businesses more open to, and keen to invest in, tools that improve their automation intelligently.

This, combined with Peak AI’s growing revenues, is part of what interested SoftBank. The investor has been long on AI for a while, but it has been building out a section of its investment portfolio to provide strategic services to the kinds of businesses that it invests in. Those include e-commerce and other consumer-facing businesses, which make up one of the main segments of Peak’s customer base.

“In Peak we have a partner with a shared vision that the future enterprise will run on a centralized AI software platform capable of optimizing entire value chains,” Max Ohrstrand, senior investor for SoftBank Investment Advisers, said in a statement. “To realize this a new breed of platform is needed and we’re hugely impressed with what Richard and the excellent team have built at Peak. We’re delighted to be supporting them on their way to becoming the category-defining, global leader in Decision Intelligence.”

Longer term, it will be interesting to see how and if Peak evolves to extend its platform to a wider set of users at the organizations that are already its customers.

Potter said he believes that “those with technical predispositions” will be the most likely users of its products in the near and medium term. You might assume that would cut out, for example, marketing managers, although the general trend in a lot of software tools has been precisely to build versions of the same tools used by data scientists for less technical people, so they can engage in the process of building what it is that they want to use. “I do think it’s important to democratize the ability to stream data pipelines, and to be able to optimize those to work in applications,” he added.

#ai, #articles, #artificial-intelligence, #automation, #business-process-management, #ceo, #e-commerce, #enterprise, #europe, #funding, #kfc, #manchester, #mmc-ventures, #nike, #partner, #peak, #peak-ai, #pepsico, #science-and-technology, #series-b, #softbank-group, #softbank-vision-fund, #software-platform, #tc, #united-kingdom, #vodafone

Otter.ai expands automatic transcription assistant to Microsoft Teams, Google Meet and Cisco Webex

AI-powered voice transcription service Otter.ai is expanding its Otter Assistant feature for Microsoft Teams, Google Meet, and Cisco Webex. Otter.ai first released this feature for Zoom users earlier this year in May. With this new integration, Otter Assistant can now join and transcribe meetings on more platforms, even if the Otter user is not attending the meeting.

The Otter Assistant automatically joins calendared meetings, records and takes notes, and shares transcriptions with meeting participants. If a user decides to skip a meeting altogether, they can catch up on the discussion through the recorded notes afterwards. The tool can also help when you have overlapping meetings, or larger meetings where only a portion is relevant to you.

To use the new tool, users need to synchronize their calendars with the service. The assistant will then automatically join all future meetings, where it appears in the meeting as a separate participant, for transparency’s sake.

“With more companies adapting to a hybrid work model where professionals work and take meetings in-office, at home, and on mobile, many are looking to Otter as a tool to improve team communication and collaboration,” said Otter.ai co-founder and CEO Sam Liang in a statement. “We’re excited to make using Otter even easier and more accessible no matter where or how people conduct and participate in meetings.”

The new integration will be handy for those who attend meetings across several platforms, as the tool can keep all of your meeting notes in one place. The Otter Assistant is available to Otter.ai Business users. The business tier starts at $20 per month and includes features like two-factor authentication, advanced search, audio imports, custom vocabulary, shared speaker identification and more.

#ai, #otter-ai, #tc, #transcription

Kapacity.io is using AI to drive energy and emissions savings for real estate

Y Combinator-backed Kapacity.io is on a mission to accelerate the decarbonization of buildings by using AI-generated efficiency savings to encourage electrification of commercial real estate — wooing buildings away from reliance on fossil fuels to power their heating and cooling needs.

It does this by providing incentives to building owners and occupiers to shift to clean energy usage through a machine learning-powered software automation layer.

The startup’s cloud software integrates with buildings’ HVAC systems and electricity meters — drawing on local energy consumption data to calculate and deploy real-time adjustments to heating/cooling systems which not only yield energy (and CO2 emissions) savings but also generate actual revenue for building owners/tenants, paying them to reduce consumption — such as at times of peak energy demand on the grid.

“We are controlling electricity consumption in buildings, focusing on heating and cooling devices — using AI machine learning to optimize and find the best ways to consume electricity,” explains CEO and co-founder Jaakko Rauhala, a former consultant in energy technology. “The actual method is known as ‘demand response’. Basically, that is a way for an electricity consumer to get paid for adjusting their energy consumption, based on a utility company’s demand.

“For example, if there is a lot of wind power production and suddenly the wind drops or the weather changes, the utility company running the power grid needs to balance that reduction — and the way to do that is either you can fire up a natural gas turbine or you can reduce power consumption… Our product estimates how much we can reduce electricity consumption at any given minute. We are [targeting] heating and cooling devices because they consume a lot of electricity.”

“The way we see it, this is a way we can help our customers electrify their building stocks faster, because it makes their investments more lucrative, and in addition we can then help them use more renewable electricity because we can shift the use from fossil fuels to other areas. And in that we hope to help push for a greener power grid,” he adds.

Kapacity’s approach is applicable in deregulated energy markets, where third parties are able to offer energy saving services and fluctuations in energy demand are managed by an auction process involving the trading of surplus energy — typically overseen by a transmission system operator — to ensure energy producers have the right power balance to meet customer needs.

Demand for energy can fluctuate regardless of the type of energy production feeding the grid, but renewable energy sources tend to increase the volatility of energy markets, as production can be less predictable than legacy generation (like nuclear or burning fossil fuels) — wind power, for example, depends on when and how strongly the wind is blowing, which both varies and isn’t perfectly predictable. So as economies around the world dial up efforts to tackle climate change and hit critical carbon emissions reduction targets, there’s growing pressure to shift away from fossil fuel-based power generation toward cleaner, renewable alternatives. And the real estate sector specifically remains a major generator of CO2, so it is squarely in the frame for ‘greening’.

Simultaneously, decarbonization and the green shift looks likely to drive demand for smart solutions to help energy grids manage increasing complexity and volatility in the energy supply mix.

“Basically more wind power — and solar, to some extent — correlates with demand for balancing power grids and this is why there is a lot of talk usually about electricity storage when it comes to renewables,” says Rauhala. “Demand response, in the way that we do it, is an alternative for electricity storage units. Basically we’re saying that we already have a lot of electricity consuming devices — and we will have more and more with electrification. We need to adjust their consumption before we invest billions of dollars into other systems.”

“We will need a lot of electricity storage units — but we try to push the overall system efficiency to the maximum by utilising what we already have in the grid,” he adds.

There are of course limits to how much ‘adjustment’ (read: switching off) can be done to a heating or cooling system by even the cleverest AI without building occupants becoming uncomfortable.

But Kapacity’s premise is that small adjustments — say turning off the boilers/coolers for five, 15 or 30 minutes — can go essentially unnoticed by building occupants if done right, allowing the startup to tout a range of efficiency services for its customers; such as a peak-shaving offering which automatically reduces energy usage to avoid peaks in consumption and generate significant energy cost savings.
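As a toy illustration of that kind of short, comfort-bounded pause (the function, thresholds, and comfort band below are hypothetical, not Kapacity’s actual algorithm):

```python
# Toy peak-shaving logic: pause HVAC briefly when grid prices spike,
# but only while indoor temperature stays comfortable and the pause
# stays short. All thresholds here are hypothetical illustrations,
# not Kapacity's actual model.
def should_pause_hvac(grid_price_eur_mwh: float,
                      indoor_temp_c: float,
                      minutes_paused: int,
                      price_threshold: float = 150.0,
                      comfort_band: tuple = (20.0, 24.0),
                      max_pause_min: int = 30) -> bool:
    """Return True only while the price is peaking, the temperature is
    inside the comfort band, and the pause is under the time cap."""
    low, high = comfort_band
    return (grid_price_eur_mwh >= price_threshold
            and low <= indoor_temp_c <= high
            and minutes_paused < max_pause_min)

print(should_pause_hvac(180.0, 21.5, 10))   # True: price spike, comfortable, short pause
print(should_pause_hvac(180.0, 21.5, 30))   # False: pause has hit the 30-minute cap
```

A production system would of course replace the fixed thresholds with learned models of each building’s thermal behavior, which is where the startup’s machine learning comes in.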

“Our goal — which is a very ambitious goal — is that the customers and occupants in the buildings wouldn’t notice the adjustments. And that they would fall into the normal range of temperature fluctuations in a building,” says Rauhala.

Kapacity’s algorithms are designed to understand how to make dynamic adjustments to buildings’ heating/cooling without compromising “thermal comfort”, as Rauhala puts it — noting that co-founder (and COO) Sonja Salo has both a PhD in demand response and researched thermal comfort during a stint as a visiting researcher at UC Berkeley — making the area a specialist focus for the engineer-led founding team.

At the same time, the carrots it’s dangling at the commercial real estate sector to sign up for a little algorithmic HVAC tweaking look substantial: Kapacity says its system has been able to achieve a 25% reduction in electricity costs and a 10% reduction in CO2 emissions in early pilots, although those tests have so far been limited to its home market.

Its other co-founder, Rami El Geneidy, researched smart algorithms for demand response involving heat pumps for his PhD dissertation — and heat pumps are another key focus for the team’s tech, per Rauhala.

Heat pumps are a low carbon technology that’s fairly commonly used in the Nordics for heating buildings but whose use is starting to spread as countries around the world look for greener alternatives to heat buildings.

In the UK, for example, the government announced a plan last year to install hundreds of thousands of heat pumps per year by 2028 as it seeks to move the country away from widespread use of gas boilers to heat homes. And Rauhala names the UK as one of the startup’s early target markets — along with the European Union and the US where they also envisage plenty of demand for their services.

While the initial focus is the commercial real estate sector, he says they are also interested in residential buildings — noting that from a “tech core point of view we can do any type of building”.

“We have been focusing on larger buildings — multi-family buildings, larger office buildings, certain type of industrial or commercial buildings so we don’t do single family detached homes at the moment,” he goes on, adding: “We have been looking at that and it’s an interesting avenue but our current pilots are in larger buildings.”

The Finnish startup was only founded last year — taking in a pre-seed round of funding from Nordic Makers prior to getting backing from YC — where it will be presenting at the accelerator’s demo day next week. (But Rauhala won’t comment on any additional fund raising plans at this stage.)

He says it’s spun up five pilot projects over the last seven months involving commercial landlords, utilities, real estate developers and engineering companies (all in Finland for now), although — again — full customer details are not yet being disclosed. But Rauhala tells us they expect to move to their first full commercial deals with pilot customers this year.

“The reason why our customers are interested in using our products is that this is a way to make electrification cheaper because they are being paid for adjusting their consumption and that makes their operating cost lower and it makes investments more lucrative if — for example — you need to switch from natural gas boilers to heat pumps so that you can decarbonize your building,” he also tells us. “If you connect the new heat pump running on electricity — if you connect that to our service we can reduce the operating cost and that will make it more lucrative for everybody to electrify their buildings and run their systems.

“We can also then make their electricity consumed more sustainable because we are shifting consumption away from hours with most CO2 emissions on the grid. So we try to avoid the hours when there’s a lot of fossil fuel-based production in the grid and try to divert that into times when we have more renewable electricity.

“So basically the big question we are asking is how do we increase the use of renewables and the way to achieve that is asking when should we consume? Well we should consume electricity when we have more renewable in the grid. And that is the emission reduction method that we are applying here.”
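The time-shifting logic Rauhala describes can be sketched in a few lines: given forecast grid carbon intensity per hour, a scheduler simply picks the cleanest hours to run a flexible load. Everything here (numbers, function names) is illustrative, not Kapacity’s actual system:

```python
# Illustrative carbon-aware load shifting: given hourly grid carbon
# intensity (gCO2/kWh), choose the cleanest hours to run a flexible
# load such as water heating. Intensity figures are invented.

def cleanest_hours(intensity_by_hour, hours_needed):
    """Return the hours (sorted) with the lowest carbon intensity."""
    ranked = sorted(intensity_by_hour, key=intensity_by_hour.get)
    return sorted(ranked[:hours_needed])


intensity = {0: 310, 1: 295, 2: 220, 3: 180,      # windy night: low CO2
             12: 420, 13: 455, 18: 510, 19: 490}  # evening peak: high CO2
print(cleanest_hours(intensity, hours_needed=3))
# -> [1, 2, 3]
```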

In terms of limitations, Kapacity’s software-focused approach can’t work in every type of building — requiring that real estate customers have some ability to gather energy consumption (and potentially temperature) data from their buildings remotely, such as via IoT devices.

“The typical data that we need is basic information on the heating system — is it running at 100% or 50% or what’s the situation? That gets us pretty far,” says Rauhala. “Then we would like to know indoor temperatures. But that is not mandatory in the sense that we can still do some basic adjustments without that.”

It also, of course, can’t offer much in the way of savings to buildings that run entirely on natural gas (or oil) — i.e. with electricity used only for lighting (turning lights off while people are inside buildings obviously wouldn’t fly); there must be some kind of air conditioning, cooling or heat pump system already installed (or electric hot water boilers in use).

“An old building that runs on oil or natural gas — that’s a target for decarbonization,” he continues. “That’s a target where you could consider installing heat pumps and that is where we could help some of our customers or potential customers to say ok we need to estimate how much would it cost to install a heat pump system here and that’s where our product can come in and we can say you can reduce the operating cost with demand response. So maybe we should do something together here.”

Rauhala also confirms that Kapacity’s approach does not require invasive levels of building occupant surveillance, telling TechCrunch: “We don’t collect information that is under GDPR [General Data Protection Regulation], I’ll put it that way. We don’t take personal data for this demand response.”

So any guestimates its algorithms are making about building occupants’ tolerance for temperature changes are, therefore, not going to be based on specific individuals — but may, presumably, factor in aggregated information related to specific industry/commercial profiles.

The Helsinki-based startup is not the only one looking at applying AI to drive energy cost and emissions savings in the commercial buildings sector — another we spoke to recently is Düsseldorf-based Dabbel, for example. And plenty more are likely to take an interest in the space as governments start to pump more money into accelerating decarbonization.

Asked about competitive differentiation, Rauhala points to a focus on real-time adjustments and heat pump technologies.

“One of our key things is we’re developing a system so that we can do close to real time control — very very short term control. That is a valuable service to the power grid so we can then quickly adjust,” he says. “And the other one is we are focusing on heat pump technologies to get started — heat pumps here in the Nordics are a very common and extremely good way to decarbonize and understanding how we can combine these to demand response with new heat pumps that is where we see a lot of advantages to our approach.”

“Heat pumps are a bit more technically complex than your basic natural gas boiler so there are certain things that have to be taken into account and that is where we have been focusing our efforts,” he goes on, adding: “We see heat pumps as an excellent way to decarbonize the global building stock and we want to be there and help make that happen.”

Per capita, the Nordics have the most heat pump installations, according to Rauhala — including a lot of ground source heat pump installations, which can replace fossil fuel consumption entirely.

“You can run your building with a ground source heat pump system entirely — you don’t need any supporting systems for it. And that is an area where we here in Europe are further ahead than the US,” he says.

“The UK government is pushing for a lot of heat pump installations and there are incentives in place for people to replace their existing natural gas systems or whatever they have. So that is very interesting from our point of view. In the UK, too, there is a lot of wind power coming online and there have been days when the UK has been running 100% on renewable electricity, which is great. So that actually is a really good thing for us. But then in the longer term in the US — Seattle, for example, has banned the use of fossil fuels in new buildings so I’m very confident that the market in the US will open up more and quickly. There’s a lot of opportunities in that space as well.

“And of course from a cooling perspective air conditioning in general in the US is very widespread — especially in commercial buildings so that is already an existing opportunity for us.”

“My estimate on how valuable electricity use for heating and cooling is it’s tens of billions of dollars annually in the US and EU,” he adds. “There’s a lot of electricity being used already for this and we expect the market to grow significantly.”

On the business model front, the startup’s cloud software looks set to follow a SaaS model but the plan is also to take a commission of the savings and/or generated income from customers. “We also have the option to provide the service with a fixed fee, which might be easier for some customers, but we expect the majority to be under a commission,” adds Rauhala.

Looking ahead, were the sought-after global shift away from fossil fuels to be wildly successful — with all commercial buildings’ gas and oil boilers replaced in short order by systems running on 100% renewable power — there would still be a role for Kapacity’s control software to play: generating energy cost savings for its customers, even though the (currently) parallel pressing need to shrink carbon emissions would have evaporated in that theoretical future.

“We’d be very happy,” says Rauhala. “The way we see emission reductions with demand response now is it’s based on the fact that we do still have fossil fuels power system — so if we were to have a 100% renewable power system then the electricity does nothing to reduce emissions from the electricity consumption because it’s all renewable. So, ironically, in the future we see this as a way to push for a renewable energy system and makes that transition happen even faster. But if we have a 100% renewable system then there’s nothing [in terms of CO2 emissions] we can reduce but that is a great goal to achieve.”

#ai, #decarbonization, #energy-savings, #hvac-control-automation, #kapacity-io, #machine-learning, #nordic-makers, #tc, #y-combinator

Now that machines can learn, can they unlearn?

Now that machines can learn, can they unlearn?

Enlarge (credit: Andriy Onufriyenko | Getty Images)

Companies of all kinds use machine learning to analyze people’s desires, dislikes, or faces. Some researchers are now asking a different question: How can we make machines forget?

A nascent area of computer science dubbed machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. The goal is to remove all trace of a particular person or data point from a machine learning system, without affecting its performance.

If made practical, the concept could give people more control over their data and the value derived from it. Although users can already ask some companies to delete personal data, they are generally in the dark about what algorithms their information helped tune or train. Machine unlearning could make it possible for a person to withdraw both their data and a company’s ability to profit from it.
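One concrete approach from the machine unlearning research literature (not attributed to any specific company here) is sharded training, sometimes called SISA: train one sub-model per disjoint data shard, so that deleting a record only requires retraining the single shard that contained it, not the whole model. The sketch below is a toy illustration of that idea, with a trivial stand-in for the learner:

```python
# Sketch of shard-based machine unlearning (the "SISA" idea).
# train_model() is a toy stand-in for any real learner.

def train_model(shard):
    # Toy "model": just the mean label of its shard.
    labels = [label for _, label in shard]
    return sum(labels) / len(labels)


def train_ensemble(shards):
    return [train_model(s) for s in shards]


def unlearn(shards, models, record):
    """Remove `record` and retrain only the shard that held it."""
    for i, shard in enumerate(shards):
        if record in shard:
            shard.remove(record)
            models[i] = train_model(shard)  # only this shard is retrained
    return models


shards = [[("a", 0), ("b", 0)], [("c", 1), ("d", 1), ("e", 0)]]
models = train_ensemble(shards)             # second shard's mean is ~0.667
models = unlearn(shards, models, ("c", 1))  # retrains only shard 1
print(models)
# -> [0.0, 0.5]
```

The attraction is exactly the trade-off the article describes: deletion becomes cheap (one shard retrain) without retraining on the full dataset, at some cost in aggregate model quality.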

Read 13 remaining paragraphs | Comments

#ai, #algorithms, #bias, #biz-it, #privacy, #science

Cardiomatics bags $3.2M for its ECG-reading AI

Poland-based healthtech AI startup Cardiomatics has announced a $3.2M seed raise to expand use of its electrocardiogram (ECG) reading automation technology.

The round is led by Central and Eastern European VC Kaya, with Nina Capital, Nova Capital and Innovation Nest also participating.

The seed raise also includes a $1M non-equity grant from the Polish National Centre of Research and Development.

The 2017-founded startup sells a cloud tool that speeds up diagnosis and drives efficiency for cardiologists, clinicians and other healthcare professionals interpreting ECGs — automating the detection and analysis of some 20 heart abnormalities and disorders, with the software generating reports on scans in minutes, faster than a trained human specialist can work.

Cardiomatics touts its tech as helping to democratize access to healthcare — saying the tool enables cardiologists to optimise their workflow so they can see and treat more patients. It also says it allows GPs and smaller practices to offer ECG analysis to patients without needing to refer them to specialist hospitals.

The AI tool has analyzed more than 3 million hours of ECG signals commercially to date, per the startup, which says its software is being used by more than 700 customers in 10+ countries, including Switzerland, Denmark, Germany and Poland.

The software is able to integrate with more than 25 ECG monitoring devices at this stage, and it touts offering a modern cloud software interface as a differentiator vs legacy medical software.

Asked how the accuracy of its AI’s ECG readings has been validated, the startup told us: “The data set that we use to develop algorithms contains more than 10 billion heartbeats from approximately 100,000 patients and is systematically growing. The majority of the data-sets we have built ourselves, the rest are publicly available databases.

“Ninety percent of the data is used as a training set, and 10% for algorithm validation and testing. In keeping with data-centric AI, we attach great importance to the test sets to be sure that they contain the best possible representation of signals from our clients. We check the accuracy of the algorithms in experimental work during the continuous development of both algorithms and data, with a frequency of once a month. Our clients check it every day in clinical practice.”

Cardiomatics said it will use the seed funding to invest in product development, expand its business activities in existing markets and gear up to launch into new markets.

“Proceeds from the round will be used to support fast-paced expansion plans across Europe, including scaling up our market-leading AI technology and ensuring physicians have the best experience. We prepare the product to launch into new markets too. Our future plans include obtaining FDA certification and entering the US market,” it added.

The AI tool received European medical device certification in 2018 — although it’s worth noting that the European Union’s regulatory regime for medical devices and AI is continuing to evolve, with an update to the bloc’s Medical Devices Directive (now known as the EU Medical Device Regulation) coming into application earlier this year (May).

A new risk-based framework for applications of AI — aka the Artificial Intelligence Act — is also incoming and will likely expand compliance demands on AI healthtech tools like Cardiomatics, introducing requirements such as demonstrating safety, reliability and a lack of bias in automated results.

Asked about the regulatory landscape it said: “When we launched in 2018 we were one of the first AI-based solutions approved as medical device in Europe. To stay in front of the pace we carefully observe the situation in Europe and the process of legislating a risk-based framework for regulating applications of AI. We also monitor draft regulations and requirements that may be introduced soon. In case of introducing new standards and requirements for artificial intelligence, we will immediately undertake their implementation in the company’s and product operations, as well as extending the documentation and algorithms validation with the necessary evidence for the reliability and safety of our product.”

However it also conceded that objectively measuring efficacy of ECG reading algorithms is a challenge.

“An objective assessment of the effectiveness of algorithms can be very challenging,” it told TechCrunch. “Most often it is performed on a narrow set of data from a specific group of patients, registered with only one device. We receive signals from various groups of patients, coming from different recorders. We are working on a method of assessing the effectiveness of our algorithms which would allow us to reliably evaluate their performance regardless of the various factors accompanying a study, including the recording device or the social group on which it is tested.”

“When analysis is performed by a physician, ECG interpretation is a function of experience, rules and art. When a human interprets an ECG, they see a curve. It works on a visual layer. An algorithm sees a stream of numbers instead of a picture, so the task becomes a mathematical problem. But, ultimately, you cannot build effective algorithms without knowledge of the domain,” it added. “This knowledge and the experience of our medical team are a piece of art in Cardiomatics. We shouldn’t forget that algorithms are also trained on the data generated by cardiologists. There is a strong correlation between the experience of medical professionals and machine learning.”

#ai, #artificial-intelligence, #cardiomatics, #ecg, #europe, #fundings-exits, #health, #healthtech, #kaya, #startups, #tc

Samsung has its own AI-designed chip. Soon, others will too

Samsung has its own AI-designed chip. Soon, others will too

Enlarge (credit: Getty Images)

Samsung is using artificial intelligence to automate the insanely complex and subtle process of designing cutting-edge computer chips.

The South Korean giant is one of the first chipmakers to use AI to create its chips. Samsung is using AI features in new software from Synopsys, a leading chip design software firm used by many companies. “What you’re seeing here is the first of a real commercial processor design with AI,” says Aart de Geus, the chairman and co-CEO of Synopsys.

Others, including Google and Nvidia, have talked about designing chips with AI. But Synopsys’ tool, called DSO.ai, may prove the most far-reaching because Synopsys works with dozens of companies. The tool has the potential to accelerate semiconductor development and unlock novel chip designs, according to industry watchers.

Read 17 remaining paragraphs | Comments

#ai, #android, #biz-it, #chip-design, #computers, #cpu, #ics, #laptops, #samsung, #smartphones, #tech

Robotic AI firm Covariant raises another $80 million

In May of last year, Covariant announced that it had raised a $40 million Series B. It was a healthy sum of money for the young company, bringing its total funding up to $67 million. Just a little over a year later, the Berkeley-based AI startup is adding another $80 million to its coffers, riding on a wave that dramatically accelerated interest in robotics and AI during the pandemic.

“Companies across multiple industries had already been looking to realize significant gains with AI robotics and with COVID-19, market demands then increased by an order of magnitude,” president, chief scientist and co-founder Pieter Abbeel tells TechCrunch. “Combining this with our last year of successes, our investors are keen to double down. We’ll use the funding to significantly accelerate our global expansion and grow our current lead in a competitive industry.”

The Series C was led by existing investor Index Ventures and features Amplify Partners, Radical Ventures, CPPIB and Temasek. It brings the firm’s total funding up to $147 million for what it calls universal AI for robotic manipulation. “Universal” is really the key word for the Covariant Brain, and the company has already proven how versatile its tech can be in the two years since it came out of stealth.

The company currently employs just under 80 people. Part of the funding will go toward increasing its headcount “substantially.” Today’s news also includes the addition of some high-profile team members, including Raghavendra Prabhu as head of Engineering and Research, Ally Lynch as head of Marketing and Sam Cauthen as head of People.

Image Credits: Covariant

Covariant has deployed its technology in a number of markets in North America, Europe and Asia, across a broad range of different sectors requiring pick and place, from grocery to fashion to pharmaceuticals.

“As of today, the Covariant Brain is powering a wide range of industrial robots to manage order picking, putwall, sorter induction — all for companies in various industries with drastically different types of products to manipulate,” CEO Peter Chen said in a release. “The breadth of use demonstrates the Covariant Brain can help robots of different types to manipulate new objects they’ve never seen before in environments where they’ve never operated.”

Existing customers include Obeta, Knapp, ABB and Bastian.

“Forward-looking customers value our platform approach since it allows them to future-proof their long-term modernization strategy,” Abbeel says. “The Covariant Brain has unlimited learning potential to act on multiple applications across the warehouse. Our current deployments are just the tip of the iceberg on everything that AI Robotics can do for the supply chain and beyond.”

#ai, #artificial-intelligence, #covariant, #funding, #index-ventures, #pieter-abbeel, #recent-funding, #robotics, #startups

Sean Gallagher and an AI expert break down our crazy machine-learning adventure

Sean Gallagher and an AI expert break down our crazy machine-learning adventure

Enlarge

We’ve spent the past few weeks burning copious amounts of AWS compute time trying to invent an algorithm to parse Ars’ front-page story headlines to predict which ones will win an A/B test—and we learned a lot. One of the lessons is that we—and by “we,” I mainly mean “me,” since this odyssey was more or less my idea—should probably have picked a less, shall we say, ambitious project for our initial outing into the machine-learning wilderness. Now, a little older and a little wiser, it’s time to reflect on the project and discuss what went right, what went somewhat less than right, and how we’d do this differently next time.

Our readers had tons of incredibly useful comments, too, especially as we got into the meaty part of the project—comments that we’d love to get into as we discuss the way things shook out. The vagaries of the edit cycle meant that the stories were being posted quite a bit after they were written, so we didn’t have a chance to incorporate a lot of reader feedback as we went, but it’s pretty clear that Ars has some top-shelf AI/ML experts reading our stories (and probably groaning out loud every time we went down a bit of a blind alley). This is a great opportunity for you to jump into the conversation and help us understand how we can improve for next time—or, even better, to help us pick smarter projects if we do an experiment like this again!

Our chat kicks off on Wednesday, July 28, at 1:00 pm Eastern Time (that’s 10:00 am Pacific Time and 17:00 UTC). Our three-person panel will consist of Ars Infosec Editor Emeritus Sean Gallagher and me, along with Amazon Senior Principal Technical Evangelist (and AWS expert) Julien Simon. If you’d like to register so that you can ask questions, use this link here; if you just want to watch, the discussion will be streamed on the Ars Twitter account and archived as an embedded video on this story’s page. Register and join in or check back here after the event to watch!

Read on Ars Technica | Comments

#ai, #ai-ml, #amazon, #artificial-intelligence, #aws, #biz-it, #headlines, #livechat, #machine-learning, #ml, #natural-language-processing, #nlp

Researchers demonstrate that malware can be hidden inside AI models

Enlarge / This photo has a job application for Boston University hidden within it. The technique introduced by Wang, Liu, and Cui could hide data inside an image classifier rather than just an image. (credit: Keith McDuffy CC-BY 2.0)

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools—in this case, by hiding it inside a neural network.

The three embedded 36.9MiB of malware into a 178MiB AlexNet model without significantly altering the function of the model itself. The malware-embedded model classified images with near-identical accuracy, within 1% of the malware-free model. (This is possible because the number of layers and total neurons in a convolutional neural network is fixed prior to training—which means that, much like in human brains, many of the neurons in a trained model end up being either largely or entirely dormant.)
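The embedding trick can be illustrated with ordinary float32 arithmetic: overwriting the low-order mantissa byte of each weight changes its value only negligibly, which is why model accuracy barely moves. The snippet below is a defensive toy demonstration on a plain list of floats standing in for a weight tensor — it is not the paper’s actual code:

```python
# Toy weight steganography: hide arbitrary bytes in the low-order byte
# of float32 "weights", where tiny perturbations barely matter.
import struct


def embed(weights, payload):
    """Overwrite the least-significant byte of one float32 per payload byte."""
    out = []
    for w, b in zip(weights, payload):
        raw = bytearray(struct.pack("<f", w))
        raw[0] = b  # low-order mantissa byte: tiny change to the value
        out.append(struct.unpack("<f", bytes(raw))[0])
    return out + weights[len(payload):]


def extract(weights, n):
    """Recover n hidden bytes from the first n weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])


weights = [0.12345, -0.6789, 1.4142, 2.7183]
stego = embed(weights, b"hi!")
print(extract(stego, 3))
# -> b'hi!'
print(max(abs(a - b) for a, b in zip(weights, stego)))  # tiny perturbation
```

The real attack packs megabytes across millions of parameters and relies on retraining-free insertion, but the principle is the same: the payload rides in bits the model barely uses.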

Just as importantly, squirreling the malware away into the model broke it up in ways that prevented detection by standard antivirus engines. VirusTotal, a service that “inspects items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a myriad of tools to extract signals from the studied content,” did not raise any suspicions about the malware-embedded model.

Read 4 remaining paragraphs | Comments

#ai, #deep-learning, #machine-learning, #malware, #neural-networks, #steganography, #tech

Shares of protein discovery platform Absci pop in market debut

Absci Corp., a Vancouver company behind a multi-faceted drug development platform, went public on Thursday. It’s another sign of snowballing interest in new approaches to drug development – a traditionally risky business. 

Absci focuses on speeding drug development in the preclinical stages. The company has developed and acquired a handful of tools that can predict drug candidates, identify potential therapeutic targets, and test therapeutic proteins on billions of cells and identify which ones are worth pursuing. 

“We are offering a fully-integrated end-to-end solution for pharmaceutical drug development,” Absci founder Sean McClain tells TechCrunch. “Think of this as the Google index search for protein drug discovery and biomanufacturing.” 

The IPO was initially priced at $16 per share, with a pre-money valuation of about $1.5 billion, per S-1 filings. The company is offering 12.5 million shares of common stock, with plans to raise $200 million. However, Absci stock has already ballooned to $21 per share as of writing. Common stock is trading under the ticker “ABSI.” 

The company has elected to go public now, McClain says, to increase the company’s ability to attract and retain new talent. “As we continue to rapidly grow and scale, we need access to the best talent, and the IPO gives us amazing visibility for talent acquisition and retention,” says McClain.

Absci was founded in 2011 with a focus on manufacturing proteins in E. coli. By 2018, the company had launched its first commercial product, SoluPro — a bioengineered E. coli system that can build complex proteins. In 2019, the company scaled this process up by implementing a “protein printing” platform.

Since its founding Absci has grown to 170 employees and raised $230 million – the most recent influx was a $125 million crossover financing round closed in June 2020 led by Casdin Capital and Redmile Group. But this year, two major acquisitions have rounded out Absci’s offerings from protein manufacturing and testing to AI-enabled drug development. 

In January 2021, Absci acquired Denovium, a company using deep learning AI to categorize and predict the behavior of proteins. Denovium’s “engine” had been trained on more than 100 million proteins. In June, the company also acquired Totient, a biotech company that analyzes the immune system’s response to certain diseases. At the time of Totient’s acquisition, the company had already reconstructed 4,500 antibodies gleaned from immune system data from 50,000 patients. 

Absci already had protein manufacturing, evaluation and screening capabilities, but the Totient acquisition allowed it to identify potential targets for new drugs. The Denovium acquisition added an AI-based engine to aid in protein discovery. 

“What we’re doing is now feeding [our own data] into deep learning models and so that is why we acquired Denovium. Prior to Totient we were doing drug discovery and cell line development. This [acquisition] allows us to go fully integrated where we can now do target discovery as well,” McClain says. 

These two acquisitions place Absci into a particularly active niche in the drug development world. 

To start with, there’s been some noteworthy fiscal interest in developing new approaches to drug development, even after decades of low returns on drug R&D. In the first half of 2021, Evaluate reported that new drug developers raised about $9 billion in IPOs on Western exchanges. This is despite the fact that drug development is traditionally high risk. R&D returns for biopharmaceuticals hit a record low of 1.6 percent in 2019, and have rebounded to only about 2.5 percent, a Deloitte 2021 report notes. 

Within the world of drug development, we’ve seen AI play an increasingly large role. That same Deloitte report notes that “most biopharma companies are attempting to integrate AI into drug discovery, and development processes.” And drug discovery projects received the greatest amount of AI investment dollars in 2020, according to Stanford University’s annual Artificial Intelligence Index report.

More recently, the outlook on the use of AI in drug development has been bolstered by companies that have moved a candidate through the stages of pre-clinical development. 

In June, Insilico Medicine, a Hong Kong-based startup, announced that it had brought an AI-identified drug candidate for idiopathic pulmonary fibrosis through the preclinical testing stages — a feat that helped close a $255 million Series C round. Founder Alex Zhavoronkov told TechCrunch the drug would begin clinical trials late this year or early next year.

With a hand in AI and in protein manufacturing, Absci has already positioned itself in a crowded, but hype-filled space. But going forward, the company will still have to work out the details of its business model.  

Absci is pursuing a partnership business model with drug manufacturers. This means that the company doesn’t have plans to run clinical trials of its own. Rather, it expects to earn revenue through “milestone payments” (conditional upon reaching certain stages of the drug development process) or, if drugs are approved, royalties on sales. 

This does offer some advantages, says McClain. The company is able to sidestep the risk of drug candidates failing after millions of R&D cash is poured into testing and can invest in developing “hundreds” of drug candidates at once. 

At this point, Absci does have nine currently “active programs” with drugmakers. The company’s cell line manufacturing platforms are in use in drug testing programs at eight biopharma companies, including Merck, Astellas, and Alpha Cancer Technologies (the rest are undisclosed). Five of these projects are in the preclinical stage, one is in Phase 1 clinical trials, one is in a Phase 3 clinical trial, and the last is focused on animal health, per the company’s S-1 filing.

One company, Astellas, is currently using Absci’s discovery platforms. But McClain notes that Absci has only just rolled out its drug discovery capabilities this year. 

However, none of these partners have formally licensed any of Absci’s platforms for clinical or commercial use. McClain notes that the nine active programs have milestones and royalty “potentials” associated with them. 

The company does have some ground to make up when it comes to profitability. So far this year, Absci has generated about $4.8 million in total revenue — up from about $2.1 million in 2019. Still, costs have remained high, and S-1 filings note that the company has incurred net losses in each of the past two years: $6.6 million in 2019 and $14.4 million in 2020.

The company’s S-1 chalks up these losses to expenditures related to cost of research and development, establishing an intellectual property portfolio, hiring personnel, raising capital and providing support for these activities. 

Absci has recently completed the construction of a 77,000 square foot facility, notes McClain. So going forward the company does foresee the potential to increase the scale of its operations. 

In the immediate future, the company plans to use money raised from the IPO to grow the number of programs using Absci’s technology, invest in R&D and continue to refine the company’s new AI-based products. 

 

#ai, #artificial-intelligence, #biotech, #drug-development, #drug-discovery, #tc, #therapeutics

Ars AI headline experiment finale—we came, we saw, we used a lot of compute time


(Image credit: Aurich Lawson | Getty Images)

We may have bitten off more than we could chew, folks.

An Amazon engineer told me that when he heard what I was trying to do with Ars headlines, the first thing he thought was that we had chosen a deceptively hard problem. He warned that I needed to be careful about properly setting my expectations. If this was a real business problem… well, the best thing he could do was suggest reframing the problem from “good or bad headline” to something less concrete.

That statement was the most family-friendly and concise way of framing the outcome of my four-week, part-time crash course in machine learning. As of this moment, my PyTorch kernels aren’t so much torches as they are dumpster fires. The accuracy has improved slightly, thanks to professional intervention, but I am nowhere near deploying a working solution. Today, as I am allegedly on vacation visiting my parents for the first time in over a year, I sat on a couch in their living room working on this project and accidentally launched a model training job locally on the Dell laptop I brought—with a 2.4 GHz Intel Core i3 7100U CPU—instead of in the SageMaker copy of the same Jupyter notebook. The Dell locked up so hard I had to pull the battery out to reboot it.


#ai, #al-ml, #artificial-intelligence, #aws, #biz-it, #features, #is-our-machine-learning, #machine-learning, #ml, #natural-language-processing, #nlp, #sagemaker

Google turns AlphaFold loose on the entire human genome

Image: a diagram of ribbons and coils (credit: Sloan-Kettering)

Just one week after Google’s DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure—a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures.

In a press conference associated with the paper’s release, DeepMind’s Demis Hassabis made clear that the company isn’t stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.

What’s in a structure?

We just described DeepMind’s software last week, so we won’t go into much detail here. The effort is an AI-based system trained on the structure of existing proteins that had been determined (often laboriously) through laboratory experiments. The system uses that training, plus information it obtains from families of proteins related by evolution, to predict how a protein’s chain of amino acids folds up in three-dimensional space.


#ai, #biochemistry, #biology, #computer-science, #protein-folding, #science

How we built an AI unicorn in 6 years

Today, Tractable is worth $1 billion. Our AI is used by millions of people across the world to recover faster from road accidents, and it also helps recycle as many cars as Tesla puts on the road.

And yet six years ago, Tractable was just me and Raz (Razvan Ranca, CTO), two college grads coding in a basement. Here’s how we did it, and what we learned along the way.

Build upon a fresh technological breakthrough

In 2013, I was fortunate to get into artificial intelligence (more specifically, deep learning) six months before it blew up internationally. It started when I took a course on Coursera called “Machine learning with neural networks” by Geoffrey Hinton. It was like being lovestruck. Back then, to me, AI was science fiction, like “The Terminator.”


But an article in the tech press said the academic field was amid a resurgence. As a result of 100x larger training data sets and 100x higher compute power becoming available by reprogramming GPUs (graphics cards), a huge leap in predictive performance had been attained in image classification a year earlier. This meant computers were starting to be able to understand what’s in an image — like humans do.

The next step was getting this technology into the real world. While at university — Imperial College London — teaming up with much more skilled people, we built a plant recognition app with deep learning. We walked our professor through Hyde Park, watching him take photos of flowers with the app and laughing from joy as the AI recognized the right plant species. This had previously been impossible.

I started spending every spare moment on image classification with deep learning. Still, no one was talking about it in the news — even Imperial’s computer vision lab wasn’t yet on it! I felt like I was in on a revolutionary secret.

Looking back, narrowly focusing on a branch of applied science undergoing a breakthrough paradigm shift that hadn’t yet reached the business world changed everything.

Search for complementary co-founders who will become your best friends

I’d previously been rejected from Entrepreneur First (EF), one of the world’s best incubators, for not knowing anything about tech. Having changed that, I applied again.

The last interview was a hackathon, where I met Raz. He was doing machine learning research at Cambridge, had topped EF’s technical test, and had published papers on reconstructing shredded documents and on poker bots that could detect bluffs. His bare-bones webpage read: “I seek data-driven solutions to currently intractable problems.” Now that had a ring to it (and it’s where we’d get the name Tractable).

That hackathon, we coded all night. The morning after, he and I knew something special was happening between us. We moved in together and would spend years side by side, 24/7, from waking up to Pantera in the morning to coding marathons at night.

But we also wouldn’t have got where we are without Adrien (Cohen, president), who joined as our third co-founder right after our seed round. Adrien had previously co-founded Lazada, an online supermarket in South East Asia like Amazon and Alibaba, which sold to Alibaba for $1.5 billion. Adrien would teach us how to build a business, inspire trust and hire world-class talent.

Find potential customers early so you can work out market fit

Tractable started at EF with a head start — a paying customer. Our first use case was … plastic pipe welds.

It was as glamorous as it sounds. Pipes that carry water and natural gas to your home are made of plastic. They’re connected by welds (melt the two plastic ends, connect them, let them cool down and solidify again as one). Image classification AI could visually check people’s weld setups to ensure good quality. Most of all, it was real-world value for breakthrough AI.

And yet in the end, they — our only paying customer — stopped working with us, just as we were raising our first round of funding. That was rough. Luckily, the pipe weld inspection market was too small to interest investors anyway, so we explored other use cases — utilities, geology, dermatology and medical imaging.

#ai, #artificial-intelligence, #column, #cybernetics, #ec-column, #ec-enterprise-applications, #ec-fintech, #ec-how-to, #enterprise, #insurance, #insurtech, #machine-learning, #startups

Our AI headline experiment continues: Did we break the machine?


(Image credit: Aurich Lawson | Getty Images)

We’re in phase three of our machine-learning project now—that is, we’ve gotten past denial and anger, and we’re now sliding into bargaining and depression. I’ve been tasked with using Ars Technica’s trove of data from five years of headline tests, which pair two ideas against each other in an “A/B” test to let readers determine which one to use for an article. The goal is to try to build a machine-learning algorithm that can predict the success of any given headline. And as of my last check-in, it was… not going according to plan.

I had also spent a few dollars on Amazon Web Services compute time to discover this. Experimentation can be a little pricey. (Hint: If you’re on a budget, don’t use the “AutoPilot” mode.)

We’d tried a few approaches to parsing our collection of 11,000 headlines from 5,500 headline tests—half winners, half losers. First, we had taken the whole corpus in comma-separated value form and tried a “Hail Mary” (or, as I see it in retrospect, a “Leeroy Jenkins”) with the Autopilot tool in AWS’ SageMaker Studio. This came back with an accuracy result in validation of 53 percent. This turns out to be not that bad, in retrospect, because when I used a model specifically built for natural-language processing—AWS’ BlazingText—the result was 49 percent accuracy, or even worse than a coin-toss. (If much of this sounds like nonsense, by the way, I recommend revisiting Part 2, where I go over these tools in much more detail.)
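To make the setup concrete: the task described here is binary text classification — given a headline, predict whether it won or lost its A/B test, with 50 percent accuracy as the coin-toss floor. Below is a minimal, dependency-free sketch of that kind of classifier (a bag-of-words perceptron). The toy headlines and labels are invented for illustration; this is not the SageMaker/BlazingText pipeline the article actually used.

```python
# A bag-of-words perceptron for "did this headline win its A/B test?"
# Labels: +1 = winner, -1 = loser. All headlines below are invented toy
# data standing in for the real ~11,000-headline Ars corpus.

def tokenize(headline):
    # crude word features; a real pipeline would normalize far more
    return headline.lower().replace("?", "").replace(",", "").split()

def train_perceptron(examples, epochs=20):
    weights = {}
    for _ in range(epochs):
        for headline, label in examples:
            score = sum(weights.get(tok, 0.0) for tok in tokenize(headline))
            pred = 1 if score >= 0 else -1
            if pred != label:  # mistake-driven update
                for tok in tokenize(headline):
                    weights[tok] = weights.get(tok, 0.0) + label
    return weights

def accuracy(weights, examples):
    correct = 0
    for headline, label in examples:
        score = sum(weights.get(tok, 0.0) for tok in tokenize(headline))
        correct += ((1 if score >= 0 else -1) == label)
    return correct / len(examples)

data = [
    ("You won't believe this GPU benchmark", 1),
    ("GPU benchmark results published", -1),
    ("Why your router is secretly slow", 1),
    ("Router firmware update released", -1),
    ("The weird trick behind faster Wi-Fi", 1),
    ("Wi-Fi standard revision announced", -1),
]

w = train_perceptron(data)
print(accuracy(w, data))   # 1.0 on this tiny, cleanly separable set
print(accuracy({}, data))  # 0.5: the coin-toss floor the article mentions
```

On real headline pairs the signal is far noisier and the classes overlap heavily, which is why even purpose-built tools landed in the 49–53 percent range rather than anywhere near the perfect score this toy set allows.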


#ai, #ai-ml, #amazon-sagemaker, #artificial-intelligence, #aws, #biz-it, #feature, #features, #machine-learning, #ml, #natural-language-processing, #nlp, #tokenization

Numerade lands $100M valuation for short-form STEM videos

Edtech entrepreneurs are using their moment in the sun to rethink the structures and impact of nearly every aspect of modern-day learning, from the art of testing to the reality of information retention. Yet the most popular product up for grabs may be a seemingly simple one: the almighty tutoring session. And Numerade, an edtech company founded in 2018, just had its take on scalable, high-quality tutoring sessions valued at $100 million.

Numerade sells subscriptions to short-form videos that explain how certain equations and experiments work, and then uses an algorithm to make those explainers better suited to a learner’s comprehension style. Per CEO and co-founder Nhon Ma, the startup’s focus on asynchronous, contextualized content will make it easier to scale high-quality tutoring at an affordable price.

“Real teaching involves sight and sound, but also the context of how something is delivered in the vernacular of how a student actually learns,” Ma said. And he wants Numerade to be a platform that goes beyond the robotic Q&A and step-by-step answer platforms such as Wolfram Alpha, and actually integrates science into how solutions are communicated to users.

Today, the company announced that it has raised $26 million at a $100 million valuation in a round including investors such as IDG Capital, General Catalyst, Mucker Capital, Kapor Capital, Interplay Ventures, and strategic investors such as Margo Georgiadis, the former CEO of Ancestry, Khaled Helioui, the former CEO of Bigpoint Games and angel investor in Uber, and Taavet Hinrikus, founder of Wise.

“There are supply and demand mechanics inherent to synchronous tutoring,” Ma said. He explained how the best tutors have limited time, may demand premiums, and overall lead to a constraint on the supply side of marketplaces. Group tutoring has been an option employed by some companies, pairing multiple students with one tutor for efficiency’s sake, but he thinks that it is “really outdated, and actually decreases the quality of tutoring.”

With Numerade avoiding both live learning and Wolfram Alpha-style explainers that just give students the answer, the company has turned to a third option: videos. Videos are not new to edtech, but they currently reside mostly in massive open online course providers such as Coursera or Udemy, or in ‘edutainment’ platforms like MasterClass and Outschool. Numerade thinks that teacher-led, educator-guided videos can be built around a specific problem within Chapter 2 of Fundamentals of Physics.


Student learning from Numerade videos.

The company has three main products: bootcamp videos for foundational knowledge, step-by-step videos that turn that knowledge into a skill and focus on sequence, and finally, quizzes that assess how much of the aforementioned information was retained.

The true moonshot in the startup, though, is the algorithm that decides which students see which videos. When explaining how the algorithm works, Ma used words like “deep learning” and “computer vision” and “ontology” but mostly the algorithm boils down to this: it wants to bring TikTok-level specificity to educational videos, using users’ historical actions to better push certain content that fits their learning style.

For example, the startup believes that offering step-by-step videos helps the brain understand patterns and the diversity of problems, and eventually better understand solutions. The algorithm mostly shows up in Numerade quizzes, which see how a student performs on a topic and then feed those results back into the model to, presumably, better tailor a new series of bootcamps and questions.

“To help a student grow and learn, our model first understands their strengths and weaknesses and then surfaces relevant conceptual, practical, and assessment content to build their subject knowledge. The algorithm can parse structured data from videos and provide different teaching styles to suit the needs of all students,” he said.

As of now, Numerade’s algorithm appears preliminary. Users need to be paid subscribers and have a sufficient usage history in order to start benefiting from more targeted content. Even so, it’s unclear how the algorithm delivers different pedagogical content to students beyond resurfacing concepts that a student erred on in a previous quiz.

Numerade’s moonshot is built on an equally ambitious premise: that students want to learn concepts, not just Google for the fastest answer so they can finish procrastinated homework. Ma explained how engagement time on Numerade videos can be somewhere from double to triple the video’s entire length, which means that students are interacting with the content beyond just skipping over to the answer.

Numerade isn’t alone in trying to take on Wolfram Alpha. Over the past year, edtech unicorns like Quizlet and Course Hero have invested heavily in AI-powered chatbots and live calculators, the latter largely through acquisitions. These platforms are rallying around the idea that tech-powered tutoring sessions should prioritize speed and simplicity, instead of relationship-building and time. In other words, maybe students won’t go to a tutor once a week for math, but they will go to a platform that can methodically explain an answer at midnight, hours before their precalculus exam.

Despite its somewhat early-stage algorithm innovation and heavyweight competition, Numerade’s fresh venture backing and ability to bring in revenue are promising. While declining to divulge specifics, Ma said that the company is “quickly tracking” to eight figures in ARR, meaning it’s making at least $10 million in annual revenue from its current subscriber base. He sees perspective as Numerade’s biggest competitive advantage.

“A common criticism of commercial STEM education is that it’s too modular – textbooks teach physics as stand-alone,” Ma said. “Our algorithm does not, instead it treats STEM as an interlocking ecosystem; concepts in math, physics, chemistry, and biology are omnidirectionally related.”

#ai, #early-stage, #edtech, #education, #numbers, #numerade, #series-a, #tc, #tiktok

Google details its protein-folding software, academics offer an alternative

Image: two multi-colored traces of complex structures (credit: University of Washington)

Thanks to the development of DNA-sequencing technology, it has become trivial to obtain the sequence of bases that encode a protein and translate that to the sequence of amino acids that make up the protein. But from there, we often end up stuck. The actual function of the protein is only indirectly determined by its sequence. Instead, the sequence dictates how the amino acid chain folds and flexes in three-dimensional space, forming a specific structure. That structure is typically what dictates the function of the protein, but obtaining it can require years of lab work.

For decades, researchers have tried to develop software that can take a sequence of amino acids and accurately predict the structure it will form. Despite this being a matter of chemistry and thermodynamics, we’ve only had limited success—until last year. That’s when Google’s DeepMind AI group announced the existence of AlphaFold, which can typically predict structures with a high degree of accuracy.

At the time, DeepMind said it would give everyone the details on its breakthrough in a future peer-reviewed paper, which it finally released yesterday. In the meantime, some academic researchers got tired of waiting, took some of DeepMind’s insights, and made their own. The paper describing that effort also was released yesterday.


#ai, #biochemistry, #biology, #deepmind, #google, #protein-folding, #science, #software

Visualping raises $6M to make its website change monitoring service smarter

Visualping, a service that can help you monitor websites for changes like price drops or other updates, announced that it has raised a $6 million extension to the $2 million seed round it announced earlier this year. The round was led by Seattle-based FUSE Ventures, a relatively new firm with investors who spun out of Ignition Partners last year. Prior investors Mistral Venture Partners and N49P also participated.

The Vancouver-based company is part of the current Google for Startups Accelerator class in Canada, a program that focuses on services leveraging AI and machine learning. Website monitoring may not seem like an obvious area where machine learning can add a lot of value, but if you’ve ever used one of these services, you know that they can often unleash a plethora of false alerts. For the most part, after all, these tools simply look for something in a website’s underlying code to change and then trigger an alert based on that (and maybe some other parameters you’ve set).

Image Credits: Visualping

Earlier this week, Visualping launched its first machine learning-based tools to avoid just that. The company argues that it can eliminate up to 80% of false alerts by combining feedback from its more than 1.5 million users with its new ML algorithms. Thanks to this, Visualping can now learn the best configuration for how to monitor a site when users set up a new alert.

“Visualping has the hearts of over a million people across the world, as well as the vast majority of the Fortune 500. To be a part of their journey and to lead this round of financing is a dream,” FUSE’s Brendan Wales said.

Visualping founder and CEO Serge Salager tells me that the company plans to use the new funding to focus on building out its product, but also to build a commercial team. So far, he said, the company’s growth has been primarily product-led.

As a part of these efforts, the company also plans to launch Visualping Business, with support for these new ML tools and additional collaboration features, and Visualping Personal for individual users who want to monitor things like ticket availability for concerts or to track news, price drops or job postings, for example. For now, the personal plan will not include support for ML. “False alerts are not a huge problem for personal use as people are checking two-three websites but a huge problem for enterprise where teams need to process hundreds of alerts per day,” Salager told me.

The current idea is to launch these new plans in November, together with mobile apps for iOS and Android. The company will also relaunch its browser extensions around that time.

It’s also worth noting that while Visualping monetizes its web-based service, you can still use the extension in the browser for free.

#ai, #artificial-intelligence, #funding, #fundings-exits, #fuse-ventures, #ignition-partners, #recent-funding, #seattle, #startups, #visualping, #website-monitoring

The price differential for engineers is declining

Hello and welcome back to Equity, TechCrunch’s venture-capital-focused podcast, where we unpack the numbers behind the headlines.

The whole crew was here this week, with Danny and Natasha and Alex together with Grace and Chris to sort through a very, very busy week. Yep, somehow it is Friday again, which means it’s time for our weekly news roundup.

Here’s what we got to in our short window of time:

Like we said, a busy week! Chat you all on Monday morning, early.

Equity drops every Monday at 7:00 a.m. PDT, Wednesday, and Friday morning at 7:00 a.m. PDT, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts.

#affirm, #ai, #apple, #artificial-intelligence, #beyond-meat, #bnpl, #china, #chorus-ai, #commodity-capital, #discord, #early-stage-startup, #edtech, #emerging-fund-manager, #equity, #equity-podcast, #fintech, #gourmey, #india, #ipo, #jianzhi-education, #klarna, #next-gen-foods, #nooks, #public-market, #reddit, #sentropy, #tc, #venture-capital, #virtual-hq, #zomato, #zoominfo

Aidoc raises over $66M for AI radiology analysis technology

Aidoc, an artificial intelligence company that develops triage and analysis software, is about to double its funding. On Tuesday, the company announced a round of $66 million. Before that, the company had raised about $67 million in its five-year lifetime.

Aidoc develops “decision support” software based on artificial intelligence. This software can read images like CT scans, detect certain abnormalities, and advise radiologists on what to do with those patients. So far, the company’s algorithm, called BriefCase, has been approved by the FDA to evaluate patients with intracranial hemorrhage, large vessel occlusion (a type of stroke), cervical spine injuries, pulmonary embolisms, incidental pulmonary embolisms, intra-abdominal free gas, and rib fractures.

The algorithms were approved via the FDA’s 510(k) premarket pathway – which allows for fast adoption of technology that is substantially similar to other products on the market. Most algorithms are approved via that pathway, as there is no specific regulatory pathway for these products in the US (though the FDA is continuously workshopping its oversight of AI and machine learning). 

“What’s really unique about us as a company is the breadth and scope of what we do. Instead of building one solution we spend the time building a platform that can accelerate AI development at a really rapid pace,” says Elad Walach, the company’s founder and CEO. “This is why today we have the most FDA cleared solutions on the market.”

The current funding round was led by the New England VC firm General Catalyst. Chris Bischoff, a managing director of General Catalyst, says the firm was convinced by the strength of Aidoc’s team and its seven FDA clearances. However, it was most attracted to the company’s larger approach: create an algorithm (or a series of them) that addresses many different conditions.

In short, the company aims to help hospitals build an entire AI strategy that can be applied to different conditions over time.

“It’s this sort of consultative approach, that they’re going in as a partner rather than a vendor,” says Bischoff. “It’s not just effectively a point solution, it’s a workflow tool.” 

The biggest question facing Aidoc, or any other startup in the field of artificial intelligence, though, is whether it can convince hospitals and clinics to adopt an AI strategy in the first place. 

Aidoc makes software for radiologists – a group that is dwindling faster than most would like to see. Though the US Bureau of Labor Statistics projects a 7 percent increase in need for radiologists by 2029 as the population grows older, the Association of American Medical Colleges predicts the country will experience a shortfall of specialists – a group which includes radiologists.

That shortfall could range from 17,100 to 41,900 (these numbers include radiologists among other specialties, but the report doesn’t break down exactly how many radiologists are included in that count).

The UK, however, is already struggling with a radiologist shortage that ranges from 27 to 37 percent, depending on location. 

AI has been proposed as one solution to the widening radiologist shortage – per the American Medical Association’s report: “advances in artificial intelligence could improve the productivity of radiologists, pathologists, and others.”

It’s only a partial solution. Though we might be able to make each individual radiologist more productive with AI, even Walach notes that AI software can’t really diagnose a patient all on its own.

“Definitely a radiologist needs to confirm the findings,” he notes. “The AI solution doesn’t do anything without the radiologist.”

At this point, the pitch for AI in the world of radiology is that it’s a tool, not a replacement for a trained radiologist. And there’s been a lot of academic work done to develop these AI tools: There were an estimated 596 papers on the topic in 2010 and 12,422 by 2019.

Despite this, AI has yet to truly become a go-to for radiologists. A 2020 survey by the American College of Radiology found that just 30 percent of radiologists were using AI in their practices – the authors called the penetration of the technology “moderate.” Only 20 percent of those not using AI said they had plans to purchase it in the next one to five years.

We may be poised to see a bit of an uptick in the use of AI in clinics and hospitals post-coronavirus, as some hospitals turned to AI when facing staff shortages and high patient loads.

That said, Walach says that it wasn’t a major boon to Aidoc. If anything he says it slowed the company’s growth (though it was still trending upwards anyway – Walach estimates the company increased contracted annual recurring revenue seven-fold between Q1 of 2020 and Q1 2021). 

Instead, he attributes the growth to a wave of research on AI’s impact on patient welfare beyond initial diagnosis. He points to studies that have found AI can reduce the length of hospital stays by identifying which patients to send home.

“I think the biggest trend in this market is that, one, more and more evidence is starting to come up that AI has significant downstream benefits,” Walach says.

Despite barriers to widespread adoption of AI, Aidoc has a long list of commercial partners. The company has almost 600 partnerships with hospitals and clinics including Radiology Partners, Yale New Haven Medical Center, Cedars-Sinai Medical Center, UMass, University of Rochester Medical Center, LucidHealth, 4ways Healthcare, Telemedicine Clinic, Grupo Fleury, University Hospital of Basel, Sheba Medical Center, Hadassah Medical Center, and Global Diagnostics Australia.

Meanwhile, the company projects that it will be used in 10 percent of US hospitals within two years. 

To some extent, Bischoff argues that the adoption of Aidoc at other high-profile health centers could help drive this growth, signaling that the company may overcome the slower-than-expected leak of AI from lab to clinic.

“That ability to drive high levels of trust – that would allow that deeper integration,” he says. “There’s a little bit of a lighthouse type dynamic in healthcare where if the leaders do certain things others follow.”

Aidoc is headquartered in New York, with a research branch in Israel, and has about 200 employees. With this new round of funding, the company plans to invest heavily in R&D and to double the number of conditions evaluated by its algorithms.

#ai, #biotech, #diagnostics, #medical-imaging, #radiology, #tc

Is our machine learning? Ars takes a dip into artificial intelligence



Every day, some little piece of logic constructed by very specific bits of artificial intelligence technology makes decisions that affect how you experience the world. It could be the ads that get served up to you on social media or shopping sites, or the facial recognition that unlocks your phone, or the directions you take to get to wherever you’re going. These discreet, unseen decisions are being made largely by algorithms created by machine learning (ML), a segment of artificial intelligence technology that is trained to identify correlation between sets of data and their outcomes. We’ve been hearing in movies and TV for years that computers control the world, but we’ve finally reached the point where the machines are making real autonomous decisions about stuff. Welcome to the future, I guess.

In my days as a staffer at Ars, I wrote no small amount about artificial intelligence and machine learning. I talked with data scientists who were building predictive analytic systems based on terabytes of telemetry from complex systems, and I babbled with developers trying to build systems that can defend networks against attacks—or, in certain circumstances, actually stage those attacks. I’ve also poked at the edges of the technology myself, using code and hardware to plug various things into AI programming interfaces (sometimes with horror-inducing results, as demonstrated by Bearlexa).

Many of the problems to which ML can be applied are tasks whose conditions are obvious to humans. That’s because we’re trained to notice those problems through observation—which cat is more floofy or at what time of day traffic gets the most congested. Other ML-appropriate problems could be solved by humans as well given enough raw data—if humans had a perfect memory, perfect eyesight, and an innate grasp of statistical modeling, that is.


#ai, #ai-ml, #artificial-intelligence, #biz-it, #feature, #feature-report, #features, #machine-learning, #ml

Quantexa raises $153M to build out AI-based big data tools to track risk and run investigations

As financial crime has become significantly more sophisticated, so too have the tools that are used to combat it. Now, Quantexa — one of the more interesting startups that has been building AI-based solutions to help detect and stop money laundering, fraud, and other illicit activity — has raised a growth round of $153 million, both to continue expanding that business in financial services and to bring its tools into a wider context, so to speak: linking up the dots around all customer and other data.

“We’ve diversified outside of financial services and are working with government, healthcare, telcos and insurance,” Vishal Marria, its founder and CEO, said in an interview. “That has been substantial. Given the whole journey that the market’s gone through in contextual decision intelligence as part of bigger digital transformation, it was inevitable.”

The Series D values the London-based startup between $800 million and $900 million on the heels of Quantexa growing its subscriptions revenues 108% in the last year.

Warburg Pincus led the round, with existing backers Dawn Capital, AlbionVC, Evolution Equity Partners (a specialist cybersecurity VC), HSBC, ABN AMRO Ventures and British Patient Capital also participating. The valuation is a significant hike up for Quantexa, which was valued between $200 million and $300 million in its Series C last July. It has now raised over $240 million to date.

Quantexa got its start out of a gap in the market that Marria identified when he was working as a director at Ernst & Young tasked with helping its clients with money laundering and other fraudulent activity. As he saw it, there were no truly useful systems in the market that efficiently tapped the world of data available to companies — matching up and parsing both their internal information as well as external, publicly available data — to get more meaningful insights into potential fraud, money laundering and other illegal activities quickly and accurately.

Quantexa’s machine learning system approaches that challenge as a classic big data problem — too much data for humans to parse on their own, but quick work for AI algorithms processing huge amounts of that data for specific ends.

Its so-called “Contextual Decision Intelligence” models (the name Quantexa is meant to evoke “quantum” and “context”) were built initially specifically to address this for financial services, with AI tools for assessing risk and compliance and identifying financial criminal activity, leveraging relationships that Quantexa has with partners like Accenture, Deloitte, Microsoft and Google to help fill in more data gaps.

The company says its software — and this, not the data, is what is sold to companies to use over their own datasets — has handled up to 60 billion records in a single engagement. It then presents insights in the form of easily digestible graphs and other formats so that users can better understand the relationships between different entities and so on.
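Quantexa has not published its algorithms, but the general pattern of entity resolution described above (stitching many records into a single view of an entity and exposing the links between them) can be sketched with a toy example. Everything below, including the `resolve_entities` helper and the sample records, is a hypothetical illustration rather than Quantexa's implementation: records sharing any field value are linked, and each connected component of the resulting graph is treated as one entity.

```python
from collections import defaultdict

def resolve_entities(records):
    """Group records into entities: records sharing any field value
    (a phone number, an address) are assumed to describe the same
    entity. Returns a list of sets of record indices, one per
    connected component of the record-linkage graph."""
    # Map each field value to the records that mention it.
    by_value = defaultdict(list)
    for idx, rec in enumerate(records):
        for value in rec.values():
            by_value[value].append(idx)

    # Link records that share at least one value.
    adjacency = defaultdict(set)
    for linked in by_value.values():
        for i in linked:
            adjacency[i].update(j for j in linked if j != i)

    # Walk connected components to build the resolved entities.
    seen, entities = set(), []
    for start in range(len(records)):
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node])
        seen |= component
        entities.append(component)
    return entities

records = [
    {"name": "J. Smith", "phone": "555-0101"},
    {"name": "John Smith", "address": "12 High St", "phone": "555-0101"},
    {"name": "Acme Ltd", "address": "12 High St"},   # shared address links it in
    {"name": "Jane Doe", "phone": "555-0199"},
]
print(resolve_entities(records))
```

In a real system the linking rules would of course be far richer (fuzzy name matching, scored rather than binary links), but the graph-of-records structure is what makes the "single view" and relationship visualizations described here possible.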

Today, financial services companies still make up about 60% of the company’s business, Marria said, with 7 of the top 10 UK and Australian banks and 6 of the top 14 financial institutions in North America among its customers. (The list includes its strategic backer HSBC, as well as Standard Chartered Bank and Danske Bank.)

But alongside those — spurred by a huge shift in the market to relying significantly more on wider data sets, to businesses updating their systems in recent years, and the fact that, in the last year, online activity has in many cases become the “only” activity — Quantexa has expanded more significantly into other sectors.

“The Financial crisis [of 2007] was a tipping point in terms of how financial services companies became more proactive, and I’d say that the pandemic has been a turning point around other sectors like healthcare in how to become more proactive,” Marria said. “To do that you need more data and insights.”

So in the last year in particular, Quantexa has expanded to include other verticals facing financial crime, such as healthcare, insurance, government (for example in tax compliance), and telecoms/communications, but in addition to that, it has continued to diversify what it does to cover more use cases, such as building more complete customer profiles that can be used for KYC (know your customer) compliance or to serve them with more tailored products. Working with government, it’s also seeing its software getting applied to other areas of illicit activity, such as tracking and identifying human trafficking.

In all, Quantexa has “thousands” of customers in 70 markets. It cites figures from IDC that estimate the market for such services — both financial crime and more general KYC services — is worth about $114 billion annually, so there is still a lot more to play for.

“Quantexa’s proprietary technology enables clients to create single views of individuals and entities, visualized through graph network analytics and scaled with the most advanced AI technology,” said Adarsh Sarma, MD and co-head of Europe at Warburg Pincus, in a statement. “This capability has already revolutionized the way KYC, AML and fraud processes are run by some of the world’s largest financial institutions and governments, addressing a significant gap in an increasingly important part of the industry. The company’s impressive growth to date is a reflection of its invaluable value proposition in a massive total available market, as well as its continued expansion across new sectors and geographies.”

Interestingly, Marria admitted to me that the company has been approached by big tech companies and others that work with them as an acquisition target — no real surprises there — but longer term, he would like Quantexa to keep growing on its own, with an independent future very much in his sights.

“Sure, an acquisition to the likes of a big tech company absolutely could happen, but I am gearing this up for an IPO,” he said.

#ai, #big-data, #enterprise, #europe, #financial-crime, #funding, #quantexa, #security

Cheat-maker brags of computer-vision auto-aim that works on “any game”

When it comes to the cat-and-mouse game of stopping cheaters in online games, anti-cheat efforts often rely in part on technology that ensures the wider system running the game itself isn’t compromised. On the PC, that can mean so-called “kernel-level drivers” which monitor system memory for modifications that could affect the game’s intended operation. On consoles, that can mean relying on system-level security that prevents unsigned code from being run at all (until and unless the system is effectively hacked, that is).

But there’s a growing category of cheating methods that can now effectively get around these forms of detection in many first-person shooters. By using external tools like capture cards and “emulated input” devices, along with machine learning-powered computer vision software running on a separate computer, these cheating engines totally circumvent the secure environments set up by PC and console game makers. This is forcing the developers behind these games to look to alternate methods to detect and stop these cheaters in their tracks.

How it works

The basic toolchain used for these external emulated-input cheating methods is relatively simple. The first step is using an external video capture card to record a game’s live output and instantly send it to a separate computer. Those display frames are then run through a computer vision-based object detection algorithm like You Only Look Once (YOLO) that has been trained to find human-shaped enemies in the image (or at least in a small central portion of the image near the targeting reticle).
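Leaving the detector itself aside (YOLO models are standard, pretrained object detectors), the step of restricting attention to a small central portion of the image near the reticle is simple geometry. The sketch below is a hypothetical illustration of that filtering step only; the detection tuples `(x, y, w, h, confidence)` stand in for real model output, and the function name and thresholds are made up for this example.

```python
def filter_near_reticle(detections, frame_w, frame_h,
                        region_frac=0.25, min_conf=0.5):
    """Keep only detections whose box center falls inside a central
    window covering `region_frac` of the frame in each dimension
    (a stand-in for 'near the targeting reticle'), and whose
    confidence clears a threshold."""
    center_x, center_y = frame_w / 2, frame_h / 2
    half_w = frame_w * region_frac / 2
    half_h = frame_h * region_frac / 2
    kept = []
    for (x, y, w, h, conf) in detections:
        box_cx, box_cy = x + w / 2, y + h / 2
        if (conf >= min_conf
                and abs(box_cx - center_x) <= half_w
                and abs(box_cy - center_y) <= half_h):
            kept.append((x, y, w, h, conf))
    return kept

# 1920x1080 frame; only the second box is both central and confident.
dets = [(100, 100, 50, 120, 0.9),   # far from center
        (930, 500, 60, 140, 0.8),   # near center, high confidence
        (960, 540, 40, 90, 0.3)]    # near center, low confidence
print(filter_near_reticle(dets, 1920, 1080))
```

Restricting detection to this window both cuts the compute cost per frame and limits the cheat's visible behavior, which is part of what makes this class of tooling hard to spot from inside the game.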

Read 16 remaining paragraphs | Comments

#ai, #cheating, #computer-vision, #gaming-culture

Didi gets hit by Chinese government, and Pleo raises $150M

Hello and welcome back to Equity, TechCrunch’s venture-capital-focused podcast where we unpack the numbers behind the headlines.

This is Equity Monday Tuesday, our weekly kickoff that tracks the latest private market news, talks about the coming week, digs into some recent funding rounds and mulls over a larger theme or narrative from the private markets. You can follow the show on Twitter here and me here.

What a busy weekend we missed while mostly hearing distant explosions and hugging our dogs close. Here’s a sampling of what we tried to recap on the show:

It’s going to be a busy week! Chat tomorrow.

Equity drops every Monday at 7:00 a.m. PST, Wednesday, and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!

#ai, #byrd, #china, #didi, #equity, #equity-podcast, #funding, #fundings-exits, #india, #pleo, #startups, #twitter, #venture-capital

Uber’s first head of data science just launched a new venture fund to back nascent AI startups

Kevin Novak joined Uber as its 21st employee and seventh engineer in 2011, and by 2014, he was the company’s head of data science. He talks proudly of that time, but like all good things, it ran its course, and by the end of 2017, having accomplished what he wanted at the company, he left.

At first, he picked up the pace of his angel investing, work he’d already begun focusing on during weekends and evenings, ultimately building a portfolio of more than 50 startups (including the fintech Pipe and the autonomous checkout company Standard Cognition).

He also began advising both startups and venture firms — including Playground Global, Costanoa Ventures, Renegade Partners and Data Collective — and after falling in love with the work, Novak this year decided to launch his own venture outfit in Menlo Park, Ca., called Rackhouse Venture Capital. Indeed, Rackhouse just closed its debut fund with $15 million, anchored by Uber’s first head of engineering, Curtis Chambers; Steve Gilula, a former chairman of Searchlight Pictures; and the fund of funds Cendana Capital. A lot of the VCs Novak knows are also investors in the fund.

We caught up with Novak late last week to chat about that new vehicle. We also talked about his tenure at Uber, where, be warned, he played a major role in creating surge pricing (though he prefers the term “dynamic pricing”). You can hear the fuller discussion or check out excerpts from it, edited lightly for length and clarity, below.

TC: You were planning to become a nuclear physicist. How did you wind up at Uber?

KN: As an undergrad, I was studying physics, math and computer science, and when I got to grad school, I really wanted to teach. But I also really liked programming and applying physics concepts in the programming space, and the nuke department had the largest allocation of supercomputer time, so that ended up driving a lot of my research — just the opportunity to play on computers while doing physics. So [my] studying to become a nuclear physicist was funded very indirectly through the research that eventually became the Higgs boson [discovery]. As the Higgs got discovered, it was very good for humanity and absolutely horrible for my research budget . . .

A friend of mine heard what I was doing and sort of knew my skill set and said, like, ‘Hey, you should come check out this Uber cab company — it’s like a limo company with an app. There’s a very interesting data problem and a very interesting math problem.’ So I ended up applying [though I committed] the cardinal sin of startup applications and wore a suit and tie to my interview.

TC: You’re from Michigan. I also grew up in the Midwest so appreciate why you might think that people would wear a suit to an interview.

KN: I got off the elevator and the friend who’d encouraged me to apply was like, ‘What are you wearing?!’ But I got asked to join nonetheless as a computational algorithms engineer — a title that predated the data science trend — and I spent the next couple of years living in the engineering and product world, building data features and . . . things like our ETA engine, basically predicting how long it would take an Uber to get to you. One of my very first projects was working on tolls and tunnels because figuring out which tunnel an Uber went through and how to bill time and distance was a common failure point. So I spent, like, three days driving the Big Dig in Boston out to Somerville and back to Logan with a bunch of phones, collecting GPS data.

I got to know a lot of very random facts about Uber cities, but my big claim to fame was dynamic pricing. . . and it turned out to be a really successful cornerstone for the strategy of making sure Ubers were available.

TC: How does that go over, when you tell people that you invented surge pricing?

KN: It’s a very quick litmus test to figure out like people’s underlying enthusiasm for behavioral econ and finance. The Wall Street crowd is like, ‘Oh my god, that’s so cool.’ And then a lot of people are like, ‘Oh, thank you, yeah, thank you so much, wonderful, you buy the next round of drinks’ type of thing. . . [Laughs.]

But data also became the incubation space for a lot of the early special projects like Uber Pool and a lot of the ideas around, okay, how would you build a dispatching model that enables different people with pooled ride requests? How do you batch them together efficiently in space and time so that we get the right match rate and the project is profitable? We did a lot of work on the theory behind the hub-and-spoke Uber Eats delivery models and thinking through how we apply our learnings about ride-share to food. So I got the first-person perspective on a lot of these products when it was literally three people scribbling on a notepad or riffing on a laptop over lunch, [and which] eventually went on to become these big, nationwide businesses.

TC: You were working on Uber Freight for the last nine months of your career with Uber, so you were there when this business with Anthony Levandowski was blowing up.

KN: Yeah, it was a very interesting era for me because, more than six years in, [I was already developing the] attitude of ‘I’ve done everything I wanted to do.’ I joined a 20-person company and, at the time, we were closing in on 20,000 people . . . and I kind of missed the small team dynamic and felt like I was hitting a natural stopping point. And then Uber’s 2017 happened, and there was Anthony, there was Susan Fowler, and Travis has this horrific accident in his personal life and his head is clearly not in the game. But I didn’t want to be the guy who was known for bailing in the worst quarter of the company’s history, so I ended up spending the next year basically keeping the band together and trying to figure out what I could do to keep whatever small part of the company I was running intact and motivated and empathetic and good in every sense of the word.

TC: You left at the end of that year and it seems you’ve been very busy since, including, now, launching this new fund with the backing of outsiders. Why call it Rackhouse? I see you used the brand Jigsaw Venture Capital when you were investing your own money.

KN: Yeah, even a year in, I had formed an LLC, I was “marking” my portfolio to market, sending quarterly updates to myself and my accountant and my wife. It was one of these exercises that was a carryover from how I was training managers, in that I think you grow most efficiently and successfully if you can develop a few skills at a time. So I was trying to figure out what it would take to run my own back office, even if it was just moving my money from my checking account to my “investing account,” and writing my own portfolio update.

I was really excited about the possibility of launching my first externally facing fund with other people’s money under the Jigsaw banner, too, but there’s actually a fund in the UK [named Jigsaw], and as I started to talk to LPs and was saying, ‘Look, I want to do this data fund and I want it to be early stage,’ I’d get calls from them being like, ‘We just saw that Jigsaw did this Series D in Crowdstrike.’ I realized I’d be competing with the other Jigsaw from a mindshare perspective, so I figured, before things got too big and crazy, I’d create my own distinct brand.

TC: Did you roll any of your angel-backed deals into the new fund? I see Rackhouse has 13 portfolio companies.

KN: There are a few that I’ve agreed to move forward and warehouse for the fund, and we’re just going through the technicalities of doing that right now.

TC: And the focus is on machine learning and AI.

KN: That’s right, and I think there are amazing opportunities outside of the traditional areas of industry focus that, to the extent that you can find rigorous applications of AI, are also going to be significantly less competitive. [Deals] that don’t fall in the strike zone of nearly as many [venture] firms is the game I want to be playing. I feel like that opportunity — regardless of sector, regardless of geography — biases toward domain experts.

TC: I wonder if that also explains the size of your fund — your wanting to stay out of the strike zone of most venture firms.

KN: I want to make sure that I build a fund that enables me to be an active participant in the earliest stages of companies.

Matt Ocko and Zack Bogue [of Data Collective] are good friends of mine — they’re mentors, in fact, and small LPs in the fund — and they talked with me about how they got started. But now they have a billion-plus [dollars] in assets under management, and the people I [like to back] are two people who are moonlighting and getting ready to take the plunge, and [firms the size of Data Collective] have basically priced themselves out of the formation and pre-seed stage, and I like that stage. It’s something where I have a lot of useful experience. I also think it’s the stage where, if you come from a place of domain expertise, you don’t need five quarters of financials to get conviction.

#ai, #cendana-capital, #data-collective, #data-science, #kevin-novak, #machine-learning, #pipe, #standard-cognition, #tc, #uber, #venture-capital

VCs discuss the opportunities – and challenges – in Pittsburgh’s startup ecosystem

Ahead of our TechCrunch City Spotlight: Pittsburgh event tomorrow, I spoke to current Mayor Bill Peduto and Dave Mawhinney, the executive director of Carnegie Mellon University’s Swartz Center for Entrepreneurship. Like many in the Steel City startup community, both share a focus on the historically difficult task of keeping startups in town.

For more on investing in Pittsburgh, be sure to tune in to our City Spotlight on Tuesday, June 29, where we will be joined by Peduto, Duolingo director of engineering Karin Tsai, and Carnegie Mellon University President Farnam Jahanian. Register for the free event here.

I asked Peduto and Mawhinney what the single biggest obstacle has been in building out Pittsburgh’s startup ecosystem. Both responded the same way: venture capital. Raising funding is, of course, a hurdle regardless of location, but many VCs have been reluctant to invest in startups outside of traditional hubs like San Francisco and New York.

“But one of the challenges is getting that capital to come into the community,” said Mawhinney, who leads CMU’s startup efforts. “If you look at how much Uber ATG brought in, how much Argo AI and Aurora — collectively, those three companies, which have all licensed CMU technologies, they’ve all got over $7 billion in collective capital. Not all of it will be spent here, but a lot of it will be spent here. But that doesn’t necessarily trickle down to the next AI startup raising their first $3 million.”

Pittsburgh skyline

Image Credits: Eilis Garvey/Unsplash

Peduto said growing the VC pipeline has been a focus during his time as mayor.

“I think we’ve been able to convince investors from the coast that the companies don’t need to leave Pittsburgh in order to be highly successful and see their investment pay off,” he told TechCrunch. “However, I believe if we had more venture capital arriving here to help to take early-stage companies into that critical next stage of expansion, it would build off itself and it would excel growth in all of the industry cluster, significantly.”

#ai, #cmu, #ec-north-america, #events, #pittsburgh, #pittsburgh-city-spotlight, #robotics, #startups, #venture-capital

Google launches a new medical app—outside the United States

Google launches a new medical app—outside the United States

Enlarge (credit: Getty Images)

Billions of times each year, people turn to Google’s web search box for help figuring out what’s wrong with their skin. Now, Google is preparing to launch an app that uses image recognition algorithms to provide more expert and personalized help. A brief demo at the company’s developer conference last month showed the service suggesting several possible skin conditions based on uploaded photos.

Machines have matched or outperformed expert dermatologists in studies in which algorithms and doctors scrutinize images from past patients. But there’s little evidence from clinical trials deploying such technology, and no AI image analysis tools are approved for dermatologists to use in the US, says Roxana Daneshjou, a Stanford dermatologist and researcher in machine learning and health. “Many don’t pan out in the real world setting,” she says.

Google’s new app isn’t clinically validated yet either, but the company’s AI prowess and recent buildup of its health care division make its AI dermatology app notable. Still, the skin service will start small—and far from its home turf and largest market in the US. The service is not likely to analyze American skin blemishes anytime soon.

Read 12 remaining paragraphs | Comments

#ai, #dermatology, #eu, #fda, #google, #medicine, #science, #tech

A.I. drug discovery platform Insilico Medicine announces $255 million in Series C funding

Insilico Medicine, an A.I-based platform for drug development and discovery, announced $255 million in Series C financing on Tuesday. The massive round reflects a recent breakthrough for the company: proof that its A.I-based platform can create a new target for a disease, develop a bespoke molecule to address it, and begin the clinical trial process.

It’s also yet another indicator that A.I-driven drug discovery continues to be especially attractive to investors.

Insilico Medicine is a Hong Kong-based company founded in 2014 around one central premise: that A.I-assisted systems can identify novel drug targets for untreated diseases, assist in the development of new treatments, and eventually predict how well those treatments may perform in clinical trials. Previously, the company had raised $51.3 million in funding, according to Crunchbase.

Insilico Medicine’s aim to use A.I to drive drug development isn’t particularly new, but there is some data to suggest that the company might actually run that gauntlet of discovery all the way through trial prediction. In 2020, the company identified a novel drug target for idiopathic pulmonary fibrosis, a disease in which tiny air sacs in the lungs become scarred, making breathing laborious.

Two A.I-based platforms first identified 20 potential targets, narrowed them down to one, and then designed a small molecule treatment that showed promise in animal studies. The company is currently filing an investigational new drug application with the FDA and will begin human dosing this year, with aims to begin a clinical trial late this year or early next.

The focus here isn’t on the drug, though; it’s on the process. The project condensed preclinical drug development that typically takes multiple years and hundreds of millions of dollars into just 18 months, for a total cost of about $2.6 million. Still, founder Alex Zhavoronkov doesn’t think that Insilico Medicine’s strength lies primarily in accelerating preclinical drug development or reducing costs: its main appeal, he suggests, is in eliminating an element of guesswork in drug discovery.

“Currently we have 16 therapeutic assets, not just IPF,” he says. “It definitely raised some eyebrows.” 

“It’s about the probability of success,” he continues. “So the probability of success of connecting the right target to the right disease with a great molecule is very, very low. The fact that we managed to do it in IPF and other diseases I can’t talk about yet – it increases confidence in A.I in general.” 

Bolstered partially by the proof-of-concept developed by the IPF project and enthusiasm around A.I based drug development, Insilico Medicine attracted a long list of investors in this most recent round. 

The round is led by Warburg Pincus, but also includes investment from Qiming Venture Partners, Pavilion Capital, Eight Roads Ventures, Lilly Asia Ventures, Sinovation Ventures, BOLD Capital Partners, Formic Ventures, Baidu Ventures, and new investors. Those include CPE, OrbiMed, Mirae Asset Capital, B Capital Group, Deerfield Management, Maison Capital, Lake Bleu Capital, President International Development Corporation, Sequoia Capital China and Sage Partners. 

This current round was oversubscribed four-fold, according to Zhavoronkov. 

A 2018 study of 63 drugs approved by the FDA between 2009 and 2018 found that the median capitalized research and development investment needed to bring a drug to market was $985 million, which also includes the cost of failed clinical trials. 

Those costs and the low likelihood of getting a drug approved have historically slowed the process of drug development. R&D returns for biopharmaceuticals hit a low of 1.6 percent in 2019 and bounced back to a measly 2.5 percent in 2020, according to a 2021 Deloitte report.

Ideally, Zhavoronkov imagines an A.I-based platform trained on rich data that can cut down on the number of failed trials. There are two major pieces of that puzzle: PandaOmics, an A.I platform that can identify those targets, and Chemistry 42, a platform that can design a molecule to bind to that target.

“We have a tool, which incorporates more than 60 philosophies for target discovery,” he says. 

“You are betting [on] something that is novel, but at the same time you have some pockets of evidence that strengthen your hypothesis. That’s what our A.I does very well.”

Although the IPF project has not been fully published in a peer-reviewed journal, a similar project was published in Nature Biotechnology. In that paper, Insilico’s deep learning model was able to identify potential compounds in just 21 days.

The IPF project is a scale-up of this idea. Zhavoronkov doesn’t just want to identify molecules for known targets, he wants to find new ones and shepherd them all the way through clinical trials. And, indeed, also to continue to collect data during those clinical trials that might improve future drug discovery projects. 

“So far nobody has challenged us to solve a disease in partnership,” he says. “If that happens, I’ll be a very happy man.”

That said, Insilico Medicine’s approach to novel target discovery has been used piecemeal, too. For instance, Insilico Medicine has collaborated with Pfizer on novel target discovery, with Johnson & Johnson on small molecule design, and with Taisho Pharmaceuticals on both. Today, the company also announced a new partnership with Teva Branded Pharmaceutical Products R&D, Inc. Teva will aim to use PandaOmics to identify new drug targets.

Insilico Medicine isn’t the only one raking in money and partnerships, either. The whole field of A.I-based novel target discovery has been experiencing significant hype.

In 2019, Nature noted that at least 20 partnerships between major drug companies and A.I drug discovery tech companies had been reported. In 2020, investment in A.I companies pursuing drug development increased to $13.9 billion, a four-fold increase from 2019, per Stanford University’s Artificial Intelligence Index annual report.

Drug discovery projects received the greatest amount of private A.I investment in 2020, a trend that can partially be attributed to the pandemic’s need for rapid drug development. However, the roots of the hype predate Covid-19. 

Zhavoronkov is aware that A.I-based drug development is riding a bit of a hype wave right now. “Companies without substantial evidence supporting their A.I-powered drug discovery claims manage to raise very quickly,” he notes.

Insilico Medicine, he says, can distinguish itself based on the quality of its investors. “Our investors don’t gamble,” he says. 

But like so many other A.I-based drug discovery platforms, we’ll have to see whether they make it through the clinical trial churn. 

 

#ai, #artificial-intelligence, #clinical-trials, #drug-discovery, #machine-learning, #tc