Recycling robotics company AMP Robotics could raise up to $70M

AMP Robotics, the recycling robotics technology developer backed by investors including Sequoia Capital and Sidewalk Infrastructure Partners, is close to closing on as much as $70 million in new financing, according to multiple sources with knowledge of the company’s plans.

The new financing speaks to AMP Robotics’ continued success in pilot projects and new partnerships that are rapidly expanding the company’s deployments.

Earlier this month, the company announced a new deal that represented its largest purchase order yet for its trash-sorting and recycling robots.

That order, placed by the waste handling company Waste Connections for 24 machine learning-enabled robotic recycling systems, showcased the efficacy of the company’s recycling technology.

It comes on the back of a pilot program earlier in the year at a Toronto apartment complex, where tenants could opt in to have their recycling habits monitored by AMP Robotics and reported back to them in an effort to improve their recycling behavior.

The potential benefits of AMP Robotics’ machine learning-enabled robots are undeniable. The company’s technology can sort waste streams in ways that traditional systems never could, and at a cost far lower than most waste handling facilities currently pay.

As TechCrunch reported earlier, the tech can tell the difference between high-density polyethylene and polyethylene terephthalate, low-density polyethylene, polypropylene and polystyrene. The robots can also sort for color, clarity, opacity and shapes like lids, tubs, clamshells and cups — the robots can even identify the brands on packaging.
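
To make concrete what a sorting robot does with those classifications, here’s a deliberately simplified sketch in Python; the labels, bin names and confidence threshold are our own illustrative assumptions, not AMP’s actual software.

```python
# Toy routing step downstream of a vision classifier (illustrative only).
# A trained model (not shown) is assumed to emit a material label plus a
# confidence score for each item passing on the belt.

BIN_FOR_MATERIAL = {
    "HDPE": "bin_hdpe",   # high-density polyethylene
    "PET": "bin_pet",     # polyethylene terephthalate
    "LDPE": "bin_ldpe",   # low-density polyethylene
    "PP": "bin_pp",       # polypropylene
    "PS": "bin_ps",       # polystyrene
}

def route_item(label, confidence, threshold=0.85):
    """Pick a bin for a confidently classified item; otherwise let it pass."""
    if confidence >= threshold and label in BIN_FOR_MATERIAL:
        return BIN_FOR_MATERIAL[label]
    return "residue_line"  # low-confidence items stay on the belt

print(route_item("PET", 0.97))  # bin_pet
print(route_item("PET", 0.60))  # residue_line
```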

AMP’s robots have already been deployed in North America, Asia and Europe, with recent installations in Spain and across the U.S. in California, Colorado, Florida, Minnesota, Michigan, New York, Texas, Virginia and Wisconsin.

At the beginning of the year, AMP Robotics worked with its investor Sidewalk Labs on a pilot program that provided residents of a single 250-unit apartment building in Toronto with detailed information about their recycling habits. Sidewalk Labs is transporting the waste to a Canada Fibers material recovery facility, where trash is sorted by both Canada Fibers employees and AMP Robotics’ machines.

Once the waste is categorized, sorted and recorded, Sidewalk communicates with residents of the building about how they’re doing in their recycling efforts.

It was only last November that the Denver-based AMP Robotics raised a $16 million round from Sequoia Capital and others to finance the early commercialization of its technology.

As TechCrunch reported at the time, recycling businesses used to be able to rely on China to buy up any waste stream (no matter the quality of the material). However, about two years ago, China decided it would no longer serve as the world’s garbage dump and put strict standards in place for the kinds of raw materials it would be willing to receive from other countries.

The result has been higher costs at recycling facilities, which are now required to sort their garbage more effectively. At the time, low unemployment rates were putting a squeeze on labor availability at facilities where trash was sorted. Over the past year, the COVID-19 pandemic has put even more pressure on those recycling and waste handling facilities, even as their staff have been classified as “essential workers.”

Given the economic reality, recyclers are turning to AMP’s technology, a combination of computer vision, machine learning and robotic automation, to improve efficiencies at their facilities.

And, the power of AMP’s technology to identify waste products in a stream has other benefits, according to chief executive Matanya Horowitz.

“We can identify… whether it’s a Coke or Pepsi can or a Starbucks cup,” Horowitz told TechCrunch last year. “So that people can help design their product for circularity… we’re building out our reporting capabilities and that, to them, is something that is of high interest.”

AMP Robotics declined to comment for this article.

Serenade snags $2.1M seed round to turn speech into code

Several years ago, Serenade co-founder Matt Wiethoff was a developer at Quora when he was diagnosed with a severe repetitive stress injury to his hand and couldn’t code. He and co-founder Tommy MacWilliam decided to use AI to create a tool that would let him speak the code instead, and Serenade was born.

Today, the company announced a $2.1 million seed investment led by Amplify Partners and Neo. While it was at it, the startup also announced the first commercial version of the product, Serenade Pro.

“Serenade is an app that you’ll download onto your computer. It will plug into your existing editors like Visual Studio Code or IntelliJ, and then allows you to speak your code,” co-founder MacWilliam told me. At that point the startup’s AI engine takes over and translates what you say into syntactically correct code.
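
To give a flavor of what “speaking your code” involves, here’s a minimal, hypothetical sketch of mapping transcribed utterances onto syntactically correct Python; Serenade’s real engine is a trained ML system, and the phrases and patterns below are invented for illustration.

```python
# A toy utterance-to-code translator (illustrative, not Serenade's engine).
import re

def utterance_to_code(utterance):
    """Translate a couple of spoken patterns into Python source."""
    m = re.match(r"define function (\w+)", utterance)
    if m:
        return f"def {m.group(1)}():\n    pass"
    m = re.match(r"set (\w+) to (\d+)", utterance)
    if m:
        return f"{m.group(1)} = {m.group(2)}"
    raise ValueError(f"unrecognized utterance: {utterance!r}")

print(utterance_to_code("define function fetch_users"))
print(utterance_to_code("set retries to 3"))
```

A production system has to handle dictation of arbitrary identifiers, edits and navigation, which is where the startup’s trained models come in.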

He says that while there are a bunch of generalized speech-to-text engines out there, the founders hadn’t been able to find anything tuned specifically for the requirements of someone entering code. While it may seem that this would have a pretty narrow market focus, the co-founders see this use case as simply a starting point, with developers using this kind of technology even when not injured.

“Our vision is that this is just the future of programming. With machine learning, coding becomes faster and easier than ever before, and our AI eliminates a lot of the rote mechanical parts of programming. So rather than needing to remember keyboard shortcuts or syntax details of a language, you can just focus on expressing your idea naturally, and then our machine learning takes care of translating that into actual code for you,” MacWilliam explained.

The startup has five employees today, but has plans to build the company to 15-20 in the next year fueled by the introduction of the commercial product and the new funding. As they build the company, MacWilliam says being diverse is a big part of that.

“Our diversity strategy ranges throughout the process. I think it starts at the top of the funnel. We need to make sure that we’re going out and reaching great people — there are great people everywhere and it’s on us to find them and convince them why working at Serenade would be great,” he said. They are working with a variety of sources to find a diverse group of candidates that stretches beyond their own personal network, then looking at how they interview and judge candidates’ skill sets with the goal of building a more diverse employee base.

The company sees itself as a way to move beyond the keyboard to speaking your code, and it intends to use this money to continue building the product, while building a community of dedicated users. “We’ll be thinking about how we can showcase the value of coding by voice, how we can put together demos and build a community of product champions showing that [it’s faster to code using your voice],” he said.

Can artificial intelligence give elephants a winning edge?

Images of elephants roaming the African plains are imprinted on all of our minds and easily recognized as a symbol of Africa. But the future of elephants today is uncertain. An elephant is killed by poachers every 15 minutes, and humans, who so love watching them, have declared war on their species. Most people are not poachers or ivory collectors and do not intentionally harm wildlife, but silence or indifference to the battle at hand is just as deadly.

You can choose to read this article, feel bad for a moment and then move on to your next email and start your day.

Or, perhaps you will pause and think: Our opportunities to help save wildlife, especially elephants, are right in front of us and grow every day. And some of these opportunities are rooted in machine learning (ML) and the magical outcome we fondly call AI.

Open-source developers are giving elephants a neural edge

Six months ago, amid the COVID-19 pandemic, Hackster.io, a large open-source community owned by Avnet, and Smart Parks, a Dutch organization focused on wildlife conservation, reached out to tech industry leaders, including Microsoft, u-blox, Taoglas, Nordic Semiconductor, Western Digital and Edge Impulse, with an idea to fund the R&D, manufacturing and shipping of 10 of the most advanced elephant tracking collars ever built.

These modern tracking collars are designed to run advanced machine-learning (ML) algorithms, with the longest battery life ever delivered in similar devices and a networking range more expansive than ever seen before. To make the vision even more audacious, the partners committed to fully open-sourcing and freely sharing the outcome of the effort via OpenCollar.io, a conservation organization championing open-source tracking collar hardware and software for environmental and wildlife monitoring projects.

Our opportunities to help save wildlife — especially elephants — are right in front of us and grow every day.

The tracker, ElephantEdge, would be built by specialist engineering firm Irnas, with the Hackster community coming together to create fully deployable ML models using Edge Impulse and telemetry dashboards using Avnet tools to run on the newly built hardware. Such an ambitious project had never been attempted before, and many doubted that such a collaborative and innovative effort could be pulled off.

Creating the world’s best elephant-tracking device

Only, they pulled it off. Brilliantly. The new ElephantEdge tracker is considered the most advanced of its kind, with eight years of battery life and a range of hundreds of miles via LoRaWAN networking repeaters, running TinyML models that will provide park rangers with a better understanding of elephant acoustics, motion, location, environmental anomalies and more. The tracker can communicate with an array of sensors, connected by LoRaWAN technology to park rangers’ phones and laptops.

This gives rangers a more accurate picture and location to track than earlier systems, which captured and reported pictures of all wildlife and ran down the trackers’ battery life. The advanced ML software that runs on these trackers is built explicitly for elephants and was developed by the Hackster.io community in a public design challenge.

“Elephants are the gardeners of the ecosystems as their roaming in itself creates space for other species to thrive. Our ElephantEdge project brings in people from all over the world to create the best technology vital for the survival of these gentle giants. Every day they are threatened by habitat destruction and poaching. This innovation and partnerships allow us to gain more insight into their behavior so we can improve protection,” said Smart Parks co-founder Tim van Dam.

Open-source, community-powered, conservation-AI at work

With hardware built by Irnas and Smart Parks, the community was busy building the algorithms to make it sing. Software developers and data scientists Swapnil Verma and Mausam Jain, based in the U.K. and Japan, created Elephant AI. Using Edge Impulse, the team developed two ML models that tap the tracker’s onboard sensors and provide critical information for park rangers.

The first community-led project, called Human Presence Detection, will alert park rangers to poaching risk by using audio sampling to detect human presence in areas where humans are not supposed to be. The algorithm uses audio sensors to record sound and sends its detections over the LoRaWAN network directly to a ranger’s phone to create an immediate alert.

The second model, named Elephant Activity Monitoring, detects general elephant activity, taking time-series input from the tracker’s accelerometer to recognize running, sleeping and grazing, providing conservation specialists with the critical information they need to protect the elephants.
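
As a rough illustration of how such an activity model consumes accelerometer time series, here’s a tiny hand-tuned sketch; the real ElephantEdge models are trained in Edge Impulse rather than thresholded by hand, and the numbers below are invented.

```python
# Illustrative activity labeling from windows of accelerometer magnitudes.
# A trained TinyML classifier would replace these hand-picked thresholds.
import statistics

def classify_window(samples):
    """Label a window of accelerometer magnitudes by how much they vary."""
    energy = statistics.pstdev(samples)
    if energy > 1.5:
        return "running"
    if energy > 0.3:
        return "grazing"
    return "sleeping"

print(classify_window([0.1, 0.1, 0.2, 0.1]))       # sleeping
print(classify_window([0.5, 1.2, 0.4, 1.3]))       # grazing
print(classify_window([0.2, 4.0, 0.1, 4.2, 0.3]))  # running
```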

Another brilliant community development came from the other side of the world. Sara Olsson, a Swedish software engineer with a passion for the natural world, created a TinyML and IoT monitoring dashboard to help park rangers with conservation efforts.

With few resources and little support, Olsson built a full telemetry dashboard combined with ML algorithms to monitor camera traps and watering holes, while reducing network traffic by processing data on the collar and considerably saving battery life. To validate her hypothesis, she used 1,155 data models and 311 tests!

Sara Olsson’s TinyML and IoT monitoring dashboard. Image Credits: Sara Olsson

She completed her work in the Edge Impulse studio, creating the models and testing them against camera trap streams from Africam using an OpenMV camera, all from the comfort of her home.

Technology for good works, but human behavior must change

Project ElephantEdge is an example of how commercial and public interests can converge in a collaborative sustainability effort to advance wildlife conservation. The new collar generates critical data that equips park rangers to make urgent, life-saving decisions about protecting their territories. By the end of 2021, at least ten elephants will be sporting the new collars in selected parks across Africa, in partnership with the World Wildlife Fund and Vulcan’s EarthRanger, unleashing a new wave of conservation, learning and defense.

Naturally, this is great: the technology works, and it’s helping elephants like never before. But in reality, the root cause of the problem runs much deeper. Humans must change their relationship to the natural world for a proper revival of elephant habitats and populations to occur.

“The threat to elephants is greater than it’s ever been,” said Richard Leakey, a leading palaeoanthropologist and conservationist. The main argument for allowing trophy or ivory hunting is that it raises money for conservation and local communities. However, a recent report revealed that only 3% of Africa’s hunting revenue trickles down to communities in hunting areas. Animals don’t need to die to make money for the communities that live around them.

With great technology, collaboration and a commitment to address the underlying cultural conditions and the ivory trade that leads to most elephant deaths, there’s a real chance to save these singular creatures.

Why is GoCardless COO Carlos Gonzalez-Cadenas pivoting to become a full-time VC?

Index Ventures, a London- and San Francisco-headquartered venture capital firm that primarily invests in Europe and the U.S., recently announced its latest partner. Carlos Gonzalez-Cadenas, currently COO of London-based fintech GoCardless and previously the chief product officer of Skyscanner, will join Index in January.

Gonzalez-Cadenas is a seasoned entrepreneur and operator, but has also become a prolific angel investor in the U.K. and Europe over the last three years, making more than 50 angel investments in total. Well-regarded by founders and co-investors, his transition to a full-time role in venture capital feels like quite a natural one.

Earlier this week, TechCrunch caught up with Gonzalez-Cadenas over Zoom to learn more about his new role at Index and how he intends to source deals and support founders. Index’s latest hire also shared his insights on Europe’s venture market, describing this era as the “best moment in entrepreneurship in Europe.”

TechCrunch: Let me start by asking, why do you want to become a VC? You’re obviously a well-established entrepreneur and operator, are you sure venture capital is the career for you?

Carlos Gonzalez-Cadenas: I’ve been an angel investor for the last three years and this is something that has basically grown for me quite organically. I started doing just a handful and seeing if this is something I like and over time it has grown quite a lot and so has the number of entrepreneurs I’m partnered with. And this is something I’ve been increasingly more excited to do. So it has grown organically and something that emotionally has been getting closer and closer as time has passed.

And the things I like more specifically are: One, I’m quite a curious person, and for me, investing gives you the possibility of learning a lot about different sectors, about different entrepreneurs, different ways of building businesses, and that is something that I enjoy a lot.

The second bit is that I care a lot about helping entrepreneurs, especially the next generation of entrepreneurs, build great businesses in Europe. I’ve been very lucky, in the past, to learn from great people, like Gareth [Williams, Skyscanner co-founder] and Hiroki [Takeuchi, CEO at GoCardless], in my journey. I feel a duty to help the next generation of entrepreneurs and share all the things that I’ve learnt. I care a lot about setting up founders as much as possible for success and sharing all those experiences I’ve learned [from].

These are the key two motivations that have led me to decide that it would be a great time now to move to the investing side.

How have you managed your deal flow while having a full-time job and where is that deal flow coming from?

It is typically coming in three buckets. A part of it is coming from my entrepreneur and operator network. So there are entrepreneurs and operators I know that are referring other entrepreneurs to me. Another bucket is other investors that I typically co-invest with. Another bucket is venture capitalists. I basically tend to invest quite a lot with VCs and in some cases they are referring deals to me.

In terms of managing it alongside GoCardless, it takes quite a lot of effort. It requires a lot of dedication and time invested during evenings and weekends.

The good thing is that my network typically tends to send me quite highly curated deals so essentially the deal flow I have luckily tends to be quite high quality, which makes things a bit more manageable. But don’t get me wrong, it still takes quite a lot of effort even if the deal flow is relatively high quality.

Presumably you haven’t been able to be all that hands-on as an angel investor, so how are you going to make that transition and what is it that you think you bring with the operational side to venture?

The way I think about this is, the entrepreneurs I typically invest in and their companies tend to be quite capable in their day-to-day perspective. Where they tend to find more value in interactions with me is what I call the “moments of truth.” Those key decisions, those key points in the journey where essentially it can influence the trajectory of the business in a fundamental way. It could be things like, I am fundraising and I don’t know how to position the business. Or I’m thinking about my strategy for the next 18 months and I will basically welcome an experienced person giving me a qualified opinion.

Or I have a big people problem and I don’t know how to solve that problem and I need that third person who has been in my shoes before. Or it could be that I’m thinking about how to organize my team as I move from startup to scale-up and I need help from someone who has scaled teams before. Or could be that I’m hiring three executives and I don’t know what a great CMO looks like. It’s those high-impact, high-leverage questions that the entrepreneurs tend to find helpful engaging with me, as opposed to very detailed day-to-day things that most of the entrepreneurs I work with tend to be quite capable of doing. And so far that model is working. The other thing is that the model is quite scalable because you are engaging 2-3 times per year but those times are high quality and highly impactful for the entrepreneur.

I typically also tend to have pretty regular and frequent communication with entrepreneurs on Slack. It’s more like quick questions that can be solved, and I tend to get quite a lot of that. So I think it’s that bimodal approach of high-frequency questions that we can solve by asynchronous means or high-impact moments a few times per year where, essentially, we need to sit down and we need to think together deeply about the problem.

And I tend to do nothing in the middle, where essentially, it’s stuff that is not so impactful but takes a huge amount of time for everyone; that doesn’t tend to be the most effective way of helping entrepreneurs. Obviously, I’m guided by what entrepreneurs want from their perspective, so I’m always training the model in response to what they need.

Neatsy wants to reduce sneaker returns with 3D foot scans

U.S.-based startup Neatsy AI is using the iPhone’s depth-sensing FaceID selfie camera as a foot scanner to capture 3D models for predicting a comfortable sneaker fit.

Its app, currently soft launched for iOS but due to launch officially next month, asks the user a few basic questions about sneaker fit preference before walking through a set of steps to capture a 3D scan of their feet using the iPhone’s front-facing camera. The scan is used to offer personalized fit predictions for a selection of sneakers offered for sale in-app — displaying an individualized fit score (out of five) in green text next to each sneaker model.

Shopping for shoes online can lead to high return rates once buyers actually get to slip on their chosen pair, since shoe sizing isn’t standardized across different brands. That’s the problem Neatsy wants its AI to tackle by incorporating another more individual fit signal into the process.

The startup, which was founded in March 2019, has raised $400K in pre-seed funding from angel investors to get its iOS app to market. The app is currently available in the US, UK, Germany, France, Italy, Spain, Netherlands, Canada and Russia. 

Neatsy analyzes app users’ foot scans using a machine learning model it’s devised to predict a comfy fit across a range of major sneaker brands — currently including Puma, Nike, Jordan Air and Adidas — based on scanning the insoles of sneakers, per CEO and founder Artem Semyanov.

He says they’re also factoring in the material shoes are made of, and will be honing the algorithm on an ongoing basis based on fit feedback from users. (The startup says it’s secured a U.S. patent for its 3D scanning tech for shoe recommendations.)
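
As a rough illustration of the idea of scoring fit from scan measurements, here’s a hypothetical sketch; the inputs, weights and toe-room allowance are invented, and Neatsy’s actual model is learned from scan and feedback data rather than hand-coded.

```python
# Hypothetical fit scoring from foot vs. insole measurements (illustration
# only; Neatsy's model is trained on real 3D scans and fit feedback).

def fit_score(foot_length_mm, foot_width_mm, insole_length_mm, insole_width_mm):
    """Return a 0-5 score, penalizing length and width mismatches."""
    length_gap = abs(insole_length_mm - foot_length_mm - 10)  # ~10 mm toe room
    width_gap = abs(insole_width_mm - foot_width_mm)
    penalty = 0.2 * length_gap + 0.3 * width_gap
    return max(0.0, round(5.0 - penalty, 1))

print(fit_score(265, 100, 275, 101))  # near-ideal fit -> 4.7
print(fit_score(265, 100, 270, 95))   # tight shoe -> 2.5
```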

The team tested the algorithm’s efficiency via some commercial pilots this summer — and say they were able to demonstrate a 2.7x reduction in sneaker return rates based on size, and a 1.9x decrease in returns overall, for a focus group with 140 respondents.

Handling returns is clearly a major cost for online retailers — Neatsy estimates that sneaker returns specifically rack up $30BN annually for ecommerce outlets, factoring in logistics costs and other factors like damaged boxes and missing sneakers.

“All in all, shoe ecommerce returns vary among products and shops between 30% and 50%. The most common reasons for this category are fit & size mismatch,” says Semyanov, who headed up the machine learning team at Prism Labs prior to founding Neatsy.

“According to Zappos, customers who purchase its most expensive footwear ultimately return ~50% of everything they buy. 70% [of] online shoppers make returns each year. Statista estimates return deliveries will cost businesses $550 billion by 2020,” he tells us, responding to questions via email.

“A 2019 survey from UPS found that, for 73% of shoppers, the overall returns experience impacts how likely they are to purchase from a given retailer again, and 68% say the experience impacts their overall perceptions of the retailer. That’s the drama here!

“Retailers are forced to accept steep costs of returns because otherwise, customers won’t buy. Vs us who want to treat the main reasons of returns rather than treating the symptoms.”

While ecommerce giants like Amazon address this issue by focusing on logistics to reduce friction in the delivery process, speeding up deliveries and returns so customers spend less time waiting to get the right stuff, scores of startups have been trying to tackle size and fit with a variety of digital (and/or less high tech) tools over the past five+ years — from 3D body models to ‘smart’ sizing suits or even brand- and garment-specific sizing tape (Nudea‘s fit tape for bras) — though no one has managed to come up with a single solution that works for everything and everyone. And a number of these startups have deadpooled or been acquired by ecommerce platforms without a whole lot to show for it.

While Neatsy is attempting to tackle what plenty of other founders have tried to do on the fit front, it is at least targeting a specific niche (sneakers) — a relatively narrow focus that may help it hone a useful tool.

It’s also able to lean on mainstream availability of the iPhone’s sensing hardware to get a leg up. (Whereas a custom shoe design startup that’s been around for longer, Solely Original, has offered custom fit by charging a premium to send out an individual fit kit.)

But even zeroing in on sneaker comfort, Neatsy’s foot scanning process requires the user to correctly navigate quite a number of steps (see the full flow in the below video). Plus you need to have a pair of single-block colored socks handy (stripy sock lovers are in trouble). So it’s not a two-second process, though the scan only has to be done once.

At the time of writing we hadn’t been able to test Neatsy’s scanning process for ourselves, as it requires an iPhone with a FaceID depth-sensing camera. On this writer’s 2nd-gen iPhone SE, the app allowed me to swipe through each step of the scan instruction flow but then hung at what should have been the commencement of scanning — displaying a green outline template of a left foot against a black screen.

This is a bug the team said they’ll be fixing so the scanner gets turned off entirely for iPhone models that don’t have the necessary hardware. (Its App Store listing states it’s compatible with the iPhone SE (2nd generation), though doesn’t specify that the foot scan feature isn’t.)

While the current version of Neatsy’s app is a direct to consumer ecommerce play, targeting select sneaker models at app savvy Gen Z/Millennials, it’s clearly intended as a shopfront for retailers to check out the technology.

When we ask about this, Semyanov confirms its longer-term ambition is for its custom fit model to become a standard piece of the ecommerce puzzle.

“Neatsy app is our fastest way to show the world our vision of what the future online shop should be,” he tells TechCrunch. “It attracts users to shops and we get revenue share when users buy sneakers via us. The app serves as a new low-return sales channel for a retailer and as a way to see the economic effect on returns by themselves.

“Speaking long term we think that our future is B2B and all ecommerce shops would eventually have a fitting tech, we bet it will be ours. It will be the same as having a credit card payment integration in your online shop.”

FireEye acquires Respond Software for $186M, announces $400M investment

The security sector is ever frothy and acquisitive. Just last week Palo Alto Networks grabbed Expanse for $800 million. Today it was FireEye’s turn, snagging Respond Software, a company that helps customers investigate and understand security incidents while reducing the need for highly trained and scarce security analysts. The deal has closed, according to the company.

FireEye had its eye on Respond’s Analyst product, which it plans to fold into its Mandiant Solutions platform. Like many companies today, FireEye is focused on using machine learning to help bolster its solutions and bring a level of automation to sorting through the data, finding real issues and weeding out false positives. The acquisition gives it a quick influx of machine learning-fueled software.

FireEye sees a product that can help add speed to its existing tooling. “With Mandiant’s position on the front lines, we know what to look for in an attack, and Respond’s cloud-based machine learning productizes our expertise to deliver faster outcomes and protect more customers,” Kevin Mandia, FireEye CEO, said in a statement announcing the deal.

Mike Armistead, CEO at Respond, wrote in a company blog post that today’s acquisition marks the end of a 4-year journey for the startup, but it believes it has landed in a good home with FireEye. “We are proud to announce that after many months of discussion, we are becoming part of the Mandiant Solutions portfolio, a solution organization inside FireEye,” Armistead wrote.

While FireEye was at it, it also announced a $400 million investment from Blackstone Tactical Opportunities fund and ClearSky (an investor in Respond), giving the public company a new influx of cash to make additional moves like the acquisition it made today.

It didn’t come cheap. “Under the terms of its investment, Blackstone and ClearSky will purchase $400 million in shares of a newly designated 4.5% Series A Convertible Preferred Stock of FireEye (the “Series A Preferred”), with a purchase price of $1,000 per share. The Series A Preferred will be convertible into shares of FireEye’s common stock at a conversion price of $18.00 per share,” the company explained in a statement. The stock closed at $14.24 today, putting that conversion price at a premium of roughly 26% over the current share price.

Respond, which was founded in 2016, raised $32 million including a $12 million Series A in 2017 led by CRV and Foundation Capital and a $20 million Series B led by ClearSky last year, according to Crunchbase data.

Autodesk CEO Andrew Anagnost explains the strategy behind acquiring Spacemaker

Autodesk, the U.S. publicly listed software and services company that targets engineering and design industries, acquired Norway’s Spacemaker this week. The startup has developed AI-supported software for urban development, something Autodesk CEO Andrew Anagnost broadly calls generative design.

The price of the acquisition is $240 million in a mostly cash deal. Spacemaker’s VC backers included European firms Atomico and Northzone, which co-led the company’s $25 million Series A round in 2019. Other investors on the cap table include Nordic real estate innovator NREP, Nordic property developer OBOS, U.K. real estate technology fund Round Hill Ventures and Norway’s Construct Venture.

In an interview with TechCrunch, Anagnost shared more on Autodesk’s strategy since it transformed into a cloud-first company and what attracted him to the 115-person Spacemaker team. We also delved more into Spacemaker’s mission to augment the work of humans and not only speed up the urban development design and planning process but also improve outcomes, including around sustainability and quality of life for the people who will ultimately live in the resulting spaces.

I also asked whether Spacemaker sold out too early, and why U.S.-headquartered Autodesk acquired a startup based in Norway over numerous competitors closer to home. What follows is a transcript of our Zoom call, lightly edited for length and clarity.

TechCrunch: Let’s start high-level. What is the strategy behind Autodesk acquiring Spacemaker?

Andrew Anagnost: I think Autodesk, for a while … has had a very clearly stated strategy about using the power of the cloud; cheap compute in the cloud and machine learning/artificial intelligence to kind of evolve and change the way people design things. This is something strategically we’ve been working toward for quite a while both with the products we make internally, with the capabilities we roll out that are more cutting edge and with also our initiative when we look at companies we’re interested in acquiring.

As you probably know, Spacemaker really stands out in terms of our space, the architecture space, and the engineering and owner space, in terms of applying cloud computing, artificial intelligence, data science, to really helping people explore multiple options and come up with better decisions. So it’s completely in line with the strategy that we had. We’ve been looking at them for over a year in terms of whether or not they were the right kind of company for us.

Culturally, they’re the right company. Vision and strategy-wise, they’re the right company. Also, talent-wise, they’re the right company. They really do stand out. They’ve built a real, practical, usable application that helps a segment of our population use machine learning to really create better outcomes in a critical area, which is urban redevelopment and development.

So it’s totally aligned with what we’re trying to do. It’s not only a platform for the product they do today — they have a great product that’s getting increasing adoption — but we also see the team playing an important role in the future of where we’re taking our applications. We actually see what Spacemaker has done reaching closer and closer to what Revit does [an existing Autodesk product]. Having those two applications collaborate more closely together to evolve the way people assess not only these urban planning designs that they’re focused on right now, but also in the future, other types of building projects and building analysis and building option exploration.

How did you discover Spacemaker? I mean, I’m guessing you probably looked at other companies in the space.

We’ve been watching this space for a while; the application that Spacemaker has built we would characterize it, from our terminology, as generative design for urban planning, meaning the machine generating options and option explorations for urban planning type applications, and it overlaps both architecture and owners.
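
For readers unfamiliar with the term, generative design boils down to a generate-and-score loop: the machine proposes many candidate options and ranks them against objectives. Here’s a toy sketch under invented objectives and weights; it is not Spacemaker’s algorithm.

```python
# Toy generate-and-score loop in the spirit of generative design
# (illustrative only; objectives and weights are made up).
import random

random.seed(7)

def score(layout):
    """Higher is better: trade daylight against noise exposure."""
    return 0.6 * layout["daylight"] - 0.4 * layout["noise"]

# The machine can explore far more candidate site layouts than a human could.
candidates = [
    {"id": i, "daylight": random.random(), "noise": random.random()}
    for i in range(1000)
]
best = max(candidates, key=score)
print(best["id"], round(score(best), 3))
```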

Diagnoss launches its coding assistant for medical billing

Diagnoss, the Berkeley, Calif.-based startup backed by the machine learning-focused startup studio The House, has launched its coding assistant for medical billing, the company said.

The software provides real-time feedback on documentation and coding.

Coding problems can be the difference between success and failure for hospitals, according to Diagnoss. Healthcare providers were decimated by the COVID-19 outbreak, with hospitals operating below 60% capacity and one-fourth of them facing the potential for closing in a year if the pandemic continues to disrupt care.

The cost pressures mean that any coding error can be the financial push that forces a healthcare provider over the edge.

“For every patient encounter, a physician spends an average of 16 minutes on administration, which adds up to several hours every single day. In addition, codes entered are often wrong – up to a 30% error rate – resulting in missed or delayed reimbursements. We believe that, with the great progress we’ve seen with artificial intelligence and machine learning, we can finally address some of these inefficiencies that are leading to physician burnout and financial strain,”  said Abboud Chaballout, founder and chief executive of Diagnoss, in a statement.

Diagnoss acts like a grammar-checking tool, but its natural language processing software is focused on reading doctors’ notes. The company’s tools can provide evaluation and management codes for patient encounters; point out missing information in doctors’ notes; and predict the diagnosis and procedure codes that could apply after reviewing a doctor’s notes.
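
To illustrate the shape of the task (not Diagnoss’ actual models, which are trained NLP systems), here’s a deliberately naive keyword-based sketch of suggesting billing codes from note text; the trigger phrases are our own, though the ICD-10 codes shown are real.

```python
# Naive code suggestion from note text (illustration only; a production
# system like Diagnoss' uses trained NLP models, not keyword rules).

SUGGESTIONS = {
    "chest pain": ("R07.9", "Chest pain, unspecified"),
    "type 2 diabetes": ("E11.9", "Type 2 diabetes mellitus without complications"),
    "hypertension": ("I10", "Essential (primary) hypertension"),
}

def suggest_codes(note):
    """Return (ICD-10 code, description) pairs triggered by the note."""
    text = note.lower()
    return [code for phrase, code in SUGGESTIONS.items() if phrase in text]

print(suggest_codes("Patient reports chest pain; history of hypertension."))
```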

In a study of 39,000 de-identified EHR charts, the company found that its machine coding service was about 50% more accurate than human coders, according to a Diagnoss review.

Physician practices are already using Diagnoss’ service through a previously announced partnership with the mobile EHR vendor DrChrono.

#assistant, #california, #coding, #drchrono, #electronic-health-records, #health, #healthcare, #knowledge, #machine-learning, #natural-language-processing, #physician, #tc

0

Nvidia developed a radically different way to compress video calls

Instead of transmitting an image for every frame, Maxine sends keypoint data that allows the receiving computer to re-create the face using a neural network. (credit: Nvidia)

Last month, Nvidia announced a new platform called Maxine that uses AI to enhance the performance and functionality of video conferencing software. The software uses a neural network to create a compact representation of a person’s face. This compact representation can then be sent across the network, where a second neural network reconstructs the original image—possibly with helpful modifications.

Nvidia says that its technique can reduce the bandwidth needs of video conferencing software by a factor of 10 compared to conventional compression techniques. It can also change how a person’s face is displayed. For example, if someone appears to be facing off-center due to the position of her camera, the software can rotate her face to look straight instead. Software can also replace someone’s real face with an animated avatar.
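
Some back-of-the-envelope arithmetic shows why sending keypoints instead of frames saves so much bandwidth. The figures below are illustrative assumptions, not Nvidia’s published numbers.

```python
# Rough bandwidth comparison: compressed frames vs. keypoint payloads.
# All sizes are assumptions for illustration.

fps = 30
compressed_frame_bytes = 15_000          # a plausible compressed video frame
keypoints, bytes_per_keypoint = 130, 8   # assumed keypoint payload per frame

video_bps = fps * compressed_frame_bytes * 8
keypoint_bps = fps * keypoints * bytes_per_keypoint * 8

print(f"conventional: {video_bps / 1e6:.2f} Mbit/s")
print(f"keypoints:    {keypoint_bps / 1e3:.1f} kbit/s "
      f"(~{video_bps / keypoint_bps:.0f}x smaller)")
```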

Maxine is a software development kit, not a consumer product. Nvidia is hoping third-party software developers will use Maxine to improve their own video conferencing software. And the software comes with an important limitation: the device receiving a video stream needs an Nvidia GPU with tensor core technology. To support devices without an appropriate graphics card, Nvidia recommends that video frames be generated in the cloud—an approach that may or may not work well in practice.

Portugal’s Faber reaches $24.3M for its second fund aimed at data-driven startups from Iberia

Portuguese VC Faber has hit the first close of its Faber Tech II fund at €20.5 million ($24.3 million). The fund will focus on early-stage data-driven startups from Southern Europe and the Iberian peninsula, with the aim of reaching a final close of €30 million in the coming months. The new fund targets pre-Series A and early-stage startups in Artificial Intelligence, Machine Learning and Data Science.

The fund is backed by the European Investment Fund (EIF) and the local Financial Development Institution (IFD), with a joint commitment of €15 million (backed by the Investment Plan for Europe – the Juncker Plan – and through the Portugal Tech program), alongside other private institutional and individual investors.

Alexandre Barbosa, Faber’s managing partner, said: “The success of the first close of our new fund allows us to foresee growing demand for this type of investment, as we believe digital transformation through Artificial Intelligence, Machine Learning and data science is increasingly relevant for companies and their businesses, and we think Southern Europe will be the launchpad for a growing number of them.”

Faber has already ‘warehoused’ three initial investments. It co-financed a €15.6 million Series A for SWORD Health, a Portuguese startup that created the first digital physiotherapy system combining artificial intelligence and clinical teams. It led the pre-seed round of YData, a startup whose data-centric development platform gives data science professionals tools for accessing high-quality, meaningful data while protecting privacy. It also co-financed the pre-seed round of Emotai, a neuroscience-powered analytics and performance-boosting platform for virtual sports.

Faber was an early local investor in the first wave of Portugal’s most promising startups, such as Seedrs (co-founded by Carlos Silva, one of Faber’s partners), which recently announced its merger with CrowdCube, as well as Unbabel, Codacy and Hole19, among others.

Faber’s main focus is deep-tech and data science startups, and as such it has assembled around 20 experts (researchers, data scientists, CTOs, founders, and AI and machine learning professors) as part of its investment strategy.

In particular, it has created the new role of professor-in-residence, the first holder of which is renowned professor Mário Figueiredo of Lisbon’s leading tech university, Instituto Superior Técnico. His interests include signal processing, machine learning, AI and optimization, and he is a highly cited researcher in these fields.

Speaking to TechCrunch in an interview Barbosa added: “We’ve seen first-time, but also second and third-time entrepreneurs coming over to Lisbon, Porto, Barcelona, Valencia, Madrid and experimenting with their next startup and considering starting-up from Iberia in the first place. But also successful entrepreneurs considering extending their engineering teams to Portugal and building engineering hubs in Portugal or Spain.”

“We’ve been historically countercyclical, so we found that startups came to, and appeared in, Iberia back in 2012/2013. This time around, in mid-2020, we’re very bullish on what we can do for the entrepreneurial engine of the economy. We see a lot happening – especially around our thesis – which is basically the data stack, all things data- and AI-driven, machine learning, data science, and we see that as a very relevant core. A lot of the transformation and digitization is happening right now, so we see a lot of promising stuff going on and a lot of promising talent establishing and setting up companies in Portugal and Spain – so that’s why we think this story is relevant for Europe as a whole.”

Abacus.AI raises another $22M and launches new AI modules

AI startup RealityEngines.AI changed its name to Abacus.AI in July. At the same time, it announced a $13 million Series A round. Today, only a few months later, it is not changing its name again, but it is announcing a $22 million Series B round, led by Coatue, with Decibel Ventures and Index Partners participating as well. With this, the company, which was co-founded by former AWS and Google exec Bindu Reddy, has now raised a total of $40.3 million.

Abacus.AI co-founders Bindu Reddy, Arvind Sundararajan and Siddartha Naidu. Image Credits: Abacus.AI

In addition to the new funding, Abacus.AI is also launching a new product today, which it calls Abacus.AI Deconstructed. Originally, the idea behind RealityEngines/Abacus.AI was to provide its users with a platform that would simplify building AI models by using AI to automatically train and optimize them. That hasn’t changed, but as it turns out, a lot of (potential) customers had already invested into their own workflows for building and training deep learning models but were looking for help in putting them into production and managing them throughout their lifecycle.

“One of the big pain points [businesses] had was, ‘look, I have data scientists and I have my models that I’ve built in-house. My data scientists have built them on laptops, but I don’t know how to push them to production. I don’t know how to maintain and keep models in production.’ I think pretty much every startup now is thinking of that problem,” Reddy said.

Since Abacus.AI had already built those tools anyway, the company decided to now also break its service down into three parts that users can adapt without relying on the full platform. That means you can now bring your model to the service and have the company host and monitor the model for you, for example. The service will manage the model in production and, for example, monitor for model drift.
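
Model drift monitoring, in its simplest form, compares the distribution of live inputs or predictions against what the model saw in training. Here’s a minimal sketch of that idea; it is a generic technique shown for illustration, not Abacus.AI’s implementation.

```python
# Minimal drift check: flag a feature whose live mean sits far outside
# the training distribution (generic illustration, not Abacus.AI's code).
import statistics

def drifted(train_values, live_values, z_threshold=3.0):
    """True when the live mean is more than z_threshold sigmas from training."""
    mu = statistics.mean(train_values)
    sigma = statistics.pstdev(train_values) or 1e-9
    z = abs(statistics.mean(live_values) - mu) / sigma
    return z > z_threshold

train = [10.0, 11.0, 9.5, 10.4, 10.1]
print(drifted(train, [10.2, 9.9, 10.6]))   # False: same distribution
print(drifted(train, [14.8, 15.3, 15.1]))  # True: inputs have shifted
```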

Another area Abacus.AI has long focused on is model explainability and de-biasing, so it’s making that available as a module as well, alongside its real-time machine learning feature store that helps organizations create, store and share their machine learning features and deploy them into production.

As for the funding, Reddy tells me the company didn’t really have to raise a new round at this point. After the company announced its first round earlier this year, there was quite a lot of interest from others to also invest. “So we decided that we may as well raise the next round because we were seeing adoption, we felt we were ready product-wise. But we didn’t have a large enough sales team. And raising a little early made sense to build up the sales team,” she said.

Reddy also stressed that unlike some of the company’s competitors, Abacus.AI is trying to build a full-stack self-service solution that can essentially compete with the offerings of the big cloud vendors. That — and the engineering talent to build it — doesn’t come cheap.

It’s no surprise then that Abacus.AI plans to use the new funding to increase its R&D team, but it will also increase its go-to-market team from two to ten in the coming months. While the company is betting on a self-service model — and is seeing good traction with small- and medium-sized companies — you still need a sales team to work with large enterprises.

Come January, the company also plans to launch support for more languages and more machine vision use cases.

“We are proud to be leading the Series B investment in Abacus.AI, because we think that Abacus.AI’s unique cloud service now makes state-of-the-art AI easily accessible for organizations of all sizes, including start-ups. Abacus.AI’s end-to-end autonomous AI service powered by their Neural Architecture Search invention helps organizations with no ML expertise easily deploy deep learning systems in production.”

Go Jauntly applies AI to seek scale via ‘greener’ walking routes

Unless you’ve been on a very extended digital detox, you’ll have noticed algorithms don’t exactly have the greatest reputation these days, saddled as they are with pervasive questions of bias and inequity. Not to mention the dubious content amplification choices of social media platforms.

Nor has 2020 helped their automated cause — with, in just one example, outraged UK students leading chants of ‘fuck the algorithm‘ this summer as they were assigned exam grades using a flawed model after the government scrapped the sitting of exams during the coronavirus pandemic. (It was later forced into a U-turn on the issue — meaning students got their (human) teachers’ predicted grades instead.)

Given so much AI-fuelled ugliness and algorithmic mistrust, you’d be forgiven for thinking there are no more quick wins left. But walking routes app Go Jauntly may have found a redeeming use-case for AI to lift app users’ spirits in 2020.

It has beta-launched an algorithmically powered routing feature that recommends “green routes” within the user’s vicinity — meaning the leafiest and most pleasant/scenic (i.e. less polluted) urban walks possible — drawing on its understanding of users’ walking behaviour. The thinking is that COVID-19 lockdown-hit Brits could do with some nice new spots to stretch their legs locally and enjoy a change of air.

Go Jauntly’s app has been around since 2017, with more than 175,000 downloads of the (free) app to date, but it’s hoping the algorithmically powered green routes will be a game-changer for scale — given all walks in the app have been manually created by actual (human) boots on the ground up to now (including some user-submitted walks).

That said, the feature is only available to users of the app in the UK and Ireland (and only on iOS; Android is due to get it next Spring) — but the plan is to roll it out globally later in 2021. (The rest of Go Jauntly’s app is currently also available in Sweden, the US, Canada, New Zealand and Australia.)

As well as recommending the most scenic/least polluted route to walk between two destinations in the UK and in Ireland, the algorithm can suggest routes that start and end at a single location — for walks lasting from 10 minutes up to 2+ hours in length.

The machine learning tech powering the green routes feature is drawing on external sources of environmental data including the Tranquil City Index (which maps London based on measures associated with tranquility, e.g. lower pollution and noise), as well as OpenStreetMap and GraphHopper data for routing.
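
One simple way to think about “green” routing is as a shortest-path search where each street segment’s cost is inflated by a pollution or tranquility penalty. The sketch below illustrates that idea on a made-up graph; Go Jauntly’s production routing builds on OpenStreetMap and GraphHopper data, not this code.

```python
# Greener routing sketch: Dijkstra over pollution-weighted edge costs.
# Graph, distances and pollution scores are invented for illustration.
import heapq

def greenest_path(graph, start, goal, green_weight=2.0):
    """Shortest path where cost = distance * (1 + green_weight * pollution)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist, pollution in graph.get(node, []):
            if nxt not in seen:
                step = dist * (1 + green_weight * pollution)
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Edges: (neighbor, distance in km, pollution score from 0 to 1).
graph = {
    "home": [("main_road", 1.0, 0.9), ("park", 1.3, 0.1)],
    "main_road": [("cafe", 0.5, 0.8)],
    "park": [("cafe", 0.6, 0.2)],
}
print(greenest_path(graph, "home", "cafe"))  # prefers the leafier park route
```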

Go Jauntly is keen for beta testers to pull on their hiking boots and road test the algorithmically programmed walks to feed in data to help its models improve over time. So it’s quite possible that an AI’s (data-bounded) notion of ‘scenic’ may not live up to your human standards.

Trusting an AI’s urban walking route recommendation could also mean you end up passing through a less nice and/or welcoming neighbourhood than you’d expected.

Or you might find your route barred because the app is erroneously suggesting you walk through private property — much like a satnav trying to send a car the wrong way down a one-way street.

Ergo, green route guinea pigs should keep their eyes peeled — and definitely avoid straying into pastures new that contain cattle.

Go Jauntly says it hopes to continue to develop the algorithmic feature to incorporate more data sets in the future — such as accessibility information, toilets, and historical points of interest — to expand the types of route requirements it can support, working towards what it dubs a “full cross-platform digital ‘nature prescription’ in 2021”.  

It monetizes its hike-loving community of users via an optional premium subscription which gives access to extra content such as curated walking routes and guided tours, as well as the ability to download certain types of content such as walking trails for offline use.

AI-tool maker Seldon raises £7.1M Series A from AlbionVC and Cambridge Innovation Capital

Seldon is a U.K. startup that specializes in the rarefied world of development tools to optimize Machine Learning. What does this mean? Well, dear reader, it means that the “AI” companies are so fond of trumpeting does actually end up working.

It’s now raised a £7.1M Series A round co-led by AlbionVC and Cambridge Innovation Capital. The round also includes significant participation from existing investors Amadeus Capital Partners and Global Brain, with follow-on investment from other existing shareholders. The £7.1M funding will be used to accelerate R&D and drive commercial expansion, take Seldon Deploy – a new enterprise solution – to market, and double the size of the team over the next 18 months.

More accurately, Seldon is a cloud-agnostic Machine Learning (ML) deployment specialist which works in partnership with industry leaders such as Google, Red Hat, IBM and Amazon Web Services.

Key to its success is that its open-source project Seldon Core has over 700,000 models deployed to date, drastically reducing friction for users deploying ML models. The startup says its customers are getting productivity gains of as much as 92% as a result of utilizing Seldon’s product portfolio.
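
For a sense of what “deploying a model” with this kind of tooling looks like, here’s a minimal sketch of the sort of Python model wrapper Seldon’s open-source inference server can serve; the class name and return values are our own illustrative assumptions, and a real deployment would package a trained model behind this interface.

```python
# Minimal model wrapper in the style Seldon Core's Python server expects
# (illustrative; a real class would load trained weights in __init__).
class MyModel:
    def __init__(self):
        # e.g. self.model = joblib.load("model.joblib") in a real deployment
        self.bias = 0.1

    def predict(self, X, features_names=None):
        """Called once per inference request; X is the request payload."""
        return [[row[0] + self.bias] for row in X]

if __name__ == "__main__":
    print(MyModel().predict([[1.0], [2.0]]))  # [[1.1], [2.1]]
```

The wrapper is then containerized and deployed to Kubernetes, where Seldon Core handles serving, scaling and monitoring.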

Speaking to TechCrunch, Alex Housley, CEO and founder of Seldon, explained that companies are using machine learning across thousands of use cases today, “but the model actually only generates real value when it’s actually running inside a real-world application.”

“So what we’ve seen emerge over these last few years are companies that specialize in specific parts of the machine learning pipeline, such as training version control features. And in our case we’re focusing on deployment. So what this means is that organizations can now build a fully bespoke AI platform that suits their needs, so they can gain a competitive advantage,” he said.

In addition, he said Seldon’s open-source model means that companies are not locked in: “They want to avoid lock-in, and they want to use tools from various different vendors. So this kind of intersection between machine learning, DevOps and cloud-native tooling is really accelerating a lot of innovation across enterprise and also within startups and growth-stage companies.”

Nadine Torbey, an investor at AlbionVC, added: “Seldon is at the forefront of the next wave of tech innovation, and the leadership team are true visionaries. Seldon has been able to build an impressive open-source community and add immediate productivity value to some of the world’s leading companies.”

Vin Lingathoti, Partner at Cambridge Innovation Capital said: “Machine learning has rapidly shifted from a nice-to-have to a must-have for enterprises across all industries. Seldon’s open-source platform operationalizes ML model development and accelerates the time-to-market by eliminating the pain points involved in developing, deploying and monitoring Machine Learning models at scale.”

Construction tech startups are poised to shake up a $1.3-trillion-dollar industry

In the wake of COVID-19 this spring, construction sites across the nation emptied out alongside neighboring restaurants, retail stores, offices and other commercial establishments. Debates ensued over whether the construction industry’s seven million employees should be considered “essential,” while regulations continued to shift on the operation of job sites. Meanwhile, project demand steadily shrank.

Amidst the chaos, construction firms faced an existential question: How will they survive? This question is as relevant today as it was in April. As one of the least-digitized sectors of our economy, construction is ripe for technology disruption.

Construction is a massive, $1.3 trillion industry in the United States — a complex ecosystem of lenders, owners, developers, architects, general contractors, subcontractors and more. While each construction project has a combination of these key roles, the construction process itself is highly variable depending on the asset type. Roughly 41% of domestic construction value is in residential property, 25% in commercial property and 34% in industrial projects. Because each asset type, and even subassets within these classes, tends to involve a different set of stakeholders and processes, most construction firms specialize in one or a few asset groups.

Regardless of asset type, there are four key challenges across construction projects:

High fragmentation: Beyond the developer, architect, engineer and general contractor, projects could involve hundreds of subcontractors with specialized expertise. As the scope of the project increases, coordination among parties becomes increasingly difficult and decision-making slows.

Poor communication: With so many different parties both in the field and in the office, it is often difficult to relay information from one party to the next. Miscommunication and poor project data account for 48% of all rework on U.S. construction job sites, costing the industry over $31 billion annually, according to FMI research.

Lack of data transparency: Manual data collection and data entry are still common on construction sites. On top of being laborious and error-prone, these manual processes leave decision-makers with very little real-time data, so decisions are often based on outdated information.

Skilled labor shortage: The construction workforce is aging faster than young workers are joining it, resulting in a shortage of labor, particularly for skilled trades that can require years of training and certifications. The shortage drives up labor costs across the industry, particularly in the residential sector, which traditionally sees higher attrition due to its more variable project demand.

A construction tech boom

Too many of the key processes involved in managing multimillion-dollar construction projects are carried out on Excel or even with pen and paper. The lack of tech sophistication on construction sites materially contributes to job delays, missed budgets and increased job site safety risk. Technology startups are emerging to help solve these problems.

Here are the main categories in which we’re seeing construction tech startups emerge.

1. Project conception

  • How it works today: During a project’s conception, asset owners and/or developers draft site proposals and may work with lenders to manage the project financing.
  • Key challenges: Processes for managing construction loans are cumbersome and time-intensive today, given the complexity of the loan draw process.
  • How technology can address challenges: Design software such as Spacemaker AI can help developers create site proposals, while construction loan financing software such as Built Technologies and Rabbet helps lenders and developers manage the draw process more efficiently.

2. Design and engineering

  • How it works today: Developers work with design, architect and engineering teams to turn ideas into blueprints.
  • Key challenges: Because the design and engineering teams are often siloed from the contractors, it’s hard for designers and engineers to know the real-time impact of their decisions on the ultimate cost or timing of the project. Lack of coordination with construction teams can lead to time-consuming changes.
  • How technology can address challenges: Of all the elements of the construction process, the design and engineering process itself is the most technologically sophisticated today, with relatively high adoption of software like Autodesk to help with design documentation, specification development, quality assurance and more. Autodesk is moving downstream to offer a suite of solutions that includes construction management, providing more connectivity between the teams.

    #artificial-intelligence, #banking, #column, #construction, #coronavirus, #covid-19, #document-management, #financial-services, #labor, #machine-learning, #project-management, #real-estate, #startups, #venture-capital

Sequoia-backed recycling robot maker AMP Robotics gets its largest purchase order

AMP Robotics, the manufacturer of robotic recycling systems, has received its largest purchase order from the publicly traded North American waste handling company, Waste Connections.

The order, for 24 machine learning-enabled robotic recycling systems, will be used on container, fiber and residue lines across numerous materials recovery facilities, the company said.

The AMP technology can be used to recover plastics, cardboard, paper, cans, cartons and many other containers and packaging types reclaimed for raw material processing.

The tech can tell the difference between high-density polyethylene and polyethylene terephthalate, low-density polyethylene, polypropylene, and polystyrene. The robots can also sort for color, clarity, opacity and shapes like lids, tubs, clamshells, and cups — the robots can even identify the brands on packaging.

So far, AMP’s robots have been deployed in North America, Asia, and Europe with recent installations in Spain, and across the US in California, Colorado, Florida, Minnesota, Michigan, New York, Texas, Virginia and Wisconsin.

In January, before the pandemic began, AMP Robotics worked with its investor, Sidewalk Labs, on a pilot program that would provide residents of a single apartment building representing 250 units in Toronto with detailed information about their recycling habits.

Working with the building and a waste hauler, Sidewalk Labs would transport the waste to a Canada Fibers material recovery facility, where trash would be sorted by both Canada Fibers employees and AMP Robotics. Once the waste was categorized, sorted and recorded, Sidewalk would communicate with residents of the building about how they were doing in their recycling efforts.

Sidewalk says that the tips will be communicated through email, an online portal, and signage throughout the building every two weeks over a three-month period.

For residents, it was an opportunity to get a better handle on what they can and can’t recycle, and Sidewalk Labs is betting that the information will help residents improve their habits. Folks who didn’t want their trash monitored and sorted could opt out of the program.

Recyclers like Waste Connections should welcome the commercialization of robots tackling industry problems. Their once-stable business has been turned on its head by trade wars and low unemployment. About two years ago, China decided it would no longer serve as the world’s garbage dump and put strict standards in place for the kinds of raw materials it would be willing to receive from other countries. The result has been higher costs at recycling facilities, which are now required to sort their garbage more effectively.

At the same time, low unemployment rates are putting the squeeze on labor availability at facilities where humans are basically required to hand-sort garbage into recyclable materials and trash.

AMP Robotics is backed by Sequoia Capital, BV, Closed Loop Partners, Congruent Ventures and Sidewalk Infrastructure Partners, a spin-out from Alphabet that invests in technologies and new infrastructure projects.

#alphabet, #amp-robotics, #amps, #articles, #asia, #california, #china, #colorado, #congruent-ventures, #energy-conservation, #europe, #florida, #machine-learning, #materials, #matter, #michigan, #minnesota, #new-york, #north-america, #plastics, #recycling, #robot, #robotics, #sequoia-capital, #sidewalk-infrastructure-partners, #spain, #tc, #texas, #toronto, #united-states, #virginia, #water-conservation, #wisconsin

Arrikto raises $10M for its MLOps platform

Arrikto, a startup that wants to speed up the machine learning development lifecycle by allowing engineers and data scientists to treat data like code, is coming out of stealth today and announcing a $10 million Series A round. The round was led by Unusual Ventures, with Unusual’s John Vrionis joining the board.

“Our technology at Arrikto helps companies overcome the complexities of implementing and managing machine learning applications,” Arrikto CEO and co-founder Constantinos Venetsanopoulos explained. “We make it super easy to set up end-to-end machine learning pipelines. More specifically, we make it easy to build, train and deploy ML models into production using Kubernetes, and to intelligently manage all the data around it.”

Like so many developer-centric platforms today, Arrikto is all about “shift left.” Currently, the team argues, machine learning teams and developer teams don’t speak the same language and use different tools to build models and to put them into production.

Image Credits: Arrikto

“Much like DevOps shifted deployment left, to developers in the software development life cycle, Arrikto shifts deployment left to data scientists in the machine learning life cycle,” Venetsanopoulos explained.

Arrikto also aims to reduce the technical barriers that still make implementing machine learning so difficult for most enterprises. Venetsanopoulos noted that just like Kubernetes showed businesses what a simple and scalable infrastructure could look like, Arrikto can show them what a simpler ML production pipeline can look like — and do so in a Kubernetes-native way.

Arrikto CEO Constantinos Venetsanopoulos. Image Credits: Arrikto

At the core of Arrikto is Kubeflow, the Google-incubated open-source machine learning toolkit for Kubernetes — and in many ways, you can think of Arrikto as offering an enterprise-ready version of Kubeflow. Among other projects, the team also built MiniKF to run Kubeflow on a laptop, and it uses Kale, which lets engineers build Kubeflow pipelines from their JupyterLab notebooks.

As Venetsanopoulos noted, Arrikto’s technology does three things: it simplifies deploying and managing Kubeflow, allows data scientists to manage it using the tools they already know, and it creates a portable environment for data science that enables data versioning and data sharing across teams and clouds.
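
For readers unfamiliar with Kubeflow pipelines, here is a minimal, hypothetical sketch using the open-source Kubeflow Pipelines SDK (kfp, v1-style API). The step names and container images are purely illustrative assumptions and have nothing to do with Arrikto’s own tooling:

    # Minimal Kubeflow Pipelines (kfp v1) sketch: two container steps chained
    # into a pipeline, then compiled to a spec a Kubeflow cluster can run.
    # All names and images here are illustrative assumptions.
    import kfp
    from kfp import dsl

    def preprocess_op():
        return dsl.ContainerOp(
            name="preprocess",
            image="python:3.9",
            command=["python", "-c", "print('preprocessing data')"],
        )

    def train_op():
        return dsl.ContainerOp(
            name="train",
            image="python:3.9",
            command=["python", "-c", "print('training model')"],
        )

    @dsl.pipeline(name="demo-pipeline", description="Preprocess, then train.")
    def demo_pipeline():
        # Express the dependency: training runs after preprocessing finishes.
        train_op().after(preprocess_op())

    if __name__ == "__main__":
        # Compile to a package any Kubeflow Pipelines cluster can execute.
        kfp.compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")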

While Arrikto has stayed off the radar since it launched out of Athens, Greece, in 2015, the founding team of Venetsanopoulos and CTO Vangelis Koukis has already managed to get a number of large enterprises to adopt its platform. Arrikto currently has more than 100 customers and, while the company isn’t allowed to name any of them just yet, Venetsanopoulos said they include one of the largest oil and gas companies, for example.

And while you may not think of Athens as a startup hub, Venetsanopoulos argues that this is changing and there is a lot of talent there (though the company is also using the funding to build out its sales and marketing team in Silicon Valley). “There’s top-notch talent from top-notch universities that’s still untapped. It’s like we have an unfair advantage,” he said.

“We see a strong market opportunity as enterprises seek to leverage cloud-native solutions to unlock the benefits of machine learning,” Unusual’s Vrionis said. “Arrikto has taken an innovative and holistic approach to MLOps across the entire data, model and code lifecycle. Data scientists will be empowered to accelerate time to market through increased automation and collaboration without requiring engineering teams.”

Image Credits: Arrikto

#arrikto, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #devops, #europe, #google, #john-vrionis, #kubeflow, #kubernetes, #machine-learning, #ml, #mlops, #recent-funding, #software-development, #startups, #unusual-ventures

Computer vision startup Chooch.ai scores $20M Series A

Chooch.ai, a startup that hopes to bring computer vision more broadly to companies to help them identify and tag elements at high speed, announced a $20 million Series A today.

Vickers Venture Partners led the round with participation from 212, Streamlined Ventures, Alumni Ventures Group, Waterman Ventures and several other unnamed investors. Today’s investment brings the total raised to $25.8 million, according to the company.

“Basically we set out to copy human visual intelligence in machines. That’s really what this whole journey is about,” CEO and co-founder Emrah Gultekin explained. As the company describes it, “Chooch AI can rapidly ingest and process visual data from any spectrum, generating AI models in hours that can detect objects, actions, processes, coordinates, states, and more.”

Chooch is trying to differentiate itself from other AI startups by taking a broader approach that could work in any setting, rather than concentrating on specific vertical applications. Using the pandemic as an example, Gultekin says you could use his company’s software to identify everyone who is not wearing a mask in a building, or everyone who is not wearing a hard hat at a construction site.

With 22 employees spread across the U.S., India and Turkey, Chooch is building a diverse company just by virtue of its geography, but as it doubles the workforce in the coming year, it wants to continue to build on that.

“We’re immigrants. We’ve been through a lot of different things, and we recognize some of the issues and are very sensitive to them. One of our senior members is a person of color and we are very cognizant of the fact that we need to develop that part of our company,” he said. At a recent company meeting, he said, they were discussing how to build diversity into the policies and values of the company as they move forward.

The company currently has 18 enterprise clients and hopes to use the money to add engineers and data scientists, and to begin building out a worldwide sales team to continue developing the product and expanding its go-to-market effort.

Gultekin says that the company’s unusual name comes from a mix of the words choose and search. He says that it is also an old Italian insult. “It means dummy or idiot, which is what artificial intelligence is today. It’s a poor reflection of humanity or human intelligence in humans,” he said. His startup aims to change that.

#artificial-intelligence, #computer-vision, #data-labeling-tools, #enterprise, #funding, #machine-learning, #recent-funding, #startups, #tagging, #tc, #vickers-venture-partners

Amazon begins shifting Alexa’s cloud AI to its own silicon

Amazon engineers discuss the migration of 80 percent of Alexa’s workload to Inferentia ASICs in this three-minute clip.

On Thursday, an Amazon AWS blog post announced that the company has moved most of the cloud processing for its Alexa personal assistant off of Nvidia GPUs and onto its own Inferentia Application Specific Integrated Circuit (ASIC). Amazon dev Sebastien Stormacq describes Inferentia’s hardware design as follows:

AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.
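
For a sense of what targeting Inferentia looks like in practice, here is a hedged sketch using AWS’s torch-neuron package, which compiles a PyTorch model for NeuronCores. The toy model is invented, and while the API shown follows AWS’s Neuron documentation, treat the details as assumptions rather than a verified recipe:

    # Hedged sketch: compiling a PyTorch model for Inferentia NeuronCores
    # with torch-neuron. The model is a toy stand-in; API details follow
    # AWS Neuron docs and should be treated as assumptions.
    import torch
    import torch_neuron  # noqa: F401  (registers the torch.neuron namespace)

    model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
    model.eval()
    example = torch.rand(1, 128)

    # Trace/compile for NeuronCores; unsupported ops fall back to CPU.
    neuron_model = torch.neuron.trace(model, example_inputs=[example])
    neuron_model.save("model_neuron.pt")  # deployable on Inferentia instances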

When an Amazon customer—usually someone who owns an Echo or Echo Dot—makes use of the Alexa personal assistant, very little of the processing is done on the device itself. The workload for a typical Alexa request looks something like this (sketched in code after the list):

  1. A human speaks to an Amazon Echo, saying: “Alexa, what’s the special ingredient in Earl Grey tea?”
  2. The Echo detects the wake word—Alexa—using its own on-board processing
  3. The Echo streams the request to Amazon data centers
  4. Within the Amazon data center, the voice stream is converted to phonemes (Inference AI workload)
  5. Still in the data center, phonemes are converted to words (Inference AI workload)
  6. Words are assembled into phrases (Inference AI workload)
  7. Phrases are distilled into intent (Inference AI workload)
  8. Intent is routed to an appropriate fulfillment service, which returns a response as a JSON document
  9. JSON document is parsed, including text for Alexa’s reply
  10. Text form of Alexa’s reply is converted into natural-sounding speech (Inference AI workload)
  11. Natural speech audio is streamed back to the Echo device for playback—”It’s bergamot orange oil.”

As you can see, almost all of the actual work done in fulfilling an Alexa request happens in the cloud—not in an Echo or Echo Dot device itself. And the vast majority of that cloud work is performed not by traditional if-then logic but by inference—which is the answer-providing side of neural network processing.

#ai, #amazon, #aws, #gpu, #inference, #inferentia, #machine-learning, #nvidia, #tech, #uncategorized

Databricks launches SQL Analytics

AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build, one that brings its concept of a ‘lake house’ to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with the flexibility of a data lake, where you store all of your data in its raw form. Data in a data lake, a concept that Databricks’ co-founder and CEO Ali Ghodsi has long championed, is typically only transformed when it gets used. That makes data lakes cheaper, but also a bit harder to handle for users.

Image Credits: Databricks

“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But ‘lake house’ caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That has always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer, which enables it to complete queries quickly. In addition, the service provides auto-scaling endpoints to keep query latency consistent, even under high loads.
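
The “one copy of the data” idea is easy to see in generic Spark plus Delta Lake code. The sketch below is plain open-source Spark/Delta usage with hypothetical table and column names, not the SQL Analytics service itself:

    # Sketch of the lakehouse idea with open-source Spark + Delta Lake.
    # Assumes pyspark and delta-spark are installed; names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("lakehouse-demo")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Raw events land in the lake once...
    spark.range(100).selectExpr("id", "id % 7 AS store_id") \
        .write.format("delta").mode("overwrite").save("/tmp/events")

    # ...and the same copy serves a BI-style SQL query directly.
    spark.read.format("delta").load("/tmp/events") \
        .createOrReplaceTempView("events")
    spark.sql("SELECT store_id, COUNT(*) AS n "
              "FROM events GROUP BY store_id ORDER BY n DESC").show()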

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and Thoughtspot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.

Image Credits: Databricks

“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, Chief Product Officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It’s essentially a stripped-down version of the standard code-heavy experience that Databricks users are familiar with. Unsurprisingly, SQL Analytics provides a more graphical experience, one that focuses on visualizations rather than Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

#ali-ghodsi, #analytics, #apache-spark, #artificial-intelligence, #business-intelligence, #cloud, #data-analysis, #data-lake, #data-management, #data-processing, #data-science, #data-warehouse, #databricks, #democrats, #enterprise, #fishtown-analytics, #fivetran, #information, #looker, #machine-learning, #python, #sql, #tableau, #talend

Qualcomm Ventures invests in four 5G startups

Qualcomm Ventures, Qualcomm’s investment arm, today announced four new strategic investments in 5G-related startups. These companies are private mobile network specialist Celona, mobile network automation platform Cellwize, the edge computing platform Azion and Pensando, another edge computing platform that combines its software stack with custom hardware.

The overall goal here is obviously to help jumpstart 5G use cases in the enterprise and — by extension — for consumers by investing in a wide range of companies that can build the necessary infrastructure to enable these.

“We invest globally in the wireless mobile ecosystem, with a goal of expanding our base of customers and partners — and one of the areas we’re particularly excited about is the area of 5G,” Quinn Li, a Senior VP at Qualcomm and the global head of Qualcomm Ventures, told me. “Within 5G, there are three buckets of areas we look to invest in: one is in use cases, second is in network transformation, third is applying 5G technology in enterprises.”

So far, Qualcomm Ventures has invested over $170 million in the 5G ecosystem, including this new batch. The firm did not disclose how much it invested in these four new startups, though.

Overall, this new set of companies touches upon the core areas Qualcomm Ventures is looking at, Li explained. Celona, for example, aims to make it as easy for enterprises to deploy private cellular infrastructure as it is to deploy Wi-Fi today.

“They built this platform with a cloud-based controller that leverages the available spectrum — CBRS — to be able to take the cellular technology, whether it’s LTE or 5G, into enterprises,” Li explained. “And these enterprise use cases could be in manufacturing settings, could be in schools, could be in hospitals, or could be on campuses at universities.”

Cellwize, meanwhile, helps automate wireless networks to make them more flexible and manageable, in part by using machine learning to tune the network based on the data it collects. One of the main investment theses for this fund, Li told me, is that wireless technology will become increasingly software-defined and Cellwize fits right into this trend. The potential customer here isn’t necessarily an individual enterprise, though, but wireless and mobile operators.

Edge computing, where Azion and Pensando play, is obviously also a hot category right now, and one where 5G has some clear advantages, so it’s perhaps no surprise that Qualcomm Ventures is putting a bit of a focus on the space with these two investments.

“As we move forward, [you will] see a lot of the compute moving from the cloud into the edge of the network, which allows for processing happening at the edge of the network, which allows for low latency applications to run much faster and much more efficiently,” Li said.

In total, Qualcomm Ventures has deployed $1.5 billion and made 360 investments since its launch in 2000. Some of the more successful companies the firm has invested in include unicorns like Zoom, Cloudflare, Xiaomi, Cruise Automation and Fitbit.

#5g, #computing, #enterprise, #internet-of-things, #machine-learning, #mobile-technology, #qualcomm, #qualcomm-ventures, #quinn-li, #recent-funding, #startups, #telecommunications, #wireless, #wireless-networks, #wireless-technology

Provizio closes $6.2M seed round for its car safety platform using sensors and AI

Provizio, a combination hardware and software startup with technology to improve car safety, has closed a seed investment round of $6.2 million. Investors include Bobby Hambrick (the founder of Autonomous Stuff); the founders of Movidius; the European Innovation Council (EIC); and ACT Venture Capital.

The startup has a ‘five-dimensional’ sensory platform that – it says – perceives, predicts and prevents car accidents in real time and beyond the line of sight. Its ‘Accident Prevention Technology Platform’ combines proprietary vision sensors, machine learning and radar with ultra-long-range and foresight capabilities to prevent collisions at high speed and in all weather conditions, says the company. The Provizio team is made up of experts in robotics, AI, and vision and radar sensor development.

Barry Lunn, CEO of Provizio, said: “One point three five [million] road deaths to zero drives everything we do at Provizio. We have put together an incredible team that is growing daily. AI is the future of automotive accident prevention and Provizio 5D radars with AI on-the-edge are the first step towards that goal.”

Also involved in Provizio are Dr. Scott Thayer and Prof. Jeff Mishler, formerly of Carnegie Mellon’s robotics program, famous for developing early autonomous technologies for Google/Waymo, Argo, Aurora and Uber.

#articles, #artificial-intelligence, #aurora, #automotive, #car-accidents, #car-safety, #carnegie-mellon, #ceo, #companies, #emerging-technologies, #europe, #european-innovation-council, #founder, #google, #machine-learning, #movidius, #robotics, #science-and-technology, #self-driving-cars, #tc, #uber, #waymo

Mobile testing platform Kobiton raises $14M, acquires competitor Mobile Labs

Atlanta-based Kobiton, a mobile testing platform that allows developers and QA teams to test their apps on real devices, both on their own desks and through the company’s cloud-based service, today announced that it has acquired Mobile Labs, another Atlanta-based mobile testing service.

To finance the acquisition of its well-funded competitor, Kobiton raised a $14 million extension to its $5.2 million Series A from its existing investor BIP Capital and new investor Fulcrum Equity Partners.

As Kobiton CEO Kevin Lee told me, we shouldn’t take that as the acquisition price, but it’s probably a fair guess that the real price isn’t too far off. The companies declined to disclose the exact price, though. Mobile Labs, which was founded in 2011, had raised about $15 million before the acquisition, according to Crunchbase. The last time it raised outside funding was in 2014. Kobiton and Mobile Labs do not share any common investors.

Kobiton CEO Kevin Lee

It’s interesting that Kobiton, which launched in 2017 and which may seem like a smaller player at first glance, was able to acquire Mobile Labs. Lee argues that one of the reasons why Mobile Labs decided to sell is that while his company has long focused on using machine learning to help developers build the tests for their apps — and the open-source Appium testing framework — Mobile Labs had fallen behind in this area.

“They were a little slow to invest in [AI] and I think they realized — and the rest of the market, I think, will realize it — if you don’t invest heavily and early, you kind of get behind the eight ball,” Lee told me.

He also noted that there are a lot of obvious synergies between the two companies. Mobile Labs has a lot of clients in the gaming and financial services space, for example. A lot of those clients are relatively new to mobile, while Kobiton’s existing customer base is often mobile-first.

“They’ve been around for 10 years and [have] a lot of partners, a lot of stuff outside the US,” Lee noted. “They have mainly focused on what I would call large established enterprises in regulated industries or industries that are really concerned about IP protection — so behind the firewalls — where they really succeeded well.”

Those Mobile Labs customers, Lee said, were also looking for AI/ML-based testing solutions, and the acquisition will now allow the two companies to layer Kobiton’s technology on top of the Mobile Labs solution. There will be an upgrade path for these customers, and they’ll be able to move at their own pace. There’s no plan to sunset Mobile Labs’ existing services for the time being, though some of Mobile Labs’ individual brands may change names.

With this acquisition, Kobiton will more than double the number of its US-based employees, though that’s in part because a good portion of the company’s team is based in Vietnam.

#artificial-intelligence, #atlanta, #automation, #bip-capital, #ceo, #developer, #finance, #fulcrum-equity-partners, #kobiton, #machine-learning, #player, #sauce-labs, #software-engineering, #tc, #united-states, #vietnam

Intel enters the laptop discrete GPU market with Xe Max

This is Intel’s DG1 chipset, the heart of the Xe Max GPU. (credit: Intel)

This weekend, Intel released preliminary information on its newest laptop part—the Xe Max discrete GPU, which functions alongside and in tandem with Tiger Lake’s integrated Iris Xe GPU.

We first heard about Xe Max at Acer’s Next 2020 launch event, where it was listed as a part of the upcoming Swift 3x laptop—which will only be available in China. The new GPU will also be available in the Asus VivoBook Flip TP470 and the Dell Inspiron 15 7000 2-in-1.

Intel Xe Max vs. Nvidia MX350

During an extended product briefing, Intel stressed to us that the Xe Max beats Nvidia’s entry-level MX350 chipset in just about every conceivable metric. In another year, this would have been exciting—but the Xe Max is only slated to appear in systems that feature Tiger Lake processors, whose Iris Xe integrated GPUs already handily outperform the Nvidia MX350 in both Intel’s tests and our own.

#ai, #discrete-gpu, #gpu, #intel, #iris-xe, #machine-learning, #tech, #tiger-lake, #xe-max

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of NVIDIA’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

Image Credits: AWS

For now, there is only one size available, the p4d.24xlarge instance in AWS slang, and its eight A100 GPUs are connected over NVIDIA’s NVLink communication interface, with support for the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20 per hour for one-year reserved instances and to $11.57 for three-year reserved ones).
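
Quick back-of-the-envelope arithmetic puts those list prices in per-GPU terms. The one-year reserved figure below is an assumption based on the “less than $20 per hour” quoted above:

    # Effective price per A100 GPU-hour at each commitment level.
    # On-demand and 3-year figures are from the article; the 1-year figure
    # is an assumed ~$20/hour ("less than $20/hour" per AWS).
    PRICES_PER_HOUR = {"on-demand": 32.77, "1yr-reserved": 20.00,
                       "3yr-reserved": 11.57}
    GPUS_PER_INSTANCE = 8

    for plan, hourly in PRICES_PER_HOUR.items():
        print(f"{plan}: ${hourly / GPUS_PER_INSTANCE:.2f} per GPU-hour")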

Image Credits: AWS

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads at what is essentially supercomputer scale. Given the price, you’re not likely to spin up one of these clusters to train a model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including the Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

#amazon-web-services, #artificial-intelligence, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #ge-healthcare, #gpgpu, #gpu, #intel, #machine-learning, #nvidia, #toyota-research-institute

Alphabet’s X details Project Amber, a quest for a single biomarker for depression that fell short of its goal

Alphabet’s X (the Google owner’s so-called ‘Moonshot Factory’) published a new blog post today about Project Amber, a project it has been working on over the past three years, the results of which it’s now making available as open source for the rest of the mental health research community to learn from and, hopefully, build upon. The X project sought to identify a specific biomarker for depression. It did not accomplish that (the researchers now believe that a single biomarker for depression and anxiety likely doesn’t exist), but X is still hoping that its work on using electroencephalography (EEG) combined with machine learning to try to find one will be of benefit to others.

X’s researchers were hoping that depression, like other ailments and disorders, might have a clear biomarker that would help healthcare professionals more easily and objectively diagnose it, which would then hopefully also make it more easily and consistently treatable. With EEG, there was some precedent, via studies done in labs using games designed specifically for the purpose, in which people with depression seemed to consistently demonstrate a lower measure of EEG activity in response to effectively ‘winning’ the games.

These studies seemed to offer a path to a potential biomarker, but in order to make them actually useful in real-world diagnostic settings, like a clinic or a public health lab, the team at X set about improving the process of EEG collection and interpretation to make it more accessible, both to users and to technicians.

What is perhaps most notable about this pursuit, and the post today that Alphabet released detailing its efforts, is that it’s essentially a story of a years-long investigation that didn’t work out – not the side of the moonshot story you typically hear from big tech companies.

In fact, this is perhaps one of the best examples yet of what critics of many of the approaches of large tech companies fail to understand: some problems are not solvable by solutions with analogs in the world of software and engineering. The team at X sums up its learnings from the years-long research project in three main bullet points about its user research, each of which touches in some way on the insufficiency of a purely objective biomarker detection method (even if it had worked), particularly when it comes to mental illness. From the researchers:

  1. Mental health measurement remains an unsolved problem. Despite the availability of many mental health surveys and scales, they are not widely used, especially in primary care and counseling settings. Reasons range from burden (“I don’t have time for this”) to skepticism (“Using a scale is no better than using my clinical judgement”) to lack of trust (“I don’t think my client is filling this in truthfully” and ”I don’t want to reveal this much to my counsellor”). These findings were in line with the literature on measurement-based mental health care. Any new measurement tool would have to overcome these barriers by creating clear value for both the person with lived experience and the clinician.
  2. There is value in combining subjective and objective data. People with lived experience and clinicians both welcomed the introduction of objective metrics, but not as a replacement for subjective assessment and asking people about their experience and feelings. The combination of subjective and objective metrics was seen as especially powerful. Objective metrics might validate the subjective experience; or if the two diverge, that in itself is an interesting insight which provides the starting point for a conversation.
  3. There are multiple use cases for new measurement technology. Our initial hypothesis was that clinicians might use a “brainwave test” as a diagnostic aid. However, this concept got a lukewarm reception. Mental health experts such as psychiatrists and clinical psychologists felt confident in their ability to diagnose via clinical interview. Primary care physicians thought an EEG test could be useful, but only if it was conducted by a medical assistant before their consultation with the patient, similar to a blood pressure test. Counsellors and social workers don’t do diagnosis in their practice, so it was irrelevant to them. Some people with lived experience did not like the idea of being labelled as depressed by a machine. By contrast, there was a notably strong interest in using technology as a tool for ongoing monitoring — capturing changes in mental health state over time — to learn what happens between visits. Many clinicians asked if they could send the EEG system home so their patients and clients could repeat the test on their own. They were also very interested in EEG’s potential predictive qualities, e.g. predicting who is likely to get more depressed in future. More research is needed to determine how a tool such as EEG would be best deployed in clinical and counseling settings, including how it could be combined with other measurement technologies such as digital phenotyping.

X is making Amber’s hardware and software open source on GitHub, and it is also issuing a ‘patent pledge’ ensuring that X will not bring any legal action against users of the EEG patents related to Amber through use of the open-sourced material. It’s unclear (though unlikely) that this would’ve been the result had Amber succeeded at finding a single biomarker for depression, but perhaps in the hands of the broader community, the work the team did on making EEG more accessible beyond specialized testing facilities will lead to other interesting discoveries.

#biotech, #counseling, #depression, #github, #machine-learning, #medicine, #mental-health, #mental-illness, #science, #tc

Cough-scrutinizing AI shows major promise as an early warning system for COVID-19

Asymptomatic spread of COVID-19 is a huge contributor to the pandemic, but of course if there are no symptoms, how can anyone tell they should isolate or get a test? MIT research has found that hidden in the sound of coughs is a pattern that subtly, but reliably, marks a person as likely to be in the early stages of infection. It could make for a much-needed early warning system for the virus.

The sound of one’s cough can be very revealing, as doctors have known for many years. AI models have been built to detect conditions like pneumonia, asthma, and even neuromuscular diseases, all of which alter how a person coughs in different ways.

Before the pandemic, researcher Brian Subirana had shown that coughs may even help predict Alzheimer’s — mirroring results from IBM research published just a week ago. More recently, Subirana thought if the AI was capable of telling so much from so little, perhaps COVID-19 might be something it could suss out as well. In fact, he isn’t the first to think so.

He and his team set up a site where people could contribute coughs, and ended up assembling “the largest research cough dataset that we know of.” Thousands of samples were used to train up the AI model, which they document in an open access IEEE journal.

The model seems to have detected subtle patterns in vocal strength, sentiment, lung and respiratory performance, and muscular degradation, to the point where it was able to identify 100 percent of coughs by asymptomatic COVID-19 carriers and 98.5 percent of symptomatic ones, with a specificity of 83 and 94 percent respectively, meaning it doesn’t have large numbers of false positives or negatives.

“We think this shows that the way you produce sound changes when you have COVID, even if you’re asymptomatic,” said Subirana of the surprising finding. However, he cautioned that although the system was good at detecting non-healthy coughs, it should not be used as a diagnostic tool for people who have symptoms but are unsure of the underlying cause.

I asked Subirana for a bit more clarity on this point.

“The tool is detecting features that allow it to discriminate the subjects that have COVID from the ones that don’t,” he wrote in an email. “Previous research has shown you can pick up other conditions too. One could design a system that would discriminate between many conditions but our focus was on picking out COVID from the rest.”

For the statistics-minded out there, the incredibly high success rate may raise some red flags. Machine learning models are great at a lot of things, but 100 percent isn’t a number you see a lot, and when you do you start thinking of other ways it might have been produced by accident. No doubt the findings will need to be proven on other datasets and verified by other researchers, but it’s also possible that there’s simply a reliable tell in COVID-induced coughs that a computer listening system can hear quite easily.
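
For readers who want to pin down what those numbers mean, sensitivity and specificity fall straight out of a confusion matrix. The sketch below uses invented counts purely to illustrate the definitions, not the MIT study’s actual data:

    # Sensitivity and specificity from a confusion matrix.
    # Counts below are invented for illustration only.
    def sensitivity(tp, fn):
        return tp / (tp + fn)   # share of true COVID coughs flagged

    def specificity(tn, fp):
        return tn / (tn + fp)   # share of healthy coughs correctly passed

    # e.g. 200 asymptomatic carriers, all flagged (sensitivity = 1.0),
    # and 1,000 healthy subjects with 170 false alarms (specificity = 0.83):
    print(sensitivity(tp=200, fn=0))      # 1.0
    print(specificity(tn=830, fp=170))    # 0.83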

The team is collaborating with several hospitals to build a more diverse dataset, but is also working with a private company to put together an app to distribute the tool for wider use, if it can get FDA approval.

#artificial-intelligence, #coronavirus, #covid-19, #health, #machine-learning, #mit, #science, #tc

CoreCare raises $3 million for managing billing and payments from public health benefit providers

CoreCare, a provider of revenue management services for healthcare companies dealing with public health benefit providers, has raised $3 million in a seed financing round.

The company uses machine learning to automate large swaths of billing and revenue cycle management in order to reduce the burden on hospitals, according to chief executive Dennis Antonelos.

Already, companies like Creative Solutions in Healthcare, a nursing facility operator in Texas that runs nearly 80 locations, have signed up for the service.

Antonelos started the company in January, had the first product up by March and was accepted to Y Combinator in April. It now boasts over a dozen customers in Texas.

With the new $3 million in hand from investors including Primetime Partners, Goat Capital, Funders Club and Liquid2Ventures, Antonelos said the company would look to expand its sales and marketing and product capabilities.

CoreCare automates the processing of billing, paperwork and clinical notes by linking electronic health records with Medicare and Medicaid information services and payers.

“We’re going through the organization and eliminating administrative waste so the organization can invest newly found resources into patient care,” Antonelos said.

The company uses a standard software-as-a-service payment model and charges somewhere between $300 and $500 per facility, per month, according to Antonelos.

“These initial results are outstanding,” said Gary Blake, president and co-founder of Creative Solutions in Healthcare, and one of CoreCare’s early customers. “In only a matter of months working with CoreCare’s CoreAccess software, we’ve seen a notable impact on our financial position. It has truly exceeded our expectations. CoreCare has changed the way we work with Managed Care, from top to bottom. We have been able to streamline our entire billing process, reduce admin costs, shorten the number of accounts receivable (AR) days and free up cash for growth. Every healthcare provider that works with managed care should work with CoreCare.”

#articles, #goat-capital, #health, #health-insurance, #healthcare, #machine-learning, #medicare, #tc, #texas, #y-combinator

Lightspeed Venture Partners backs Theta Lake’s video conferencing security tech with $12.7 million

Theta Lake, a provider of compliance and security tools for conferencing software like Cisco Webex, Microsoft Teams, RingCentral, Zoom and others, said it has raised $12.7 million in a new round of funding.

Lightspeed Venture Partners led the round with commitments from Cisco Investments, angel investors from the collaboration and security space, and previous investors Neotribe Ventures, Firebolt Ventures and WestWave Capital, the company said.

The company’s financing comes as the COVID-19 pandemic has created a surge of demand for remote work conferencing technologies — and services that can ensure the security of those communications.

Citing a Research and Markets report, the company estimates that the market for such services will grow from $8.9 billion in 2019 to $23 billion by the end of this year.

Theta Lake said that the funding would be used to increase its sales and marketing capabilities and for research and development on new product features, according to a statement. 

The company’s tech already uses machine learning to detect security risks in video, visual, voice, chat and document content shared over video and collaboration tools.

As a result of its investment, Arif Janmohamed, a partner at Lightspeed Venture Partners, will join the Theta Lake Board of Directors, the company said. 

“The need for security and compliance solutions that fully cover modern collaboration tools should be obvious to everyone,” said Devin Redmond, Theta Lake’s co-founder and chief executive, in a statement. “That need pre-existed the pandemic, but now is more pressing than ever. The shift from physical work sites and employer-owned networks with tightly managed devices and applications, to a distributed workplace that lives inside your collaboration tools, means organizations need new security and compliance coverage that lives inside that new workplace.”

#artificial-intelligence, #cisco-investments, #cisco-systems, #collaboration-tools, #companies, #computing, #lightspeed, #lightspeed-venture-partners, #machine-learning, #microsoft, #neotribe-ventures, #partner, #ringcentral, #security-tools, #tc, #telecommunications, #web-conferencing

Deci raises $9.1M to optimize AI models with AI

Deci, a Tel Aviv-based startup that is building a new platform that uses AI to optimize AI models and get them ready for production, today announced that it has raised a $9.1 million seed round led by Emerge and Square Peg.

The general idea here is to make it easier and faster for businesses to take AI workloads into production — and to optimize those production models for improved accuracy and performance. To enable this, the company built an end-to-end solution that allows engineers to bring in their pre-trained models and then have Deci manage, benchmark and optimize them before they package them up for deployment. Using its runtime container or Edge SDK, Deci users can also then serve those models on virtually any modern platform and cloud.
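
As a rough illustration of the “benchmark” step, here is a generic latency measurement of a model in plain PyTorch. It shows the kind of measurement a platform like Deci automates; none of it is Deci’s actual SDK, and the model is a stand-in:

    # Generic inference-latency benchmark in plain PyTorch. This illustrates
    # the kind of measurement an optimization platform automates; it is not
    # Deci's SDK. Random weights are fine for a pure latency test.
    import time
    import torch
    import torchvision.models as models

    model = models.resnet18().eval()
    x = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        for _ in range(10):                 # warm-up runs
            model(x)
        runs = 50
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        latency_ms = (time.perf_counter() - start) / runs * 1000

    print(f"mean latency: {latency_ms:.1f} ms per inference")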

Deci’s insights screen combines all indicators of a deep learning model’s expected behavior in production, resulting in the Deci Score – a single metric summarizing the overall performance of the model.

The company was co-founded by deep learning scientist Yonatan Geifman, technology entrepreneur Jonathan Elial and professor Ran El-Yaniv, a computer scientist and machine learning expert at the Technion – Israel Institute of Technology.

“Deci is leading a paradigm shift in AI to empower data scientists and deep learning engineers with the tools needed to create and deploy effective and powerful solutions,” says Yonatan Geifman, CEO and co-founder of Deci. “The rapidly increasing complexity and diversity of neural network models make it hard for companies to achieve top performance. We realized that the optimal strategy is to harness the AI itself to tackle this challenge. Using AI, Deci’s goal is to help every AI practitioner to solve the world’s most complex problems.”

Deci’s lab screen enables users to manage their deep learning models’ lifecycles, optimize inference performance, and prepare models for deployment. Image Credits: Deci

The company promises is that, on the same hardware and with comparable accuracy, Deci-optimized models will run between five and ten times f