Juniper Networks acquires Boston-area AI SD-WAN startup 128 Technology for $450M

Today Juniper Networks announced it was acquiring smart wide area networking startup 128 Technology for $450 million.

128 Technology is the second AI-fueled networking company Juniper has acquired in the last year and a half, following its $405 million purchase of Mist Systems in March 2019. With 128 Technology, the company gets more AI SD-WAN technology. SD-WAN is short for software-defined wide area network: a network that covers a wide geographical area, such as satellite offices, rather than a single defined space.

Instead of relying simply on software-defined networking with static policies, which might not fit every situation perfectly, the newer systems use artificial intelligence to automate session and policy details as needed.

Writing in a company blog post announcing the deal, executive vice president and chief product officer Manoj Leelanivas sees 128 Technology adding great flexibility to the portfolio as it tries to transition from legacy networking approaches to modern ones driven by AI, especially in conjunction with the Mist purchase.

“Combining 128 Technology’s groundbreaking software with Juniper SD-WAN, WAN Assurance and Marvis Virtual Network Assistant (driven by Mist AI) gives customers the clearest and quickest path to full AI-driven WAN operations — from initial configuration to ongoing AIOps, including customizable service levels (down to the individual user), simple policy enforcement, proactive anomaly detection, fault isolation with recommended corrective actions, self-driving network operations and AI-driven support,” Leelanivas wrote in the blog post.

128 Technology was founded in 2014 and raised over $97 million, according to Crunchbase data. Its most recent round was a $30 million Series D investment in September 2019 led by G20 Ventures and The Perkins Fund.

In addition to the $450 million, Juniper has asked 128 Technology to issue retention stock bonuses to encourage the startup’s employees to stay on during the transition to the new owners. Juniper has promised to honor this stock under the terms of the deal. The deal is expected to close in Juniper’s fiscal fourth quarter subject to normal regulatory review.

#128-technology, #artificial-intelligence, #boston-startups, #enterprise, #exit, #fundings-exits, #juniper-networks, #ma, #mergers-and-acquisitions, #networking, #startups

Pimloc gets $1.8M for its AI-based visual search and redaction tool

UK-based Pimloc has closed a £1.4 million (~$1.8M) seed funding round led by Amadeus Capital Partners. Existing investor Speedinvest and other unnamed shareholders also participated in the round.

The 2016-founded computer vision startup launched an AI-powered photo classifier service called Pholio in 2017 — pitching the service as a way for smartphone users to reclaim agency over their digital memories without having to hand their data over to cloud giants like Google.

It has since pivoted to position Pholio as a “specialist search and discovery platform” for large image and video collections and live streams (such as those owned by art galleries or broadcasters) — and also launched a second tool powered by its deep learning platform. This product, Secure Redact, offers privacy-focused content moderation tools — enabling its users to find and redact personal data in visual content.

An example use-case it gives is for law enforcement to anonymize bodycam footage so it can be repurposed for training videos or prepared for submitting as evidence.

“Pimloc has been working with diverse image and video content for several years, supporting businesses with a host of classification, moderation and data protection challenges (image libraries, art galleries, broadcasters and CCTV providers),” CEO Simon Randall tells TechCrunch.

“Through our work on the visual privacy side we identified a critical gap in the market for services that allow businesses and governments to manage visual data protection at scale on security footage. Pimloc has worked in this area for a couple of years building capability and product, as a result Pimloc has now focussed the business solely around this mission.”

Secure Redact has two components: A first (automated) step that detects personal data (e.g. faces, heads, bodies) within video content. On top of that is what Randall calls a layer of “intelligent tools” — letting users quickly review and edit results.

“All detections and tracks are auditable and editable by users prior to accepting and redacting,” he explains, adding: “Personal data extends wider than just faces into other objects and scene content including ID cards, tattoos, phone screens (body worn cameras have a habit of picking up messages on the wearer’s phone screen as they are typing, or sensitive notes on their laptop or notebook).”
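
A minimal sketch of what that automated first step looks like in practice — detect faces frame by frame, then blur each detection before writing the video back out. It uses OpenCV’s bundled Haar cascade, and the file names are invented for illustration; Pimloc’s production system relies on proprietary deep-learning detectors and also tracks heads, bodies and other identifiers.

```python
# Illustrative only: a crude face-blur pass over a video with OpenCV.
# Secure Redact's actual detectors are proprietary; file names are made up.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

reader = cv2.VideoCapture("bodycam.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
w = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("bodycam_redacted.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Blur every detected face region in place
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 5):
        frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(
            frame[y:y + fh, x:x + fw], (51, 51), 0)
    writer.write(frame)

reader.release()
writer.release()
```

The interesting part of Pimloc’s product is everything this sketch lacks: detections that hold up on grainy ‘in the wild’ footage, tracking across frames, and the human review layer Randall describes for correcting results before a redaction is accepted.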

One specific user of the redaction tool he mentions is the University of Bristol. There a research group led by Dr Dima Damen, an associate professor in computer vision, is participating in an international consortium of 12 universities that is aiming to amass the largest dataset on egocentric vision — and needs to be able to anonymise the video dataset before making it available for academic/open-source use.

On the legal side, Randall says Pimloc offers a range of data processing models — thereby catering to differences in how/where data can be processed. “Some customers are happy for Pimloc to act as data processor and use the Secure Redact SaaS solution — they manage their account, they upload footage, and can review/edit/update detections prior to redaction and usage. Some customers run the Secure Redact system on their servers where they are both data controller and processor,” he notes.

“We have over 100 users signed up for the SaaS service covering mobility, entertainment, insurance, health and security. We are also in the process of setting up a host of on-premise implementations,” he adds.

Asked which sectors Pimloc sees driving the most growth for its platform in the coming years, he lists the following: smart cities/mobility platforms (with safety/analytics demand coming from the likes of councils, retailers, AVs); the insurance industry, which he notes is “capturing and using an increasing amount of visual data for claims and risk monitoring” and thus “looking at responsible systems for data management and processing”; video/telehealth, with traditional consultations moving into video and driving demand for visual diagnosis; and law enforcement, where security goals need to be supported by “visual privacy designed in by default” (at least where forces are subject to European data protection law).

On the competitive front, he notes that startups are increasingly focusing on specialist application areas for AI — arguing they have an opportunity to build compelling end-to-end propositions which are harder for larger tech companies to focus on.

For Pimloc specifically, he argues it has an edge in its particular security-focused niche — given “deep expertise” and specific domain experience.

“There are low barriers to entry to create a low quality product but very high technical barriers to create a service that is good enough to use at scale with real ‘in the wild’ footage,” he argues, adding: “The generalist services of the larger tech players do not match up with the domain-specific provisions of Pimloc/Secure Redact. Video security footage is a difficult domain for AI; systems trained on lifestyle/celebrity or other general datasets perform poorly on real security footage.”

Commenting on the seed funding in a statement, Alex van Someren, MD of Amadeus Capital Partners, said: “There is a critical need for privacy by design and large-scale solutions, as video grows as a data source for mobility, insurance, commerce and smart cities, while our reliance on video for remote working increases. We are very excited about the potential of Pimloc’s products to meet this challenge.”

“Consumers around the world are rightfully concerned with how enterprises are handling the growing volume of visual data being captured 24/7. We believe Pimloc has developed an industry leading approach to visual security and privacy that will allow businesses and governments to manage the usage of visual data whilst protecting consumers. We are excited to support their vision as they expand into the wider Enterprise and SaaS markets,” added Rick Hao, principal at Speedinvest, in another supporting statement.

#ai, #amadeus-capital-partners, #artificial-intelligence, #computer-vision, #pimloc, #privacy, #recent-funding, #startups, #visual-search

Google Cloud launches Lending DocAI, its first dedicated mortgage industry tool

Google Cloud today announced the launch of Lending DocAI, its first dedicated service for the mortgage industry. The tool, which is now in preview, is meant to help mortgage companies speed up the process of evaluating a borrower’s income and asset documents, using specialized machine learning models to automate routine document reviews.

Some of this may sound familiar because, with Document AI, Google Cloud already offers a more general tool for performing OCR over complex documents and then extracting structured data from them. Lending DocAI is essentially the first vertically specialized Google Cloud service to use this technology.
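
For context, here is roughly what calling the general Document AI API looks like from Python — the project, location, processor ID and file name below are placeholders, and Lending DocAI’s mortgage-specific processors sit on top of this same process-and-extract flow:

```python
# A hedged sketch of Google Cloud's Document AI Python client (v1).
# Identifiers are placeholders; requires google-cloud-documentai.
from google.cloud import documentai_v1 as documentai

client = documentai.DocumentProcessorServiceClient()
name = client.processor_path("my-project", "us", "my-processor-id")

with open("paystub.pdf", "rb") as f:
    raw = documentai.RawDocument(content=f.read(),
                                 mime_type="application/pdf")

result = client.process_document(
    request=documentai.ProcessRequest(name=name, raw_document=raw))

# Each extracted entity carries a type, the matched text and a confidence
for entity in result.document.entities:
    print(entity.type_, entity.mention_text, entity.confidence)
```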

“Our goal is to give you the right tools to help borrowers and lenders have a better experience and to close mortgage loans in shorter time frames, benefiting all parties involved,” writes Google product manager Sudheera Vanguri. “With Lending DocAI, you will reduce mortgage processing time and costs, streamline data capture, and support regulatory and compliance requirements.”

Google argues that its tool will speed up the mortgage workflow and improve the experience for borrowers, too. If you’ve ever gone through the mortgage process, you know how much time it takes to compile all of the necessary documents and how much lag there is before your bank or mortgage broker tells you that everything is in order (or not).

In addition, Google Cloud also argues that this technology can help “reduce risk and enhance compliance posture by leveraging a technology stack (e.g. data access controls and transparency, data residency, customer managed encryption keys) that reduces the risk of implementing an AI strategy.”

In many ways, this new product is a good example of Google Cloud’s current strategy under the leadership of its CEO Thomas Kurian. While it continues to develop a plethora of general services for developers at every level, it now also bundles these together to sell as complete solutions to enterprises in various verticals. That’s where Google Cloud believes it can generate the most benefit for these companies — and hence generate the most revenue. With industry solutions for retailers, telcos, gaming companies and more — and industry partners to help them get up to speed — Kurian and his team believe that they can offer solutions while competitors focus on offering tools. So far, that strategy seems to be working out alright, with Google Cloud’s revenue growing over 43 percent in the last quarter.

#artificial-intelligence, #cloud, #developer, #economy, #finance, #google, #loans, #machine-learning, #mortgage, #ocr

Duplex, Google’s conversational A.I., has updated 3M+ business listings since pandemic

Google today offered an update on the status of Duplex, its A.I. technology that uses natural conversations to get things done — like making restaurant reservations, booking appointments, or updating a Google Business listing, for example. When the pandemic began, Google expanded its use of Duplex for business updates to eight countries, and has since made over 3 million updates to business listings — including pharmacies, restaurants and grocery stores.

These updates have been seen over 20 billion times across Maps and Search, the company says.

The A.I. technology, first introduced at the Google I/O developer conference in 2018, is able to place calls to businesses and interact with the people who answer the phone. In the case of reservations or appointment setting, it can request dates and times, respond to questions, and even make sounds to make the A.I. seem more like a person. For instance, it can insert subtle vocal breaks, like “mm-hm” and “um,” into its conversations.

Since launching, Duplex in Google Assistant has completed over a million bookings, Google announced today.

The company also noted it began to use Duplex to automatically update business information on Google Maps and Search in the U.S. last year, saving business owners from having to manually update details like store hours, or whether they offer takeout, among other things.

Last year, Google also brought Duplex to the web in the U.S., to help users book things like movie tickets and rental cars. Today, Google says it will begin piloting the same experience with other things, like shopping and ordering food for a faster checkout experience.

Just a few weeks ago, Google also introduced another Duplex-powered feature, “Hold for Me,” which lets you use Google Assistant to wait on hold on your phone call, then alert you when someone joins the line.

Thanks to advances in neural speech recognition and synthesis, and in Google’s own new language understanding models, the company says today that 99% of Duplex calls are entirely automated.

The Duplex update was one of several announcements Google made today at its Search On 2020 event, where it introduced a number of search improvements, including the ability to search for songs by humming, better guess at misspellings, point users to the correct part of a page to answer their question, tag key moments in videos, and more.

#a-i, #artificial-intelligence, #duplex, #google, #google-assistant, #google-search, #google-maps, #google-voice, #speech-recognition, #tc

Google launches a slew of Search updates

Google today announced a number of improvements to its core search engine, with a strong focus on how the company is using AI to help its users. These include the ability to better answer both very specific and very broad questions, as well as a new algorithm to better handle typos in your queries. The company also announced updates to Google Lens and other Search-related tools. Most of these are meant to be useful, but some are also just fun: you will now be able to hum a song and Google Assistant will try to find the right song for you, for example.

As Google noted, 1 in 10 search queries is misspelled. The company already does a pretty good job dealing with those through its ‘did you mean’ feature. Now, the company is launching an improvement to this algorithm that uses a deep neural net with 680 million parameters to better understand the context of your search query.

Another nifty new feature integrates various data sources — previously only available as part of Google’s Open Data Commons — into Search. Now, if you ask questions about something like “employment in Chicago,” Google’s Knowledge Graph will trigger and show you graphs with this data right on the search results page.

Another update the company announced today is its system’s ability to index parts of pages to better answer niche queries like “how do I determine if my windows have UV glass?” The system can now point you right to a paragraph on a DIY forum. In total, this new system will improve about 7% of queries, Google said.

For broader questions, Google is now also using its AI system to better understand the nuances of what a page is about to better answer these queries.

These days, a lot of content can be found in videos, too. Google is now using advanced computer vision and speech recognition to tag key moments in videos — that’s something you can already find in Search these days, but this new algorithm should make it even easier, especially for videos where the creators haven’t already tagged the content.

Other updates include one to Google Lens that lets you ask the app to read out a passage from a photo of a book — no matter the language. Lens can now also understand math formulas — and then show you step-by-step guides and videos to solve them. This doesn’t just work for math, but also chemistry, biology and physics.

Given that the holiday shopping season is coming up, it’s maybe no surprise that Google also launched a number of updates to its shopping services. Specifically, the company is launching a new feature in Chrome and the Google App where you can now long-tap on any image and then find related products. And for the fashion-challenged, the service will also show you related items that tend to show up in related images.

If you’re shopping for a car, you will now also be able to get an AR view of it so you can see what it looks like in your driveway.

In Google Maps, you will now also be able to point at a restaurant or other local business while using the AR walking directions to see its opening hours, for example.

Another new Maps feature is that Google will now also show live busyness information right on the map, so you don’t have to specifically search for a place to see how busy it currently is. That’s a useful feature in 2020.

During the event (or really, video premiere, because this is 2020), which was set to the most calming of music, Google’s head of search, Prabhakar Raghavan, also noted that its 2019 BERT update to the natural language understanding part of its Search system is now used for almost every query and available in more languages, including Spanish, Portuguese, Hindi, Arabic, German and Amharic. That’s part of the more than 3,600 updates the company made to its search product in 2019.

All of these announcements are happening against the backdrop of various governments looking into Google’s business practices, so it’s probably no surprise that the event also put an emphasis on Google’s privacy practices and that Raghavan regularly talked about “open access” and that Google Search is free for everyone and everywhere, with ranking policies applied “fairly” to all websites. I’m sure Yelp and other Google competitors wouldn’t quite agree with this last assertion.

#artificial-intelligence, #computing, #google, #google-search, #prabhakar-raghavan, #search-engine, #speech-recognition, #tc, #websites, #world-wide-web, #yelp

Whisper announces $35M Series B to change hearing aids with AI and subscription model

A few years ago, Whisper president and co-founder Andrew Song was talking to his grandfather about his hearing aids. Even though he had spent thousands of dollars on medical devices designed to improve his hearing — and, in the process, his quality of life — he wasn’t wearing them. Song’s co-founders had had similar experiences with grandparents, and as engineers and entrepreneurs, they decided to do something about it and try to build a better, more modern hearing aid.

Today, the company emerged from stealth with a new hearing aid built from the ground up. It uses artificial intelligence to learn and adjust automatically to different hearing situations, like a noisy restaurant or watching TV. And you don’t pay thousands of dollars up front; you pay a monthly fee on a three-year subscription and get free software updates along the way.

While it was at it, the company also announced a $35 million Series B investment led by Quiet Ventures with participation from previous investors Sequoia Capital and First Round Capital. The startup has raised a total of $53 million to build the hearing aid system that it is announcing today.

Those discussions with his grandfather prior to starting the company led Song to wonder why he wasn’t wearing the hearing aids, what challenges he was having and why they weren’t working for him — and that eventually led to launching a startup.

“That really inspired us to build, I think, a new kind of product, one that could get better over time and better support the needs of people who use hearing aids, and be a hearing aid that gets better, but also one that could use artificial intelligence to actually improve the sound that somebody gets,” Song explained.

While the founding team had a background in technology and engineering, they did not have expertise in hearing science, so they brought on Dr. Robert Sweetow from the UCSF audiology department to help them.

The technology they’ve built consists of three main components. For starters, there are the hearing aids themselves, which fit on the ear, along with a pocket-sized external box the company calls the Whisper Brain, which it says “works wirelessly with the earpieces to enable a proprietary AI-based Sound Separation Engine.” Finally, there is a smartphone app to update the software on the system.

It is this AI that Song says separates them from other hearing aids. “In the day-to-day rough and tumble when you encounter a more challenging experience, what we call our sound separation engine, which is the kind of AI model that we’ve built to help with that, and that’s what’s going to be there to help do that signal processing — and we think that’s really unique,” he said.

What’s more, just like a self-driving car learns over time and benefits from the data being fed back to the company from all drivers, Song says that the same dynamic is at work with the hearing aid, which learns how to process signals better over time, based on an individual’s experience, but also all of the other Whisper hearing aid users.

The company is offering these hearing aids through a network of hearing aid professionals, rather than over the counter, because Song said that the company recognized that these are complex instruments and it is important to keep audiologists in the loop to help fit and support the hearing aids and work with Whisper customers over the life of the product.

Whisper offers these hearing aids on a subscription basis for $179 per month on a three-year contract, which includes all of the hardware, the software updates, on-going support from the hearing care pro, a 3-year loss and damage insurance and an industry-standard equipment warranty. They are offering an introductory price of $139 per month for a limited time.

At $179 per month, it comes to a total of $6,444 over the three-year period to essentially rent the aids. At the end of the subscription, customers can renew and get updated hardware, or give the hardware back. They do not own the hearing aids.

It’s worth noting that other hearing aid companies also use AI in their hearing aids including Widex and Starkey, neither of which require an external hub. Many hearing aid companies also offer a variety of payment and subscription plans, but Whisper is an attempt to offer a different approach to hearing aids.

#artificial-intelligence, #funding, #hardware, #hearing-aids, #quiet-capital, #recent-funding, #startups, #subscription-model, #tc, #whisper

Nvidia will power world’s fastest AI supercomputer, to be located in Europe

Nvidia is going to be powering the world’s fastest AI supercomputer, a new system dubbed ‘Leonardo’ that’s being built by CINECA, an Italian multi-university consortium and global supercomputing leader. The Leonardo system will offer as much as 10 exaflops of FP16 AI performance and be made up of more than 14,000 Nvidia Ampere-based GPUs once completed.

Leonardo will be one of four new supercomputers supported by a cross-European effort to advance high-performance computing capabilities in the region, and will eventually offer advanced AI capabilities for processing applications across both science and industry. Nvidia will also be supplying its Mellanox HDR InfiniBand networking to the project in order to enable low-latency, high-bandwidth connections across the clusters.

The other computers in the cluster include MeluXina in Luxembourg and Vega in Slovenia, as well as a new supercomputer coming online in the Czech Republic. The pan-European consortium also plans four more supercomputers for Bulgaria, Finland, Portugal and Spain, though those will follow later, and specifics around their performance and locations aren’t yet available.

Some applications that CINECA and the other supercomputers will be used for include analyzing genomes and discovering new therapeutic pathways; tackling data from multiple different sources for space exploration and extraterrestrial planetary research; and modelling weather patterns, including extreme weather events.

#artificial-intelligence, #broadband, #bulgaria, #computing, #czech-republic, #europe, #finland, #flops, #gpu, #luxembourg, #mellanox, #nvidia, #portugal, #science, #spain, #supercomputers, #tc

Spain’s Savana Medica raises $15 million to bring its AI toolkit turning clinical notes into care insights to the US

Savana, a machine learning-based service that turns clinical notes into structured patient information for physicians and pharmacists, has raised $15 million to take its technology from Spain to the U.S., the company said.

The investment was led by Cathay Innovation with participation from the Spanish investment firm Seaya Ventures, which led the company’s previous round, and new investors like MACSF, a French insurance provider for doctors. 

The company has already processed 400 million electronic medical records in English, Spanish, German, and French.

Founded in Madrid in 2014, the company is relocating to New York and is already working with the world’s largest pharmaceutical companies and over 100 healthcare facilities.

“Our mission is to predict the occurrence of disease at the patient level. This focuses our resources on discovering new ways of providing medical knowledge almost in real time — which is more urgent than ever in the context of the pandemic,” said Savana chief executive Jorge Tello. “Healthcare challenges are increasingly global, and we know that the application of AI across health data at scale is essential to accelerate health science.”

Company co-founder and chief medical officer, Dr. Ignacio Hernandez Medrano, also emphasized that while the company is collecting hundreds of millions of electronic records, it’s doing its best to keep that information private.

“One of our main value propositions is that the information remains controlled by the hospital, with privacy guaranteed by the de-identification of patient data before we process it,” he said. 

#articles, #artificial-intelligence, #disease, #electronic-health-records, #health, #machine-learning, #madrid, #new-york, #pharmaceutical, #pharmacy, #seaya-ventures, #spain, #tc, #united-states

Rosita Longevity wants to teach seniors how to live long, healthy lives

Longevity, as far as startups are concerned, tends to be a moonshot-y space where technologies like biotech and AI are experimentally applied in a sort of modern day alchemical quest — and the great hope is to (somehow) ‘hack’ biology and substantially extend the human lifespan. Or even end death altogether.

Coming considerably closer to Earth is Spanish startup Hearts Radiant, which says it’s in the “longevity tech” business but is taking a far more grounded and practical approach to addressing ageing. In short it believes it’s nailed a formula for helping people live to a ripe old age.

And — here’s the key — to do so healthily.

So its moonshot isn’t to help people get to a biblical 150 or even 120. It’s about supporting seniors to live well, up to a ‘good innings’ like 95, while (hopefully) retaining their independence and vitality through the application of technology that creates a structured and engaging lifestyle routine which works to combat age-related conditions such as frailty and social isolation.

Gently does it

The startup is coming out of stealth today to disclose a first tranche of pre-seed funding and chat to TechCrunch about its dream of supporting seniors to live a more active, fulfilling and independent life.

The €450k pre-seed round, which is led by JME.vc with participation from Kfund, Seedcamp and NextVentures, will be used for research and continued development of its Rosita Longevity digital coach. The app has been in beta testing in a limited form since January — currently only for Android devices, given seniors tend to have their relatives’ hand-me-down smartphone hardware (but iOS is on the roadmap) — offering livestreamed and on-demand video classes like cardio flamenco and age-appropriate yoga for its target 60+ year-olds. 

Rosita’s co-founders are husband and wife team, Juan Cartagena (CEO) and Clara Fernández (CCO), along with CTO David Gil. Their premise is that what humans really need, as they age, is guidance and motivation to stay as active as they can, for as long as they can — and that a digital platform is the best way to make personalized, ‘healthy habit’ forming therapy for seniors widely accessible.

“We believe that we have to be a habit engine,” says Cartagena, offering “health longevity” as another descriptor for the scope of what they’re aiming to achieve.

Fernández is drawing directly on her years of experience as CEO of Balneario de Cofrentes, a family business in Valencia, which she describes as a “longevity school” or camp for seniors — and which the website suggests is a combination of spa/hotel, physical therapy/rehabilitation and education center. There she’s been responsible for overseeing activity and education programs tailored to seniors, offering guided exercise and advice on things like disease avoidance and good nutrition.

“Over the last ten years we have developed a very comprehensive strategy on how to educate, how to create habits in the senior community so that they can increase their healthy lifespan,” she explains. “We have a specific methodology. We start with teaching seniors how to manage their current health situation and we progressively start educating them with lifestyle, prevention of the main diseases, and also education about the latest discoveries in the field of science.”

“I realized that the main way to expand this was taking it online,” she adds on the decision to package the program into a digital coaching app — “where a bigger percentage of the senior population could benefit”.

Lifestyle is a key part of the proposition. But they’re most comfortable with the badge of ‘longevity tech’.

“We are trying not to play in fitness for many reasons,” adds Cartagena. “It’s limited in scope. And we are trying to go beyond that — it’s just the starting point [for reducing frailty] and the issues related to that, including the final ‘disease’ which would be dependence.”

Since the premise underlying the Rosita app hinges on the proven health benefits of regular, moderate exercise as a means of combating a range of age-related conditions — such as muscle mass loss and reduced bone density leading to frailty (which in turn can lead to a fall, a broken hip, and a senior who’s suddenly dependent on personal care) — or, beyond that, as a general bolster for mental and brain health — they are squatting on established (rather than moonshotty) science.

Although they do still need to demonstrate that digitally delivered, personalized programs of lifestyle coaching — featuring familiar but still sometimes clunky technologies like AI and chatbots — can actually help reverse frailty (in the first instance) for seniors participating remotely, with no human physiotherapists on hand to help.

Screenshots of the digital coaching app (Image credit: Hearts Radiant/Rosita Longevity)

Hence some of the funding will go on researching how their bricks-and-mortar ‘longevity school’ program translates to a digital platform. And, more specifically, whether personalised digital coaching for 60+ year olds will yield tangible reductions in frailty (and thus gains in active years) in the same way that in-person group exercises have already been shown to. (One area that certainly merits close study is whether social human contact derived from a purely digital experience vs in-person group therapy makes a difference to treatment outcomes.)

It’s true that no smartphone in the world can transform a bog-standard bathroom into a full on luxury spa. But other elements of the Balneario’s program simply need digitizing and structuring to serve up similar benefits, is the thinking.

The sorts of digital activity programs they’re devising for the app are designed to be fun for seniors as well as beneficial and appropriate for a particular frailty level. Examples of classes currently offered include reduced mobility dance, burpee-free ‘cross fit’, and osteoarthritis-safe karate.

The onboarding process involves an assessment to determine a senior’s frailty level in order that users are offered content at an activity level that’s appropriate for their physical condition.

Long is the road

Cartagena notes they’re working with Dr. José Viña, a professor at the University of Valencia, who is renowned in the longevity field. “He has proven he can revert frailty in the earliest stages by applying a certain methodology to specific muscles with a treatment of exercise-fusion — with some lifestyle habits. Now what has not been proven is whether that is applicable to a remote environment where people do it on their own,” he adds. “And this is what we are doing right now. This pre-seed round is basically to take that uncertainty, put that in front of a few thousand [app] users, take that research… and see if in the next 12 months we improve [their frailty level].”

The actual Balneario is closed at the moment, in this health-stricken year of the novel coronavirus, but the plan is to reopen in March 2021 — and then introduce the annual intake to Rosita — garnering ongoing feedback on whether or not it’s steering them toward health-supporting habits.

“It’s all about understanding the customer so well and that’s where the competitive advantage of this company really comes from,” argues Cartagena. “By having 15,000 seniors per year coming to the school, every year we understand the customer very well, their habits, what they do, what they don’t. They come every year so we can ask them what did you do last year?

“That will be for us the way to have a massive focus group — let’s say a sliding window of focus group that we can see for ten days using the product — and we can iterate much faster by seeing not people just through our analytics but people who are using the product in front of us. One hundred or 500 people a day in our resort. And I think that will be a fundamental way in which we can actually build something that people really need and use and care about.”

The current version of the app doesn’t yet include AI-powered personalized coaching. But that’s again where the pre-seed funding comes in. “The initial coach for education and frailty itineraries should be ready in three weeks (together with our iOS app),” says Cartagena. “This solves a pressing problem our users have today.

“The personalized coach (pathologies, followups, context, atomization of exercises, etc) has a lot of logic behind and testing this properly will take more time. We will release that intelligence slowly and we should feel ‘proud’ by Christmas. That will become our Habits Engine. Together with our geroscience research plan, those are the uncertainties to get right with our current funding.”

Targeting chronic pain is another key aim for the app, although he concedes there may be some types of pain they won’t be able to address. The co-founders add that the app is intended to supplement not replace traditional healthcare — pointing out it’s being designed to be more forward-looking; aka that prevention of age-related problems is exactly the strategy to live better for longer.

“Telehealth is more about managing a disease — we’re more about preventing,” adds Fernández. “We’re more about discovering what are the indicators and the tools to make sure that the senior population… understand what is happening to their body, what is going to happen over the next ten years and start to slowly develop those habits so that they can minimize, reduce the evolution, the natural ageing process.”

Cartagena notes they are also working with researchers on developing sensor hardware that could go alongside the app to enhance their ability to predict frailty — suggesting it will allow them to define a wider/more nuanced range of user categories (the first version of the app has three categories but he says they want to be able to offer nine).

Smartphone and sensor hardware combined with AI technology has, for some years now, been enabling a new generation of guided physical therapy apps that seek to offer an alternative to pharmaceutical-based management for chronic pain — such as Kaia Health and Hinge Health, to name two. And of course mindfulness/guided meditation has become a huge app business. While the broader concept of ‘digital health’ has, over the past half decade or so, seen CBT-style therapy programs packaged up to be put on tap in people’s pockets. So there’s nothing inherently strange or exotic about the idea of a longevity coach for seniors.

Albeit, getting the user experience right could well be the biggest challenge. Cartagena says the app’s tone is important — talking in terms of not wanting to be “patronizing” or make seniors feel like Rosita is giving them “homework” — so they really click with the virtual coach and stay engaged.

Fernández too emphasizes the goal is to sustain good habits. Ergo, this is a (gentle) marathon not a sprint. 

If they can design a safe and engaging experience that seniors don’t find off-putting, tedious or confusing, the potential to expand access to therapies, activities and information that can improve people’s quality of life looks huge. Frailty is also only the team’s first focus. As they develop the product and grow usage they want to be able to support their users to form healthy habits that could help stave off neurodegenerative conditions like dementia, for example. Combating loneliness and social isolation is another goal. So there’s a whole range of health plans they’re hoping Rosita will be able to deliver.

“What we’re doing right now is focused especially on frailty — we’re developing the personalized AI coach on top of that — and what we’re going to do is start adding the layers of all the different health plans that we’re going to be establishing off the longevity coach,” says Fernández. “Nutrition, cognitive stimulation, relaxation and breathing, and on top of that we will put all the prevention strategies — and all the classes that we’re preparing for longevity.

“One of the things that we have tested in the clinic that is very important is to educate the user. Not just on what they need to do today — but on what is happening to their ageing process, what is happening to their metabolism, what is happening to their musculoskeletal system. How and why your body is ageing is fundamental so you can make small decisions. By empowering users through education they can understand and relate to why this specific thing that you’re telling them today is useful in the long run.”

“One of the most successful strategies that we have built is creating this whole course on longevity which is what is happening to your body — what science knows today about the field of longevity,” she adds. “And how you can minimize those symptoms. And those things we’re translating completely into the [app].”

Cartagena also points to the risk of a COVID-19 ‘4th wave’ of deaths that could result from seniors becoming more frail than they otherwise would after being forced into a more sedentary existence as a result of lockdown measures and concerns about their risk of exposure to the coronavirus.

Or, in other words, sitting at home on the sofa might help seniors stay free of the virus, but if abrupt inactivity puts their vitality at risk, that too could cut short a healthy lifespan. So tools to help older people stay active are looking more important than ever. And to that end he says the app will remain free throughout the pandemic — envisaging that could stretch into 2022.

The plan for the business model is B2C, likely focused on selling premium content — such as connecting users directly with a therapist to chat through their progress. In the meantime, they’re relying on VC to get their digital “motivation engine” to market.

Right now they have 5,000 “pre-registrations” for the app and 1,000 seniors actively testing the product (all aged 60 to 80, in Spain). They’ve also just pushed out an update, moving the software out of the ‘early access’ phase — as they progress toward launching their “personalized AI coach for longevity.”

And while Rosita’s coaching is currently only available in Spanish — with the team having recorded “hundreds” of videos so far for different levels and chronic pathologies — the aim is to scale up in Europe (and perhaps beyond), starting with the U.K. market. Which makes English the next natural language for them to build out content.

#apps, #artificial-intelligence, #digital-health, #europe, #health, #hearts-radiant, #jme-vc, #longevity, #rosita-longevity, #seniors, #tc

What the iPhone 12 tells us about the state of the smartphone industry in 2020

The smartphone industry was in transition well before COVID-19 was a blip on anyone’s radar. More than 13 years after the launch of the original iPhone, these products have long since transitioned from luxury items to commodities, losing some of their luster in the process. The past several years have seen slower upgrade cycles as consumers grew reluctant to pay $1,000 or more for new devices.

And while the iPhone 12 was no doubt in development long before the current pandemic, the pandemic’s global shutdown has only exacerbated many existing problems for smartphone makers. The clearest representation of Apple’s reaction is in the sheer number of iPhones announced at today’s “Hi Speed” event. Long gone are the days when a company could rest on a single flagship or two.

Today’s event brought a grand total of four new iPhone models, ranging in price from $699 to $1,099: the 12, 12 mini, 12 Pro and 12 Pro Max. As with the Apple Watch, the company is keeping last year’s iPhone 11 around and has cut the price to $599. That puts the older model in the high-mid-range for Android devices, but represents a far cheaper entry point than we’re accustomed to for Apple phones.

#5g, #apple, #apple-iphone-event-2020, #artificial-intelligence, #covid-19, #hardware, #iphone, #iphone-12, #mobile

Google launches a suite of tech-powered tools for reporters, Journalist Studio

Google is putting A.I. and machine learning technologies into the hands of journalists. The company this morning announced a suite of new tools, Journalist Studio, that will allow reporters to do their work more easily. At launch, the suite includes a host of existing tools as well as two new products aimed at helping reporters search across large documents and visualizing data.

The first tool is called Pinpoint and is designed to help reporters work with large file sets — like those that contain hundreds of thousands of documents.

Pinpoint will work as an alternative to using the “Ctrl + F” function to manually seek out specific keywords in the documents. Instead, the tool takes advantage of Google Search and its A.I.-powered Knowledge Graph, along with optical character recognition and speech-to-text technologies.

It’s capable of sorting through scanned PDFs, images, handwritten notes, and audio files to automatically identify the key people, organizations, and locations that are mentioned. Pinpoint will highlight these terms and even their synonyms across the files for easy access to the key data.
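
Pinpoint’s pipeline is Google-internal, but the core of that last step — pulling the key people, organizations and locations out of a pile of extracted text — is standard named-entity recognition, which can be sketched with an open-source model (the file name and tallying logic here are illustrative, not Google’s):

```python
# Rough stand-in for Pinpoint-style key-term extraction using spaCy NER.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
text = open("deposition_page_001.txt").read()  # e.g. OCR output

# Tally every person, organization and place mentioned in the document
mentions = Counter(
    (ent.label_, ent.text)
    for ent in nlp(text).ents
    if ent.label_ in {"PERSON", "ORG", "GPE", "LOC"})

for (label, name), count in mentions.most_common(20):
    print(f"{label:7} {name}: {count}")
```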

The tool has already been put to use by journalists at USA Today for its report on 40,600 COVID-19-related deaths tied to nursing homes. Reveal also used Pinpoint to look into the COVID-19 “testing disaster” in ICE detention centers. And The Washington Post used it for a piece about the opioid crisis.

Because it’s also useful for speeding up research, Google notes Pinpoint can be used for shorter-term projects, as well — like Philippines-based Rappler’s examination of CIA reports from the 1970s or Mexico-based Verificado MX’s fast fact checking of the government’s daily pandemic updates.

Pinpoint is available now to interested journalists, who can sign up to request access. The tool currently supports seven languages: English, French, German, Italian, Polish, Portuguese, and Spanish.

Google has also partnered with The Center for Public Integrity, Document Cloud, Stanford University’s Big Local News program and The Washington Post to create shared public collections that are available to all users.

The second new tool being introduced today is The Common Knowledge Project, still in beta.

The tool allows journalists to explore, visualize and share data about important issues in their local communities by creating their own interactive charts, using thousands of data points, in a matter of minutes, the company says.

These charts can then be embedded in reporters’ stories on the web or published to social media.

This particular tool was built by the visual journalism team at Polygraph, supported by the Google News Initiative. The data for use in The Common Knowledge Project comes from Data Commons, which includes thousands of public datasets from organizations like the U.S. Census and the CDC.
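
Data Commons is itself publicly queryable, so the kind of chart The Common Knowledge Project builds can be approximated directly. A minimal sketch using the public datacommons Python client — the place ID and statistical-variable name below are assumptions on my part:

```python
# Fetch an employment time series for Chicago from Data Commons.
# Requires: pip install datacommons. DCID and variable are illustrative.
import datacommons as dc

series = dc.get_stat_series("geoId/1714000",         # Chicago, IL
                            "Count_Person_Employed")  # employed persons

for date in sorted(series):
    print(date, series[date])
```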

At launch, the tool offers U.S. data on issues including demographics, economy, housing, education, and crime.

As it’s still in beta testing, Google is asking journalists to submit their ideas for how it can be improved.

Google will demonstrate and discuss these new tools in more detail during a series of upcoming virtual events, including the Online News Association’s conference on Thursday, October 15. The Google News Initiative training will also soon host a six-part series focused on tools for reporters in seven different languages across nine regions, starting the week of October 20.

The new programs are available on the Journalist Studio website, which also organizes other tools and resources for reporters, including Google’s account security system, the Advanced Protection Program; direct access to the Data Commons; DataSet Search; a Fact Check Explorer; Flourish, a tool for visualizing data using customizable templates; the Google Data GIF Maker; Google Public Data Explorer; Google Trends; the DIY VPN Outline; the DDoS defense tool Project Shield; and the tiled cartogram maker Tilegrams.

The site additionally points to other services from Google, like Google Drive, Google Scholar, Google Earth, Google News, and others, as well as training resources.

#ai, #artificial-intelligence, #data-visualization, #google, #journalist, #machine-learning, #media, #optical-character-recognition, #reporters, #social-media

Dataloop raises $11M Series A round for its AI data management platform

Dataloop, a Tel Aviv-based startup that specializes in helping businesses manage the entire data lifecycle for their AI projects, including helping them annotate their datasets, today announced that it has now raised a total of $16 million. This includes a previously unreported $5 million seed round, as well as an $11 million Series A round that recently closed.

The Series A round was led by Amiti Ventures with participation from F2 Venture Capital, crowdfunding platform OurCrowd, NextLeap Ventures and SeedIL Ventures.

“Many organizations continue to struggle with moving their AI and ML projects into production as a result of data labeling limitations and a lack of real time validation that can only be achieved with human input into the system,” said Dataloop CEO Eran Shlomo. “With this investment, we are committed, along with our partners, to overcoming these roadblocks and providing next generation data management tools that will transform the AI industry and meet the rising demand for innovation in global markets.”

For the most part, Dataloop specializes in helping businesses manage and annotate their visual data. It’s agnostic to the vertical its customers are in, but we’re talking about anything from robotics and drones to retail and autonomous driving.

The platform itself centers around the ‘humans in the loop’ model that complements the automated systems with the ability for humans to train and correct the model as needed. It combines the hosted annotation platform with a Python SDK and REST API for developers, as well as a serverless Functions-as-a-Service environment that runs on top of a Kubernetes cluster for automating dataflows.
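
To make that concrete, here is a small sketch against Dataloop’s public Python SDK, dtlpy — the project, dataset and file names are placeholders, and the exact calls should be checked against the company’s documentation:

```python
# Hedged sketch: upload an image to a Dataloop dataset and attach a box
# annotation via the dtlpy SDK (pip install dtlpy). Names are made up.
import dtlpy as dl

if dl.token_expired():
    dl.login()  # opens a browser-based login flow

project = dl.projects.get(project_name="fruit-sorting")
dataset = project.datasets.get(dataset_name="conveyor-images")

# Upload a local image, then add a labeled bounding box to it
item = dataset.items.upload(local_path="/tmp/apple_042.jpg")
builder = item.annotations.builder()
builder.add(annotation_definition=dl.Box(
    top=80, left=120, bottom=240, right=310, label="apple"))
item.annotations.upload(builder)
```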

The company was founded in 2017. It’ll use the new funding to grow its presence in the U.S. and European markets, something that’s pretty standard for Israeli startups, and build out its engineering team as well.

#artificial-intelligence, #ceo, #enterprise, #free-software, #ml, #ourcrowd, #python, #serverless-computing, #tc, #tel-aviv, #united-states

Edge computing startup Edgify secures $6.5M seed from Octopus, Mangrove and a semiconductor giant

Edgify, which builds AI for edge computing, has secured a $6.5M seed funding round backed by Octopus Ventures, Mangrove Capital Partners and an unnamed semiconductor giant. The name was not released, but TechCrunch understands it may be Intel Corp. or Qualcomm Inc.

Edgify’s technology allows ‘edge devices’ (devices at the edge of the internet) to interpret vast amounts of data, train an AI model locally, and then share that learning across a network of similar devices — which in turn trains all the other devices, in anything from computer vision to NLP to voice recognition, or any other form of AI.
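
Edgify hasn’t published its training protocol, but ‘train locally, share the learning’ is the shape of federated averaging, which the sketch below illustrates with plain NumPy: each simulated device fits a model on its own private data, and only the weights — never the raw data — are pooled into the next global model.

```python
# Toy federated averaging: four simulated edge devices train a logistic
# model locally; only their weights are averaged. Not Edgify's actual code.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def federated_round(global_w, device_data):
    """Average locally trained weights, weighted by local dataset size."""
    sizes = [len(y) for _, y in device_data]
    locals_ = [local_update(global_w, X, y) for X, y in device_data]
    return np.average(locals_, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(4):  # four edge devices, each with private data
    X = rng.normal(size=(100, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=100) > 0).astype(float)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):  # twenty aggregation rounds
    w = federated_round(w, devices)
print("learned weights:", w)
```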

The technology can be applied to anything with a CPU, GPU or NPU — MRI machines, connected cars, checkout lanes, mobile devices. Edgify’s technology is already being used in supermarkets, for instance.

Ofri Ben-Porat, CEO and co-founder of Edgify, commented in a statement: “Edgify allows companies, from any industry, to train complete deep learning and machine learning models, directly on their own edge devices. This mitigates the need for any data transfer to the Cloud and also grants them close to perfect accuracy every time, and without the need to retrain centrally.” 

Mangrove partner Hans-Jürgen Schmitz who will join Edgify’s Board comments: “We expect a surge in AI adoption across multiple industries with significant long-term potential for Edgify in medical and manufacturing, just to name a few.” 

Simon King, Partner and Deep Tech Investor at Octopus Ventures added: “As the interconnected world we live in produces more and more data, AI at the edge is becoming increasingly important to process large volumes of information.”

So-called ‘edge computing’ is seen as one of the frontiers of deep tech right now.

#articles, #artificial-intelligence, #cloud-computing, #computing, #cybernetics, #deep-learning, #edge-computing, #emerging-technologies, #europe, #internet-of-things, #machine-learning, #mangrove-capital-partners, #manufacturing, #mobile-devices, #mri, #octopus-ventures, #science-and-technology, #semiconductor, #tc, #voice-recognition

South Korea pushes for AI semiconductors as global demand grows

The South Korean government has made no secret of its ambition to be a key player in the global artificial intelligence industry, including making the semiconductors powering AI functionalities.

This week, the country’s information and communications technology regulator announced plans to develop up to 50 types of AI-focused system semiconductors by 2030, Yonhap News Agency reported. The government will be on the hunt for thousands of local experts to lead the new wave of innovation.

South Korea has made several promises to support next-generation chip companies in recent times. Earlier this year, for example, it announced plans to spend about 1 trillion won ($870 million) on AI chip commercialization and production before 2029. Last year, President Moon Jae-in announced his “Presidential Initiative for AI” to raise public awareness of the industry.

These efforts come amid growing demand for AI-related chips, which, by McKinsey estimates, could account for almost 20% of all semiconductor demand and generate about $67 billion in revenue by 2025.

South Korea is already home to two of the world’s largest memory chip makers — Samsung and SK hynix. While that’s a lucrative industry, it’s one relying more on “the manufacturing process rather than core technologies,” observed Seewan Toong, an independent IT industry expert.

“It’s about making the chip smaller, denser, more efficient, and putting more memory on one chip,” he added.

The country wants to make its semiconductors smarter and vows to own 20% of the global AI chip market by 2030, according to Yonhap.

Samsung dabbled in next-gen chips when it became the mass-production partner for Baidu’s AI chips late last year. In July, the conglomerate announced it was hiring 1,000 new staff to work on chips and AI. SK hynix has picked its own Chinese ally by backing Horizon Robotics, an AI chip designer last valued at $3 billion.

China, which has long focused on the application of AI rather than fundamental research, has similarly shelled out state funds for home-grown semiconductor companies as the country suffers from U.S. sanctions on core technologies. The question is how many startups, under state support, will survive to compete with global behemoths like Nvidia and Qualcomm.

#ai, #ai-chips, #artificial-intelligence, #asia, #korea, #south-korea

Alphabet’s latest moonshot is a field-roving, plant-inspecting robo-buggy

Alphabet (you know… Google) has taken the wraps off the latest “moonshot” from its X labs: A robotic buggy that cruises over crops, inspecting each plant individually and, perhaps, generating the kind of “big data” that agriculture needs to keep up with the demands of a hungry world.

Mineral is the name of the project, and there’s no hidden meaning there. The team just thinks minerals are really important to agriculture.

Announced with little fanfare in a blog post and accompanying site, Mineral is still very much in the experimental phase. It was born when the team saw that efforts to digitize agriculture had not found as much success as expected, at a time when sustainable food production is growing in importance every year.

“These new streams of data are either overwhelming or don’t measure up to the complexity of agriculture, so they defer back to things like tradition, instinct or habit,” writes Mineral head Elliott Grant. What’s needed is something both more comprehensive and more accessible.

Much as Google originally began with the idea of indexing the entire web and organizing that information, Grant and the team imagined what might be possible if every plant in a field were to be measured and adjusted for individually.

A robotic plant inspector from Mineral.

The way to do this, they decided, was the “Plant buggy,” a machine that can intelligently and indefatigably navigate fields and do those tedious and repetitive inspections without pause. With reliable data at a plant-to-plant scale, growers can initiate solutions at that scale as well — a dollop of fertilizer here, a spritz of a very specific insecticide there.

They’re not the first to think so. FarmWise raised quite a bit of money last year to expand from autonomous weed-pulling to a full-featured plant intelligence platform.

As with previous X projects at the outset, there’s a lot of talk about what could happen in the future and how they got where they are, but rather little in the way of concrete information like “our robo-buggy lowered waste on a hundred acres of soy by 10 percent.” No doubt we’ll hear more as the project digs in.

#agriculture, #alphabet, #artificial-intelligence, #farming, #farmwise, #google, #google-x, #greentech, #hardware, #robotics

Microsoft and partners aim to shrink the ‘data desert’ limiting accessible AI

AI-based tools like computer vision and voice interfaces have the potential to be life-changing for people with disabilities, but the truth is those AI models are usually built with very little data sourced from those people. Microsoft is working with several nonprofit partners to help make these tools reflect the needs and everyday realities of people living with conditions like blindness and limited mobility.

Consider a computer vision system that recognizes objects and can describe what is, for example, on a table. Chances are that algorithm was trained with data collected by able-bodied people, from their point of view — likely standing.

A person in a wheelchair looking to do the same thing might find the system isn’t nearly as effective from that lower angle. Similarly, a blind person may not know whether they’re holding the camera in the right position, or for long enough, for the algorithm to do its work, so they must proceed by trial and error.

Or consider a face recognition algorithm that’s meant to tell when you’re paying attention to the screen for some metric or another. What’s the likelihood that, among the faces used to train that system, any significant number have things like a ventilator, a puff-and-blow controller or a headstrap obscuring part of the face? These “confounders” can significantly affect accuracy if the system has never seen anything like them.

Facial recognition software that fails on people with dark skin, or has lower accuracy on women, is a common example of this sort of “garbage in, garbage out.” Less commonly discussed but no less important is the visual representation of people with disabilities, or of their point of view.

Microsoft today announced a handful of efforts co-led by advocacy organizations that hope to do something about this “data desert” limiting the inclusivity of AI.

The first is a collaboration with Team Gleason, an organization formed to improve awareness around the neuromotor degenerative disease amyotrophic lateral sclerosis, or ALS (it’s named after former NFL star Steve Gleason, who was diagnosed with the disease some years back).

Their concern is the one above regarding facial recognition. People living with ALS have a huge variety of symptoms and assistive technologies, and those can interfere with algorithms that have never seen them before. That becomes an issue if, for example, a company wanted to ship gaze tracking software that relied on face recognition, as Microsoft would surely like to do.

“Computer vision and machine learning don’t represent the use cases and looks of people with ALS and other conditions,” said Team Gleason’s Blair Casey. “Everybody’s situation is different and the way they use technology is different. People find the most creative ways to be efficient and comfortable.”

Project Insight is the name of a new joint effort with Microsoft that will collect face imagery of volunteer users with ALS as they go about their business. In time that face data will be integrated with Microsoft’s existing cognitive services, but also released freely so others can improve their own algorithms with it.

They aim to have a release in late 2021. If the timeframe seems a little long, Microsoft’s Mary Bellard, from the company’s AI for Accessibility effort, pointed out that they’re basically starting from scratch and getting it right is important.

“Research leads to insights, insights lead to models that engineers bring into products. But we have to have data to make it accurate enough to be in a product in the first place,” she said. “The data will be shared — for sure this is not about making any one product better, it’s about accelerating research around these complex opportunities. And that’s work we don’t want to do alone.”

Another opportunity for improvement is in sourcing images from users who don’t use an app the same way as most. Like the person with impaired vision or in a wheelchair mentioned above, there’s a want of data from their perspective. There are two efforts aiming to address this.

Images taken by people needing objects in them to be identified or located.

Image Credits: ORBIT

One, with City University of London, is the expansion and eventual public release of the Object Recognition for Blind Image Training (ORBIT) project, which is assembling a dataset for identifying everyday objects — a can of pop, a keyring — using a smartphone camera. Unlike other datasets, though, this one will be sourced entirely from blind users, meaning the algorithm will learn from the start to work with the kind of data it will be given later anyway.

AI captioned images

Image Credits: Microsoft

The other is an expansion of VizWiz to better encompass this kind of data. The tool is used by people who need help right away in telling, say, whether a cup of yogurt is expired or if there’s a car in the driveway. Microsoft worked with the app’s creator, Danna Gurari, to improve the app’s existing database of tens of thousands of images with associated questions and captions. They’re also working to alert a user when their image is too dark or blurry to analyze or submit.
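Flagging those failure cases doesn’t require the full recognition model. As a rough illustration (this is a standard computer vision heuristic, not Microsoft’s actual check), an app can threshold the image’s mean brightness and the variance of its Laplacian, a common sharpness measure:

```python
import cv2  # OpenCV; the thresholds below are illustrative and device-dependent

def quality_warning(path):
    """Return a warning string if an image looks too dark or blurry, else None."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return "could not read image"
    if gray.mean() < 40:  # very low average luminance
        return "image looks too dark"
    # Sharp images have strong edges, so the Laplacian (an edge detector)
    # has high variance; a low value suggests blur.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < 100:
        return "image looks too blurry"
    return None

# Example: warn the user before uploading
warning = quality_warning("yogurt_label.jpg")
if warning:
    print(f"Please retake the photo: {warning}")
```

A check this cheap can run on-device in real time, which matters for users who can’t visually confirm the shot themselves.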

Inclusivity is complex because it’s about people and systems that, perhaps without even realizing it, define “normal” and then don’t work outside of those norms. If AI is going to be inclusive, “normal” needs to be redefined and that’s going to take a lot of hard work. Until recently, people weren’t even talking about it. But that’s changing.

“This is stuff the ALS community wanted years ago,” said Casey. “This is technology that exists — it’s sitting on a shelf. Let’s put it to use. When we talk about it, people will do more, and that’s something the community needs as a whole.”

#accessibility, #als, #artificial-intelligence, #computer-vision, #disabilities, #face-recognition, #facial-recognition, #microsoft, #tc, #team-gleason

0

Nest launches its $129 thermostat with a new design, swipe and touch interface on the side

Google’s Nest unit today launched its newest thermostat. At $129, the Nest Thermostat is the company’s most affordable one yet, but it’s also the first to feature a new swipe and tap interface on its side, as well as Google’s Soli radar technology to sense room occupancy and when you are near the device.

Soli, it is worth noting, is not being used for enabling gesture controls. Instead, because the design team wanted a solid mirror finish on the front, Nest decided to use it purely for motion sensing.

The new thermostat, which is made from 49 percent recycled plastic, will come in four colors: Snow, Charcoal, Sand and Fog. The company is also launching a $14.99 trim kit to help you hide any imperfections in your paint when you install the new thermostat.

Image Credits: Nest

“It has this inviting form with this intuitive swipe up and down control, which lets you interact with this product really naturally, instead of pressing these tiny little buttons that most traditional thermostats have,” Nest product lead Ruchi Desai told me.

It’s worth noting that this new version is mostly meant for users in smaller apartments or condos, as it doesn’t support Nest’s remote sensors. To get support for those, you’ll need a Nest Thermostat E (which can occasionally be found for around $139) or the fully-fledged Nest Learning Thermostat.

Speaking of learning, among the features the team is highlighting with this release is the thermostat’s ability to help you schedule custom temperature settings for different times of the day — and different days. Nest calls this Quick Schedule.

“Unlike the Nest Learning Thermostat, which has the auto-schedule [feature], this one actually offers the ability to create temperature presets, which gives you the ability to set up a schedule based on your lifestyle, based on your preferences,” Desai said. “It will also give you the flexibility of holding temperatures, which means it’ll override the schedule that you have in times when you need the control and flexibility.”
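Mechanically, a preset schedule with a temperature hold is straightforward. Here is a minimal sketch of that logic as described; the preset times and temperatures are made up for illustration and aren’t Nest’s actual defaults:

```python
from datetime import time

# Hypothetical Quick Schedule: (start time, target °F), in order through the day.
PRESETS = [
    (time(6, 0), 70),   # morning warm-up
    (time(9, 0), 64),   # away hours
    (time(17, 0), 70),  # evening
    (time(22, 0), 62),  # night
]

hold = None  # a held temperature overrides the schedule until cleared

def target_temp(now):
    """Return the setpoint for the current time, honoring any hold."""
    if hold is not None:
        return hold
    # The last preset whose start time has passed wins; before the first
    # preset of the day, carry over the previous night's setting.
    setpoint = PRESETS[-1][1]
    for start, temp in PRESETS:
        if now >= start:
            setpoint = temp
    return setpoint

print(target_temp(time(7, 30)))  # 70: the morning preset applies
hold = 68
print(target_temp(time(7, 30)))  # 68: the hold overrides the schedule
```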

Image Credits: Nest

That sounds a lot like what you’d find in most of today’s smart thermostats from the likes of Ecobee and other Nest competitors, but it’s a first for Nest.

With its Savings Finder feature, the thermostat can also look for small optimizations and suggest minor tweaks that can result in additional energy savings.

Thanks to the new built-in Soli radar chip, the device can automatically lower the temperature when you’re not home. It’s a shame the team isn’t using the chip for any gesture controls, something Google did with its Pixel 4 phone, but the team tells me that it decided not to do this because it didn’t fit the user profile.

“I think that was a very conscious decision we made while designing this product, because for this product we really have the user in mind and we really wanted to focus on the features that were really important to this user. And these are brand new to smart home, they really wanted app control — it seems so basic to us but it’s a massive upgrade for them, right. And all these energy-saving features that come with the thermostat were something that they valued a lot. So we wanted to focus on the features that these users valued for this product,” Desai explained.

Maybe we’ll see Nest do more with this technology in the next iterations of its more expensive thermostats. For now, it feels like a bit of a missed opportunity, though in all fairness, Soli in the Pixel 4 mostly felt like a gimmick and at least the Nest team is putting it to practical use here.

Image Credits: Nest

Like before, Nest promises that it will take only about half an hour to install the new thermostat. The app walks you through the individual steps, which should make the process pretty straightforward, assuming your heating and cooling system follows modern standards.

To control the thermostat remotely, you’ll use the Google Home app, where you’ll also find all of the smart features to help you save more energy.

The new thermostat is now available in the U.S. (for $129.99) and Canada (for $179.99 CAD). In Canada, the trim kit will retail for $19.99 CAD. As the team noted, between various utility rebates and rewards, a lot of users may be able to get theirs for only a few dollars, depending on where they live.

Image Credits: Nest

#artificial-intelligence, #cad, #ecobee, #google, #google-nest, #hardware, #home-automation, #nest, #pixel-4, #smart-thermostat, #tc, #thermostat, #united-states

0

Waymo and TuSimple autonomous trucking leaders on the difficulty of building a highway-safe AI

TuSimple and Waymo are in the lead in the emerging sector of autonomous trucking; TuSimple founder Xiaodi Hou and Waymo trucking head Boris Sofman had an in-depth discussion of their industry and the tech they’re building at TC Mobility 2020. Interestingly, while they’re solving for the same problems, they have very different backgrounds and approaches.

Hou and Sofman started out by talking about why they were pursuing the trucking market in the first place. (Quotes have been lightly edited for clarity.)

“The market is massive; I think in the United States, $700-$800 billion a year is spent on the trucking industry. It’s continuing to grow every single year,” said Sofman, who joined Waymo from Anki last year to lead the effort in freight. “And there’s a huge shortage of drivers today, which is only going to increase over the next period of time. It’s just such a clear need. But it’s not going to be overnight — there’s still a really long tail of challenges that you can’t avoid. So the way we talk about it is the things that are hardest are just different.”

“It’s really the cost and reward analysis, thinking about building the operating system,” said Hou. “The cost is the number of features that you develop, and the reward is basically how many miles are you driving — you charge on a per mile basis. From that cost-reward analysis, trucking is simply the natural way to go for us. The total number of issues that you need to solve is probably 10 times less, but maybe, you know, five times harder.”

“It’s really hard to quantify those numbers, though,” he concluded, “but you get my point.”

The two also discussed the complexity of creating a perceptual framework good enough to drive with.

“Even if you have perfect knowledge of the world, you have to predict what other objects and agents are going to do in that environment, and then make a decision yourself, and the combination of those is very challenging,” said Sofman.

“What’s really helped us is a realization from the car side of the company many, many years ago that in order to help us solve this problem in the easiest way possible, and facilitate the challenges downstream, we had to create our own sensors,” he continued. “And so we have our own lidar, our own radar, our own cameras, and they have incredibly unique properties that were custom designed through five generations of hardware that try to really lean into the kind of most challenging situations that you just can’t avoid on the road.”

Hou explained that while many autonomous systems are descended from the approaches used in the famous DARPA Grand Challenge 15 years ago, TuSimple’s is a little more anthropomorphic.

“I think I’m heavily influenced by my background, which has a tinge of neuroscience. So I’m always thinking about building a machine that can see and think, as humans do,” he said. “In the DARPA challenge, people’s idea would be: Okay, write a dynamic system equation and solve this equation. For me, I’m trying to answer the question of, how do we reconstruct the world? Which is more about understanding the objects, understanding their attributes, even though some of the attributes may not directly contribute to the entire self-driving system.”

“We’re combining all the different, seemingly useless features together, so that we can reconstruct the so-called ‘qualia’ of the perception of the world,” continued Hou. “By doing that we find we have all the ingredients that we need to do whatever missions that we have.”

The two found themselves in disagreement over the idea that due to the major differences between highway driving and street-level driving, there are essentially two distinct problems to be solved.

Hou was of the opinion that “the overlap is rather small. Human society has declared certain types of rules for driving on the highway … this is a much more regulated system. But for local driving there’s actually no rules for interaction … in fact very different implicit social constructs to drive in different areas of the world. These are things that are very hard to model.”

Sofman, on the other hand, felt that while the problems are different, solving one contributes substantially to solving the other: “If you break up the problem into the many, many building blocks of an AV system, there’s a pretty huge leverage where even if you don’t solve the problem 100% it takes away 85%-90% of the complexity. We use the exact same sensors, exact same compute infrastructures, simulation framework; the perception system carries over very largely, even if we have to retrain some of the models. The core of all of our algorithms we’re working to keep the same.”

You can see the rest of that last exchange in the video above. This panel and many more from TC Sessions: Mobility 2020 are available to watch here for Extra Crunch subscribers.

#artificial-intelligence, #autonomous-trucks, #autonomous-vehicles, #boris-sofman, #logistics, #robotics, #startups, #tc, #transportation, #tusimple, #waymo

0