Apple announced a batch of accessibility features at WWDC 2021 that cover a wide variety of needs, among them a few for people who can’t touch or speak to their devices in the ordinary way. With Assistive Touch, Sound Control, and other improvements, these folks have new options for interacting with an iPhone or Apple Watch.
We covered Assistive Touch when it was first announced, but recently got a few more details. This feature lets anyone with an Apple Watch operate it with one hand by means of a variety of gestures. It came about when Apple heard from the community of people with limb differences — whether they’re missing an arm, or unable to use it reliably, or anything else — that as much as they liked the Apple Watch, they were tired of answering calls with their noses.
The research team cooked up a way to reliably detect the gestures of pinching one finger to the thumb, or clenching the hand into a fist, based on how doing them causes the watch to move — it’s not detecting nervous system signals or anything. These gestures, as well as double versions of them, can be set to a variety of quick actions. Among them is opening the “motion cursor,” a little dot that mimics the movements of the user’s wrist.
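Apple hasn't disclosed how the detection actually works beyond that motion-based description, so the following is purely an illustrative sketch: the idea is only that a pinch or a clench moves the watch in measurably different ways, so a classifier can work from motion data alone. Every threshold and name here is invented for illustration.

```python
import statistics

def classify_gesture(accel_magnitudes):
    """Guess a wrist gesture from a short window of accelerometer
    magnitude samples (in g). Returns "clench", "pinch", or None.
    Hypothetical thresholds -- Apple's real system is a trained model."""
    peak = max(accel_magnitudes)
    spread = statistics.pstdev(accel_magnitudes)
    # A full clench tends to jolt the wrist harder, and more variably,
    # than a fingertip pinch does.
    if peak > 2.0 and spread > 0.5:
        return "clench"
    if peak > 1.2:
        return "pinch"
    return None
```

A shipping system would use a trained model over accelerometer, gyroscope, and other sensor streams rather than hand-set thresholds, but the shape of the problem is the same: motion window in, gesture label out.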
Considering how many people don’t have the use of a hand, this could be a really helpful way to get basic messaging, calling, and health-tracking tasks done without needing to resort to voice control.
Speaking of voice, that’s also something not everyone has at their disposal. Many of those who can’t speak fluently, however, can make a bunch of basic sounds, which can carry meaning for those who have learned them — not so much Siri. But a new accessibility option called “Sound Control” lets these sounds be used as voice commands. It lives under Switch Control in the accessibility settings, rather than the audio or voice options, and is set up by adding a sound-based switch.
Image Credits: Apple
The setup menu lets the user choose from a variety of possible sounds: click, cluck, e, eh, k, la, muh, oo, pop, sh, and more. Picking one brings up a quick training process to let the user make sure the system understands the sound correctly, and then it can be set to any of a wide selection of actions, from launching apps to asking commonly spoken questions or invoking other tools.
For those who prefer to interact with their Apple devices through a switch system, the company has a big surprise: Game controllers, once usable only for gaming, now work for general purposes as well. Specifically noted is the amazing Xbox Adaptive Controller, a hub and group of buttons, switches, and other accessories that improves the accessibility of console games. This powerful tool is used by many, and no doubt they will appreciate not having to switch control methods entirely when they’re done with Fortnite and want to listen to a podcast.
Image Credits: Apple
One more interesting capability in iOS that sits at the edge of accessibility is Walking Steadiness. This feature, available to anyone with an iPhone, tracks (as you might guess) the steadiness of the user’s walk. This metric, tracked throughout a day or week, can potentially give real insight into how and when a person’s locomotion is better and worse. It’s based on a bunch of data collected in the Apple Heart and Movement study, including actual falls and the unsteady movement that led to them.
If the user is someone who recently was fitted for a prosthesis, or had foot surgery, or suffers from vertigo, knowing when and why they are at risk of falling can be very important. They may not realize it, but perhaps their movements are less steady towards the end of the day, or after climbing a flight of steps, or after waiting in line for a long time. It could also show steady improvements as they get used to an artificial limb or chronic pain declines.
Exactly how this data may be used by an actual physical therapist or doctor is an open question, but importantly it’s something that can easily be tracked and understood by the users themselves.
Image Credits: Apple
Among Apple’s other assistive features are new languages for Voice Control, improved headphone acoustic accommodations, support for bidirectional hearing aids, and of course the addition of cochlear implants and oxygen tubes for Memoji. As an Apple representative put it, the company doesn’t want to embrace differences just in features, but on the personalization and fun side as well.
You weren’t expecting to make it through this year’s WWDC without some big watchOS news, were you? Apple’s wearable isn’t quite doing iPhone numbers, but Watch has been massively successful for the company, utterly dominating the smartwatch market.
Surprising absolutely no one, the company is taking a more focused approach to mindfulness. Apple’s not ready to kill Calm or Headspace just yet, but the popular Breathe feature is getting a much-needed upgrade with new animations reminding users to reflect and be more mindful.
Also new is respiratory tracking, which thus far had been more of a background feature. It’s being surfaced in the watchOS experience, with tracking over time and additional notifications.
It wouldn’t be a watchOS update without some new faces, of course. Here the company is adding a portrait mode to watch faces, offering more depth for shots set on the watch face. The Watch Photos app is getting a new layout as well, along with the ability to share images from the watch through first-party apps like Mail and Messages.
Fitness+ is getting a bunch of new content, as well, including artist musical spotlights for workouts including Alicia Keys, Lady Gaga and Keith Urban. Tai chi and pilates are also being added to the list of workouts tracked by the wearable.
And last, but certainly not least for many, watchOS is also getting GIF support.
The wearables category of consumer devices—which includes smartwatches, fitness trackers, and augmented reality glasses—shipped more than 100 million units in the first quarter for the first time, according to research firm IDC. Q1 2021 shipments were up 34.4 percent over the same quarter in 2020.
To be clear: wearables have sold that many (and more) units in a quarter before, but never in the first quarter, which tends to be a slow period following a spree of holiday-related buying in Q4.
For the past several years, wearables like the Fitbit Versa have made up one of the fastest-growing categories of personal electronics, but the devices still lag far behind smartphones in terms of total units moved each quarter or year.
The relationship between Spotify and Apple has been…understandably contentious at times. After all, Apple runs the streaming service’s biggest competitor. At the end of the day though, the Apple Watch and Spotify maintain the No. 1 spot in their respective categories by a wide margin. And playing nice ultimately benefits a wide swath of users in that overlapping Venn diagram.
Today Spotify announced that it’s finally bringing to the smartwatch what’s no doubt been one of its most requested features. Starting today, Premium subscribers can download music and podcasts to the wearable for offline listening. That means users will be able to leave their phone at home when they go for a jog.
The new feature works more or less like standard downloading on the phone. Users tap the three-dot menu next to an album, playlist or podcast and select “Download to Apple Watch.” Once a title has downloaded, green arrows will appear next to it. With headphones paired, you’ll be able to listen directly from the watch.
Spotify has already offered the feature on some of the competition, including Samsung’s Galaxy Watch line. The service is also coming to Google Wear OS watches soon, per an announcement at I/O. Apple Music, of course, has offered offline listening on the Watch for a while, as has Pandora. Deezer also beat Spotify to the popular wearable by a matter of days.
This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.
Android 12 gets a fresh new look and some quality of life features
We’ve known Android 12 was on its way for months, but today was our first real look at the next big change for the world’s most popular operating system. A new look, called Material You (yes), focuses on users, apps, and things like time of day or weather to change the UI’s colors and other aspects dynamically. Some security features like new camera and microphone use indicators are coming, as well as some “private compute core” features that use AI processes on your phone to customize replies and notifications. There’s a beta out today for the adventurous!
Wow, Android powers 3 billion devices now
Subhed says it all (but read more here). Up from 2 billion in 2017.
Smart Canvas smushes Docs, productivity, and video calls together
Millions of people and businesses use Google’s suite of productivity and collaboration tools, but the company felt it would be better if they weren’t so isolated. Now with Smart Canvas you can have a video call as you work on a shared doc together and bring in information and content from your Drive and elsewhere. Looks complicated, but potentially convenient.
AI conversations get more conversational with LaMDA
It’s a little too easy to stump AIs if you go off script, asking something in a way that to you seems normal but to the language model is totally incomprehensible. Google’s LaMDA is a new natural language processing technique that makes conversations with AI models more resilient to unusual or unexpected queries, making them feel more like talking with a real person and less like using a voice interface for a search function. Google demonstrated it by showing conversations with anthropomorphized versions of Pluto and a paper airplane. And yes, it was exactly as weird as it sounds.
Google built a futuristic 3D video calling booth
One of the most surprising things at the keynote had to be Project Starline, a high-tech 3D video call setup that uses Google’s previous research and Lytro DNA to show realistic 3D avatars of people on both sides of the system. It’s still experimental but looks very promising.
Wear OS gets a revamp and lots of health-focused apps
Image Credits: Google
Few people want to watch a movie on their smartwatch, but lots of people like to use it to track their steps, meditation, and other health-related practices. Wear OS is getting an infusion of Fitbit DNA, with integrated health-tracking features and a lot of third-party apps like Calm and Flo.
Samsung and Google announce a unified smartwatch platform
These two mobile giants have been fast friends in the phone world for years, but when it comes to wearables, they’ve remained rivals. In the face of Apple’s utter dominance in the smartwatch space, however, the two have put aside their differences and announced they’ll work on a “unified platform” so developers can make apps that work on both Tizen and Wear OS.
And they’re working together on foldables too
Apparently Google and Samsung realized that no one is going to buy foldable devices unless they do some really cool things, and that collaboration is the best way forward there. So the two companies will also be working together to improve how folding screens interact with Android.
Android TV hits 80 million devices and adds phone remote
Image Credits: Google
The smart TV space is a competitive one, and after a few false starts Google has really made it happen with Android TV, which the company announced had reached 80 million monthly active devices — putting it, Roku, and Amazon (the latter two with around 50 million monthly active accounts) all in the same league. The company also showed off a powerful new phone-based remote app that will (among other things) make putting in passwords way better than using the d-pad on the clicker. Developers will be glad to hear there’s a new Google TV emulator and Firebase Test Lab will have Android TV support.
Your Android phone is now (also) your car key
Well, assuming you have a really new Android device with a UWB chip in it. Google is working with BMW first, and other automakers soon most likely, to make a new method for unlocking the car when you get near it, or exchanging basic commands without the use of a fob or Bluetooth. Why not Bluetooth you ask? Well, Bluetooth is old. UWB is new.
Vertex collects machine learning development tools in one place
Google and its sibling companies are both leaders in AI research and popular platforms for others to do their own AI work. But its machine learning development tools have been a bit scattershot — useful but disconnected. Vertex is a new development platform for enterprise AI that puts many of these tools in one place and integrates closely with optional services and standards.
There’s a new generation of Google’s custom AI chips
Google does a lot of machine learning stuff. Like, a LOT a lot. So they are constantly working to make better, more efficient computing hardware to handle the massive processing load these AI systems create. TPUv4 is the latest, twice as fast as the old ones, and will soon be packaged into 4,096-strong pods. Why 4,096 and not an even 4,000? The same reason any other number exists in computing: powers of 2.
And they’re powering some new Photos features including one that’s horrifying
NO THANK YOU
Google Photos is a great service, and the company is trying to leverage the huge library of shots most users have to find patterns like “selfies with the family on the couch” and “traveling with my lucky hat” as fun ways to dive back into the archives. Great! But it’s also taking two photos captured a second apart and having an AI hallucinate what comes between them, leading to a truly weird-looking form of motion that shoots deep, deep into the uncanny valley, from which hopefully it shall never emerge.
Forget your password? Googlebot to the rescue
Google’s “AI makes a hair appointment for you” service Duplex didn’t exactly set the world on fire, but the company has found a new way to apply it. If you forget your password, Duplex will automatically fill in your old password, pick a new one and let you copy it before submitting it to the site, all by interacting with the website’s normal reset interface. It’s only going to work on Twitter and a handful of other sites via Chrome for now, but hey, if it happens to you a lot, maybe it’ll save you some trouble.
Enter the Shopping Graph
Image Credits: Google I/O 2021
The aged among our readers may remember Froogle, Google’s ill-fated shopping interface. Well, it’s back… kind of. The plan is to include lots of product information, from price to star rating, availability and other info, right in the Google interface when you search for something. It sucks up this information from retail sites, including whether you have something in your cart there. How all this benefits anyone more than Google is hard to imagine, but naturally they’re positioning it as wins all around. Especially for new partner Shopify. (Me, I use DuckDuckGo.)
Flutter cross-platform devkit gets an update
A lot of developers have embraced Google’s Flutter cross-platform UI toolkit. The latest version, announced today, adds some safety settings, performance improvements, and workflow updates. There’s lots more coming, too.
Firebase gets an update too
Popular developer platform Firebase got a bunch of new and updated features as well. Remote Config gets a nice update allowing developers to customize the app experience to individual user types, and App Check provides a basic level of security against external threats. There’s plenty here for devs to chew on.
The next version of Android Studio is Arctic Fox
Image Credits: Google
The beta for the next version of Google’s Android Studio environment is coming soon, and it’s called Arctic Fox. It’s got a brand new UI building toolkit called Jetpack Compose, and a bunch of accessibility testing built in to help developers make their apps more accessible to people with disabilities. Connecting to devices to test on them should be way easier now too. Oh, and there’s going to be a version of Android Studio for Apple Silicon.
With a long-standing history of working together on the mobile side, it’s always been a bit of a surprise that Samsung hasn’t had much patience for Google’s wearables play. The hardware giant had flirted with Android Wear in the past, but for the last several years, it’s been invested in building out its own open-source operating system, Tizen.
Today, both companies announced a partnership featuring a “unified platform” between the two sometime competitors. The goal of the deal is to essentially create a way for devs to build apps for both Wear OS and Tizen at once. The deal makes sense from that perspective. Third-party apps have been something of a sticking point for both companies.
Even more to the point, it’s an opportunity for two smaller players in the space to join forces and take on Apple, which has been utterly dominant in the smartwatch category, more or less since the first Apple Watch arrived.
Wear OS has already gone through a number of cycles, including a big rebrand from Android Wear a while back, but nothing has really stuck over the years, leaving the wearable operating system as something of an also-ran. For now, at least, this is far from a full-throated embrace of Wear OS on Samsung’s part and appears to be something more akin to an “enemy of my enemy” situation. Along with developing a unified API, the companies are joining forces to pluck the best from each operating system, including longer battery life — perhaps the largest hurdle facing smartwatches at the moment.
“We know that health and wellness are at the forefront of consumers’ minds, and we’re excited to continue building the industry-leading health experience on our new unified platform with Google,” Samsung said in a blog post. “As our consumers turn to wearable technology to monitor their wellbeing, we’re meeting these needs head on. By creating world-class health technology, we hope to elevate how users approach their wellbeing, and enable them to make positive changes in their everyday lives.”
Samsung added that the next version of the Galaxy Watch will be the first to leverage this partnership, but offered little additional information on the hardware front. I’d anticipate big news on the Wear OS front in the next year. If nothing else, Google’s acquisition of Fitbit is a sign that it’s ready to go for broke with the platform.
Of the many frustrations of having a severe motor impairment, the difficulty of communicating must surely be among the worst. The tech world has not offered much succor to those affected by things like locked-in syndrome, ALS, and severe strokes, but startup Cognixion aims to with a novel form of brain monitoring that, combined with a modern interface, could make speaking and interaction far simpler and faster.
The company’s One headset tracks brain activity closely in such a way that the wearer can direct a cursor — reflected on a visor like a heads-up display — in multiple directions or select from various menus and options. No physical movement is needed, and with the help of modern voice interfaces like Alexa, the user can not only communicate efficiently but freely access all kinds of information and content most people take for granted.
But it’s not a miracle machine, and it isn’t a silver bullet. Here’s how it got started.
Overhauling decades-old brain tech
Everyone with a motor impairment has different needs and capabilities, and there are a variety of assistive technologies that cater to many of these needs. But many of these techs and interfaces are years or decades old — medical equipment that hasn’t been updated for an era of smartphones and high-speed mobile connections.
Some of the most dated interfaces, unfortunately, are those used by people with the most serious limitations: those whose movements are limited to their heads, faces, eyes — or even a single eyelid, like Jean-Dominique Bauby, the famous author of “The Diving Bell and the Butterfly.”
One of the tools in the toolbox is the electroencephalogram, or EEG, which involves detecting activity in the brain via patches on the scalp that record electrical signals. But while they’re useful in medicine and research in many ways, EEGs are noisy and imprecise — more for finding which areas of the brain are active than, say, which sub-region of the sensory cortex or the like. And of course you have to wear a shower cap wired with electrodes (often greasy with conductive gel) — it’s not the kind of thing anyone wants to do for more than an hour, let alone all day every day.
Yet even among those with the most profound physical disabilities, cognition is often unimpaired — as indeed EEG studies have helped demonstrate. It made Andreas Forsland, co-founder and CEO of Cognixion, curious about further possibilities for the venerable technology: “Could a brain-computer interface using EEG be a viable communication system?”
He first used EEG for assistive purposes in a research study some five years ago. They were looking into alternative methods of letting a person control an on-screen cursor, among them an accelerometer for detecting head movements, and tried integrating EEG readings as another signal. But it was far from a breakthrough.
A modern lab with an EEG cap wired to a receiver and laptop – this is an example of how EEG is commonly used.
He ran down the difficulties: “With a read-only system, the way EEG is used today is no good; other headsets have slow sample rates and they’re not accurate enough for a real-time interface. The best BCIs are in a lab, connected to wet electrodes — it’s messy, it’s really a non-starter. So how do we replicate that with dry, passive electrodes? We’re trying to solve some very hard engineering problems here.”
The limitations, Forsland and his colleagues found, were not so much with the EEG itself as with the way it was carried out. This type of brain monitoring is meant for diagnosis and study, not real-time feedback. It would be like taking a tractor to a drag race. Not only do EEGs often work with a slow, thorough check of multiple regions of the brain that may last several seconds, but the signal it produces is analyzed by dated statistical methods. So Cognixion started by questioning both practices.
Improving the speed of the scan is more complicated than overclocking the sensors or something. Activity in the brain must be inferred by collecting a certain amount of data. But that data is collected passively, so Forsland tried bringing an active element into it: a rhythmic electric stimulation that is in a way reflected by the brain region, but changed slightly depending on its state — almost like echolocation.
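Cognixion hasn’t published its algorithm, but the “echolocation” description closely resembles steady-state visually evoked potential (SSVEP) interfaces, where each on-screen option flickers at a known frequency and the decoder looks for which frequency dominates the EEG response. The following is a toy sketch of that decoding step under that assumption; the signal is synthetic, and a real system would add filtering and a learned model on top.

```python
import math

def power_at(signal, freq_hz, sample_hz):
    """Power of a single frequency component, via a one-bin DFT."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_hz)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_hz)
             for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

def decode_selection(signal, candidate_freqs, sample_hz=250):
    """Return whichever stimulation frequency evokes the strongest
    response in a (single-channel, already-averaged) EEG window."""
    return max(candidate_freqs, key=lambda f: power_at(signal, f, sample_hz))
```

With three options flickering at, say, 8, 10, and 12 Hz, a one-second window that echoes the 10 Hz stimulus decodes to the second option — which is why the approach can work without any physical movement at all.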
The Cognixion One headset with its dry EEG terminals visible.
They detect these signals with a custom set of six EEG channels in the visual cortex area (up and around the back of your head), and use a machine learning model to interpret the incoming data. Running a convolutional neural network locally on an iPhone — something that wasn’t really possible a couple years ago — the system can not only tease out a signal in short order but make accurate predictions, making for faster and smoother interactions.
The result is sub-second latency with 95-100 percent accuracy in a wireless headset powered by a mobile phone. “The speed, accuracy and reliability are getting to commercial levels — we can match the best in class of the current paradigm of EEGs,” said Forsland.
Dr. William Goldie, a clinical neurologist who has used and studied EEGs and other brain monitoring techniques for decades (and who has been voluntarily helping Cognixion develop and test the headset), offered a positive evaluation of the technology.
“There’s absolutely evidence that brainwave activity responds to thinking patterns in predictable ways,” he noted. This type of stimulation and response was studied years ago. “It was fascinating, but back then it was sort of in the mystery magic world. Now it’s resurfacing with these special techniques and the computerization we have these days. To me it’s an area that’s opening up in a manner that I think clinically could be dramatically effective.”
BCI, meet UI
The first thing Forsland told me was “We’re a UI company.” And indeed even such a step forward in neural interfaces as he later described means little if it can’t be applied to the problem at hand: helping people with severe motor impairment to express themselves quickly and easily.
Sad to say, it’s not hard to imagine improving on the “competition,” things like puff-and-blow tubes and switches that let users laboriously move a cursor right, right a little more, up, up a little more, then click: a letter! Gaze detection is of course a big improvement over this, but it’s not always an option (eyes don’t always work as well as one would like) and the best eye-tracking solutions (like a Tobii Dynavox tablet) aren’t portable.
Why shouldn’t these interfaces be as modern and fluid as any other? The team set about making a UI with this and the capabilities of their next-generation EEG in mind.
Image Credits: Cognixion
Their solution takes bits from the old paradigm and combines them with modern virtual assistants and a radial design that prioritizes quick responses and common needs. It all runs in an app on an iPhone, the display of which is reflected in a visor, acting as a HUD and outward-facing display.
In easy reach of, not to say a single thought but at least a moment’s concentration or a tilt of the head, are everyday questions and responses — yes, no, thank you, etc. Then there are slots to put prepared speech into — names, menu orders, and so on. And then there’s a keyboard with word- and sentence-level prediction that allows common words to be popped in without spelling them out.
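The word-prediction piece is the most familiar part of that stack; stripped to its core, it’s a ranked prefix lookup. The vocabulary and counts below are invented for illustration, and a real keyboard would also condition on sentence context rather than raw frequency alone.

```python
def make_predictor(word_counts):
    """Build a predictor from a word -> usage-count mapping.
    Returns a function mapping a typed prefix to the top-k matches,
    most frequently used first (ties broken alphabetically)."""
    def predict(prefix, k=3):
        matches = [w for w in word_counts if w.startswith(prefix)]
        matches.sort(key=lambda w: (-word_counts[w], w))
        return matches[:k]
    return predict
```

For a user making selections one effortful gesture at a time, surfacing “the” after typing “th” saves real work — every avoided keystroke matters far more here than on a phone keyboard.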
“We’ve tested the system with people who rely on switches, who might take 30 minutes to make 2 selections. We put the headset on a person with cerebral palsy, and she typed out her name and hit play in 2 minutes,” Forsland said. “It was ridiculous, everyone was crying.”
Goldie noted that there’s something of a learning curve. “When I put it on, I found that it would recognize patterns and follow through on them, but it also sort of taught patterns to me. You’re training the system, and it’s training you — it’s a feedback loop.”
“I can be the loudest person in the room”
One person who has found it extremely useful is Chris Benedict, a DJ, public speaker, and disability advocate who himself has dyskinetic cerebral palsy. It limits his movements and ability to speak, but it doesn’t stop him from spinning (digital) records at various engagements, or from explaining his experience with Cognixion’s One headset over email. (And you can see him demonstrating it in person in the video above.)
Image Credits: Cognixion
“Even though it’s not a tool that I’d need all the time it’s definitely helpful in aiding my communication,” he told me. “Especially when I need to respond quickly or am somewhere that is noisy, which happens often when you are a DJ. If I wear it with a Bluetooth speaker I can be the loudest person in the room.” (He always has a speaker on hand, since “you never know when you might need some music.”)
The benefits offered by the headset give some idea of what is lacking from existing assistive technology (and what many people take for granted).
“I can use it to communicate, but at the same time I can make eye contact with the person I’m talking to, because of the visor. I don’t have to stare at a screen between me and someone else. This really helps me connect with people,” Benedict explained.
“Because it’s a headset I don’t have to worry about getting in and out of places, there is no extra bulk added to my chair that I have to worry about getting damaged in a doorway. The headset is balanced too, so it doesn’t make my head lean back or forward or weigh my neck down,” he continued. “When I set it up to use the first time it had me calibrate, and it measured my personal range of motion so the keyboard and choices fit on the screen specifically for me. It can also be recalibrated at any time, which is important because not every day is my range of motion the same.”
Alexa, which has been extremely helpful to people with a variety of disabilities due to its low cost and wide range of compatible devices, is also part of the Cognixion interface, something Benedict appreciates, having himself adopted the system for smart home and other purposes. “With other systems this isn’t something you can do, or if it is an option, it’s really complicated,” he said.
As Benedict demonstrates, there are people for whom a device like Cognixion’s makes a lot of sense, and the hope is it will be embraced as part of the necessarily diverse ecosystem of assistive technology.
Forsland said that the company is working closely with the community, from users to clinical advisors like Goldie and other specialists, like speech therapists, to make the One headset as good as it can be. But the hurdle, as with so many devices in this class, is how to actually put it on people’s heads — financially and logistically speaking.
Cognixion is applying for FDA clearance to get the cost of the headset — which, being powered by a phone, is not as high as it would be with an integrated screen and processor — covered by insurance. But in the meantime the company is working with clinical and corporate labs that are doing neurological and psychological research. Places where you might find an ordinary, cumbersome EEG setup, in other words.
The company has raised funding and is looking for more (hardware development and medical pursuits don’t come cheap), and has also collected a number of grants.
The One headset may still be some years away from wider use (the FDA is never in a hurry), but that allows the company time to refine the device and include new advances. Unlike many other assistive devices, for example a switch or joystick, this one is largely software-limited, meaning better algorithms and UI work will significantly improve it. While many wait for companies like Neuralink to create a brain-computer interface for the modern era, Cognixion has already done so for a group of people who have much more to gain from it.
You can learn more about the Cognixion One headset and sign up to receive the latest at its site here.
There’s a stark contrast between Oura’s deck and the others we pored over on Extra Crunch Live. The slides CEO Harpreet Rai brought to the event were the clear output of a more mature and confident company seeking out its Series B. It’s a company with a focus, aware of where it wants the product to go and do (and it went there, announcing a massive followup round on Tuesday).
Then there’s that giant image of the Duke and Duchess of Sussex, with the company’s smart ring adorning Harry’s right hand. From there, it’s a parade of celebrity faces: Will Smith, Lance Armstrong, Bill Gates, Arianna Huffington and Seth Rogen, to name a few.
It’s clearly been a wild half-dozen years since the company was founded. Rai joined up in 2018, not long before the company embarked on its $28 million Series B. Forerunner General Partner Eurie Kim got on board during the round.
“[I] enthusiastically took the meeting and Harpreet shared his story and the story of Oura. The deck is what we talked through,” says Kim. “Because I was a consumer, it was just a no-brainer that I knew what he was trying to build. So we were very excited to lead the round.”
Kim and Rai joined us on Extra Crunch Live to discuss the process of taking Oura to the next level — and beyond — as the product found a second (or third) life during the pandemic through partnerships with sports leagues like the NBA. And as we’re wont to do, we asked the pair to take a look at a handful of user-submitted pitch decks. If you’d like your deck to be reviewed by experienced founders and investors on a future episode, you can submit it here.
On the hardness of hardware
By the time Oura sought out its Series B, the startup had already progressed pretty far. Kim compares the first-generation product (circa 2016 — predating both Rai and Kim’s time with the company) to a “Power Rangers ring.” You’ve got to start somewhere, of course — and if nothing else, the admittedly bulky original edition of the product served as a powerful proof of concept.
It’s been a wild couple of years for Oura. Last year, in particular, proved to be a major driver for the wearable fitness manufacturer. With the pandemic bringing professional sports to a screeching halt in 2020, a number of major leagues have adopted the ring, including the NBA, WNBA, UFC and NASCAR.
The company has also been making a major push into health research courtesy of UCSF, which has published peer-reviewed studies around the ring’s temperature monitor. That feature in particular has made it a big draw for the aforementioned leagues, as temperature spikes could point to larger issues, including the early stages of COVID-19.
Today the company is announcing a $100 million Series C. The round, led by The Chernin Group and Elysian Park (the Dodgers’ investment arm), brings the wearable company’s total funding up to $148.3 million. New investors include Temasek, JAZZ Venture Partners and Eisai, joining existing investors Forerunner Ventures, Square, MSD Capital, Marc Benioff, Lifeline Ventures, Metaplanet Holdings and Next Ventures.
The company initially set itself apart with its form factor, joining a crowded field that largely revolved around the wrist. Clearly, however, it’s come into its own over the last few years. To date, it’s sold more than 500,000 rings.
“The wearables industry is transitioning from activity trackers to health platforms that can improve people’s lives,” CEO Harpreet Singh Rai said in a press release tied to the news. “Oura focused first on sleep because it’s a daily habit, and lack of sleep has been linked to worsening health conditions including diabetes, cardiac disease, Alzheimer’s, cancer, poor mental health, and more.”
The company says the round will go toward R&D (both hardware and software development) and hiring, including additional marketing and customer experience. The announcement also includes a number of key hires: head of science, Shyamal Patel; site leader, Tommi Heinonen; and Daniel Welch, who has been promoted to CFO.
“This year has shined a spotlight on gaps in our healthcare industry, and the increasing need for each of us to take control over our own health,” Forerunner Managing Director Eurie Kim said in the release. “Oura is emerging as the trusted leader and community in the space by empowering people with personalized data that provides actionable insights for health improvement.”
Personalized nutrition startup Zoe — named not for a person but after the Greek word for ‘life’ — has topped up its Series B round with $20M, bringing the total raised to $53M.
The latest close of the B round was led by Ahren Innovation Capital, which the startup notes counts two Nobel laureates as science partners. Also participating are two former American football players, Eli Manning and Ositadimma “Osi” Umenyiora; Boston, US-based seed fund Accomplice; healthcare-focused VC firm THVC and early stage European VC, Daphni.
The U.K.- and U.S.-based startup was founded back in 2017 but operated in stealth mode for three years, while it was conducting research into the microbiome — working with scientists from Massachusetts General Hospital, Stanford Medicine, Harvard T.H. Chan School of Public Health, and King’s College London.
One of the founders, professor Tim Spector of King’s College — who is also the author of a number of popular science books focused on food — became interested in the role of food (generally) and the microbiome (in particular) on overall health after spending decades researching twins to try to understand the role of genetics (nature) vs nurture (environmental and lifestyle factors) on human health.
Zoe used data from two large-scale microbiome studies to build its first algorithm which it began commercializing last September — launching its first product into the U.S. market: A home testing kit that enables program participants to learn how their body responds to different foods and get personalized nutrition advice.
The program costs around $360 (which Zoe takes in six instalments) and requires participants to (self) administer a number of tests so that it can analyze their biology, gleaning information about their metabolic and gut health by looking at changes in blood lipids, blood sugar levels and the types of bacteria in their gut.
Zoe uses big data and machine learning to come up with predictive insights on how people will respond to different foods so that it can offer individuals guided advice on what and how to eat, with the goal of improving gut health and reducing inflammatory responses caused by diet.
The combination of biological responses it analyzes sets it apart from other personalized nutrition startups with products focused on measuring one element (such as blood sugar) — is the claim.
But, to be clear, Zoe’s first product is not a regulated medical device — and its FAQ clearly states that it does not offer medical diagnosis or treatment for specific conditions. Instead it says only that it’s “a tool that is meant for general wellness purposes only”. So — for now — users have to take it on trust that the nutrition advice it dishes up is actually helpful for them.
The field of scientific research into the microbiome is undoubtedly early — Zoe’s co-founder states that very clearly when we talk. So, as is often the case when startups seek to use data and AI to generate valuable personalized predictions, there’s a strong component here of early adopters furthering Zoe’s research by contributing their data — potentially ahead of the individual efficacy they’re paying for, given how much is still unknown about how what we eat affects our health.
Those willing to take a punt (and pay up) get an individual report detailing their biological responses to specific foods, comparing them to thousands of others. The startup also provides them with individualized ‘Zoe’ scores for specific foods in order to support meal planning that’s touted as healthier for them.
“Reduce your dietary inflammation and improve gut health with a 4 week plan tailored to your unique biology and life,” runs the blurb on Zoe’s website. “Built around your food scores, our app will teach you how to make smart swaps, week by week.”
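As a purely hypothetical illustration of how such a score-based plan might work (the food names and scores below are invented, and Zoe hasn’t published its scoring model — the only detail taken from the piece is that foods get scores and users aim for around 75 on most days), the nudge logic could look something like this:

```python
# Hypothetical food scores on a 0-100 scale (invented for illustration).
FOOD_SCORES = {
    "oatmeal": 68,
    "kale salad": 92,
    "ice cream": 21,
    "wholewheat pasta": 74,
}

def daily_average(meals):
    """Average score across the day's meals."""
    scores = [FOOD_SCORES[m] for m in meals]
    return sum(scores) / len(scores)

def suggest_swap(meals, target=75):
    """If the day's average falls short of the target, suggest swapping the
    lowest-scoring meal for the highest-scoring known food. Returns None if
    the day already meets the target — no food is 'off limits'."""
    if daily_average(meals) >= target:
        return None
    worst = min(meals, key=lambda m: FOOD_SCORES[m])
    best_alternative = max(FOOD_SCORES, key=FOOD_SCORES.get)
    return (worst, best_alternative)
```

The point of the sketch is the framing Zoe’s marketing leans on: rather than banning foods, a score average leaves room for a low-scoring treat as long as the day balances out.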
The marketing also claims no food is “off limits” — implying there’s a difference between Zoe’s custom food scores and (weight-loss focused) diets that perhaps require people to cut out a food group (or groups) entirely.
“Our aim is to empower you with the information and tools you need to make the best decisions for your body,” is Zoe’s smooth claim.
The underlying premise is that each person’s biology responds differently to different foods. Or, to put it another way, while we all most likely know at least one person who stays rake-thin and (seemingly) healthy regardless of what (or even how much) they eat, if we ate the same diet we’d probably expect much less pleasing results.
“What we’re able to start scientifically putting some evidence behind is something that people have talked about for a long time,” says co-founder George Hadjigeorgiou. “It’s early [for scientific research into the microbiome] but we have shown now to the world that even twins have different gut microbiomes, we can change our gut microbiomes through diet, lifestyle and how we live — and also that there are associations around particular [gut] bacteria and foods and a way to improve them which people can actually do through our product.”
Users of Zoe’s first product need to be willing (and able) to get pretty involved with their own biology — collecting stool samples, performing finger prick tests and wearing a blood glucose monitor to feed in data so it can analyze how their body responds to different foods and offer up personalized nutrition advice.
Another component of its study of biological responses to food has involved thousands of people eating “special scientific muffins”, which it makes to standardized recipes, so it can benchmark and compare nutritional responses to a particular blend of calories, carbohydrate, fat, and protein.
While eating muffins for science sounds pretty fine, the level of intervention required to make use of Zoe’s first at-home test kit product is unlikely to appeal to those with only a casual interest in improving their nutrition.
Hadjigeorgiou readily agrees the program, as it is now, is for those with a particular problem to solve that can be linked to diet/nutrition (whether obesity, high cholesterol or a disease like type 2 diabetes, and so on). But he says Zoe’s goal is to be able to open up access to personalized nutrition advice much more widely as it keeps gathering more data and insights.
“The idea is, as always, we start with a focused set of people with problems to solve who we believe will have a life-changing experience,” he tells TechCrunch. “At this point we are not trying to create a product for everyone — and we understand that that has limitations in terms of how much we scale in the beginning. Although even still within this focused group of people I can assure you there’s tonnes of people!
“But absolutely the whole idea is that after we get a first [set of users]… then with more data and with more experience we can simplify and start making this simpler and more accessible — both in terms of its simplicity and also its price. So more and more people. Because at the end of the day everyone has this right to be able to optimize and understand and be in control — and we want to make that available to everyone.
“Regardless of background and regardless of socio-economic status. And, in fact, many of the people who have the biggest problems around health etc are the ones who have maybe less means and ability to do that.”
Zoe isn’t disclosing how many early users it’s onboarded so far but Hadjigeorgiou says demand is high (it’s currently operating a wait-list for new sign ups).
He also touts promising early results from an interim trial with its first users — saying participants experienced more energy (90%), felt less hunger (80%) and lost an average of 11 pounds after three months of following their AI-aided, personalized nutrition plan. Albeit, without data on how many people took part in the trial, it’s not possible to quantify the value of those metrics.
The extra Series B funding will be used to accelerate the rollout of the program, with a U.K. launch planned for this year — and other geographies on the cards for 2022. Spending will also go on continued recruitment in engineering and science, it says.
Zoe already grabbed some eyeballs last year, as the coronavirus pandemic hit the West, when it launched a COVID-19 symptom self-reporting app. It has used that data to help scientists and policy makers understand how the virus affects people.
The Zoe COVID-19 app has had some 5M users over the last year, per Hadjigeorgiou — who points to that (not-for-profit) effort as an example of the kind of transformative intervention the company hopes to drive in the nutrition space down the line.
“Overnight we got millions and millions of people contributing to help uncover new insights around science around COVID-19,” he says, highlighting that it’s been able to publish a number of research papers based on data contributed by app users. “For example the lack of smell and taste… was something that we first [were able to prove] scientifically, and then it became — because of that — an official symptom in the list of the government in the U.K.
“So that was a great example how through the participation of people — in a very, very fast way, which we couldn’t predict when we launched it — we managed to have a big impact.”
Returning to diet, aren’t there some pretty simple ‘rules of thumb’ that anyone can apply to eat more healthily — i.e. without the need to shell out for a bespoke nutrition plan? Basic stuff like eat your greens, avoid processed foods and cut down (or out) sugar?
“There are definitely rules of thumb,” Hadjigeorgiou agrees. “We’ll be crazy to say they’re not. I think it all comes back to the point that although there are rules of thumb and over time — and also through our research, for example — they can become better, the fact of the matter is that most people are becoming less and less healthy. And the fact of the matter is that life is messy and people do not eat even according to these rules of thumb so I think part of the challenge is… [to] educate and empower people for their messy lives and their lifestyle to actually make better choices and apply them in a way that’s sustainable and motivating so they can be healthier.
“And that’s what we’re finding with our customers. We are helping them to make these choices in an empowering way — they don’t need to count calories, they don’t need to restrict themselves through a Keto [diet] regime or something like that. We basically empower them to understand this is the impact food has on your body — real time, how your blood sugar levels change, how your bacteria change, how your blood fat levels change. And through that empowerment through insight then we say hey, now we’ll give you this course, it’s very simple, it’s like a game — and we’ll give you all these tools to combine different foods, make foods work for you. No food is off limits — but try to eat most days a 75 score [based on the food points Zoe’s app assigns].
“In that very empowering way we see people get very excited, they see a fun game that is also impacting their gut and metabolism and they start feeling these amazing effects — in terms of less hunger, more energy, losing weight and over time as well evolving their health. That’s why they say it’s life changing as well.”
Gamifying research for the goal of a greater good? To the average person that surely sounds more appetizing than ‘eat your greens’.
Though, as Hadjigeorgiou concedes, research in the field of the microbiome — where Zoe’s commercial interests and research USP lie — is “early”. Which means that gathering more data to do more research will remain a key component of the business for the foreseeable future. And with so much still to be understood about the complex interactions between food, exercise and other lifestyle factors and human health, the mission is indeed massive.
In the meanwhile, Zoe will be taking it one suggestive nudge at a time.
“Sugar is bad, kale’s great but the whole kind of magic happens in the middle,” Hadjigeorgiou goes on. “Is oatmeal good for you? Is rice good for you? Is wholewheat pasta good for you? How do you combine wholewheat pasta and butter? How much do you have? This is where basically most of our life happens.
“Because people don’t eat ice-cream the whole day and people don’t eat kale the whole day. They eat all these other foods in the middle and that’s where the magic is — knowing how much to have, how to combine them to make it better, how to combine it with exercise to make it better? How to eat a food that doesn’t dip your sugar levels three hours after you eat it, which causes hunger for you. These are all the things we’re able to predict and present in a simple and compelling way through a score system to people — and in turn help them [understand their] metabolic response to food.”
There are a lot of companies out there making robotic exoskeletons. In fact, it’s one of the more active categories in the space — and for good reason. These sorts of technologies have the ability to profoundly impact the future of how people work, move and rehabilitate.
The category also houses a surprisingly broad range of solutions, with something like the sci-fi Sarcos at one end and Roam on the other. Roam’s solution is really putting the wearable back in wearable robotics. Specifically, the company makes assistive devices out of fabrics, rather than metal or plastic.
Ultimately, that means the loss of some of the strength of more industrial solutions, but it also means they’re more suited for everyday use. That’s precisely why something like the robotic smart knee orthosis makes a lot of sense. The product, which was recently cleared by the FDA as a Class I medical device, uses AI to sense the wearer’s movements and adjust its assistance accordingly.
Image Credits: Roam Robotics
“Roam is focused on a massively underserved market. More than 20% of the global population is limited by their body’s mobility, and as medical advancements help people live longer that number is only going to increase,” co-founder and CEO Tim Swift said in a release tied to the news. “Our approach to wearable robotics works seamlessly with the human body to help people lead healthier, happier and more active lives, unhindered by physical limitations.”
The product, which joins the company’s ski- and military-focused offerings, sports embedded sensors that can detect things like movement up and down stairs and standing up from a seated position. It utilizes a power source and air compressor to create motion to assist in movement.
The device is up for preorder and starts shipping later this summer.
It’s been a strange few years for Fitbit. After defining the fitness tracking space, the company was a bit late to the smartwatch trend, but was still able to ride that wave to a rebound. But while watches have received most of the press the now Google-owned company has garnered in recent years, bands still comprise a substantial part of its business.
Today Fitbit announced the arrival of the Luxe. It’s a weird product. There’s certainly a market for it, but it’s hard to say how much of a niche we’re talking about here. The company called its target demo, “a unique set of buyers whose needs weren’t being met.” Specifically, the product is a “fashion-forward” tracker for people looking for something a bit nicer than a plastic band to wear out and about.
Frankly, it’s hard not to see some reflections of Misfit’s take on the category. Perhaps the Fossil-owned company was ahead of its time. At $149, the device is priced between the Charge and the Versa, but decidedly closer to the former. That is to say, it’s pricey in the grand scheme of fitness trackers, but more or less in line with other Fitbit products.
It is, indeed, a nice-looking tracker, as far as these things go, featuring a color touchscreen surrounded by a stainless steel case. That’s coupled with a broad range of accessories, ranging from leather to gold-colored stainless steel.
Here’s Fitbit co-founder and GM (under Google) James Park:
Over the past year, we’ve had to think differently about our health – from keeping an eye out for possible COVID-19 symptoms to managing the ongoing stress and anxiety of today’s world. Even though we are starting to see positive changes, it has never been more important to manage your holistic health. That’s why we’ve been resolute in introducing products to support you in staying mentally well and physically active. We’ve made major technological advancements with Luxe, creating a smaller, slimmer, beautifully designed tracker packed with advanced features – some that were previously only available with our smartwatches – making these tools accessible to even more people around the globe.
Certainly physical and mental health have been top of mind over the past year, even as step counts have dramatically plummeted. The device sports the usual array of Fitbit sensors, tracking activity, sleep and stress. It also works with a number of different mindfulness/meditation apps, including the company’s recently announced partnership with Deepak Chopra.
The band is up for preorder starting today and starts shipping in the spring.
Fitbit has just announced its first fashion-focused, bangle-style tracker—the Fitbit Luxe. True to its luxurious name, the stainless-steel Luxe will come in a $200 special edition, styled with gorjana jewelry as its band (coming in June), but the $150 silicone band version is available for pre-order now.
Both will come with six months of the Fitbit Premium membership (usually $10 a month or $80 per year), which affords users some guided fitness programs, over 200 workout videos, deeper sleep analysis, about 60 nutrition articles and recipes, and other resources to learn about and improve health and wellness. Of course, getting people healthier has always been the name of the game for Fitbit, so with the Luxe, the company is attempting to strike a better balance between style, price, and casual activity tracking.
The Fitbit Luxe is the company’s jewelry-inspired fitness tracker.
Sporting a color touchscreen, the Luxe is the company’s first fitness band, not smartwatch, to add this bit of flair and functionality. You can swipe through your latest activity metrics, notifications from your phone (you can receive texts, calls, emails, etc. but cannot reply to them on the band), stress stats, menstrual cycle information, or do guided breath work and start tracking a workout.
It’s been about a year and a half since Amazon released the first Echo Buds. I reviewed them when they arrived, and they were, I don’t know, fine, I guess. They were a bit on the cheap side, facing some stiff competition in the category and, honestly, the idea of wearing Alexa on my head still isn’t super exciting to me.
But for a first attempt at the space, they weren’t bad. And now the company’s giving it a second go, with some tweaks to the original formula. Top of the list is a redesign that shrinks them 20% and makes them a bit lighter weight. The nozzle is smaller, which should make them more comfortable for longer periods, coupled with four ear tip sizes. The headphones are rated IPX4 for sweat and weather resistance.
Image Credits: Amazon
Amazon has moved on from the predecessor’s Bose noise canceling to its own proprietary tech, which it says is twice as effective as the V1’s. There’s also an optional case that supports wireless charging via Qi, à la AirPods. The white case, in particular, looks…rather familiar.
That case runs an extra $20 over the $120 asking price for the USB-C case. Though Amazon’s running a limited-time deal to get the standard for $100 and the wireless charging version for $120. They’re also throwing in six months of Amazon Music Unlimited and Audible Plus. The new buds are also available in white. They’re up for preorder today and start shipping in May.
Image Credits: Amazon
Future software updates will bring a new VIP Filter to the headphones. Introduced on the Echo Frames, the feature lets users filter notifications from select senders. In addition to Alexa, the buds can also be set to access Siri or Google Assistant.
A new study on the effectiveness of the Apple Watch and iPhone as tools for measuring functional capacity in patients with cardiovascular disease (CVD) has been published by researchers at Stanford University.
The study, which involved 110 participants, found that the health-monitoring capabilities in these products could supplement or replace in-clinic tests for “frailty” in patients with CVD.
Frailty in this case is measured in terms of the distance a patient can travel in a six-minute walk. This is normally tested with a six-minute walk test (6MWT), and frailty was defined in the study “as walking <300m on an in-clinic 6MWT.”
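That cutoff is simple enough to express in code. A minimal sketch of the study’s definition (the 300 m threshold comes from the study itself; the function name and units are our own):

```python
# Study definition: a patient is "frail" if they walk less than 300 m
# on an in-clinic six-minute walk test (6MWT).
FRAILTY_THRESHOLD_M = 300.0

def is_frail(walk_distance_m: float) -> bool:
    """Classify a patient as frail based on 6MWT distance in meters."""
    return walk_distance_m < FRAILTY_THRESHOLD_M
```

The study’s question was whether the Apple Watch and iPhone could supply that distance estimate passively, without the in-clinic test.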
Well before the rumors, we knew this day was coming. After several generations of smartphones and a handful of headphones, the smartwatch was the next logical step for OnePlus.
At today’s big launch event, the company made its first major wearable official, sporting the decidedly pared-down name, OnePlus Watch. As the title implies, the smartwatch is not a particularly flashy one. It has a minimalist design and, at $159, a price to match.
What’s perhaps most interesting here is the operating system. The Watch will be running the equally straightforwardly named OnePlus Watch OS. Google’s Wear OS was obviously the low-hanging fruit here. And then, after that, Tizen, which Samsung has used for several generations of Galaxy Watches.
Instead, the company opted to build its own operating system on top of RTOS (real-time operating system). CEO Pete Lau addressed the subject not long ago in a OnePlus forum, writing:
We chose to go with a smart wear operating system developed based on RTOS because we believe it provides you a smooth and reliable experience while offering a great battery life, covering some of the biggest concerns we’ve been hearing from people looking to buy a smartwatch.
Image Credits: OnePlus
The battery aspect certainly rings true. That continues to be one of the largest hang-ups for smartwatches, particularly as companies continue to add features. According to the press materials, the 402 mAh battery gives the Watch anywhere from one to two weeks of life on a charge, which is a pretty impressive claim (we have yet to get our hands on a review unit to test this out). That would put it on par with many fitness bands.
There are a slew of sensors on board, measuring different health metrics, including heart rate and blood oxygen, accessible through OnePlus’s Health app. There’s 4GB of storage, half of which can be used for things like music. The Watch is rated IP68 waterproof.
In addition to the standard configuration, the company is also introducing a Cobalt model, which swaps the stainless steel body for a strong cobalt alloy, coupled with scratch-resistant sapphire glass.
The wearable will go on sale in North America on April 14.
It first appeared on March 9 as a tweet on Andrew Bosworth’s timeline, the tiny corner of the Internet that offers a rare glimpse into the mind of a Facebook executive these days. Bosworth, who leads Facebook’s augmented and virtual reality research labs, had just shared a blog post outlining the company’s 10-year vision for the future of human-computer interaction. Then, in a follow-up tweet, he shared a photo of an as yet unseen wearable device. Facebook’s vision for the future of interacting with computers apparently would involve strapping something that looks like an iPod Mini to your wrist.
Facebook already owns our social experience and some of the world’s most popular messaging apps—for better or notably worse. Anytime the company dips into hardware, then, whether that’s a very good VR headset or a video chatting device that follows your every move, it gets noticed. And it not only sparks intrigue, but questions too: Why does Facebook want to own this new computing paradigm?
In addition to AirPlay support for Fitness+, today’s iOS 14.5 developer beta is bringing some key new features to the mobile operating system. At the top of the list is undoubtedly Apple Watch unlock for users wearing face coverings.
The long-awaited feature arrives a year or so into a pandemic that has made face masks a reality in parts of the world that previously had not seen wide scale adoption. The Apple Watch has, of course, long had the ability to unlock Macs, so this integration seems like a pretty sensible addition.
Starting with iOS 14.5, Apple Watch wearers will be able to opt-in to iPhone unlock under the phone’s Face ID & Passcode settings. Once enabled, the Watch will give a haptic buzz to notify the wearer that the handset has been unlocked. The Watch needs to be unlocked, on a wrist and in close proximity to the iPhone in order to work.
It beats having to pull your mask down in public (even if some folks are still feeling nostalgic for Touch ID).
The addition should be included in the consumer version of the software when it launches. Also included are the ability to ask Siri to call emergency contacts and app tracking controls that require permissions from developers. Support for new Xbox and PlayStation game controllers has been added, as well.
You’d be forgiven for being skeptical about the iPhone 12’s stellar performance this past quarter. It’s been a rough couple of years for smartphones — a phenomenon from which not even Apple was immune.
Frankly, after staring down these macro trends over the last couple of years, it seemed like the days of phone-fueled earnings reports were behind the company as its expanding services portfolio started to become its primary financial driver.
For the final quarter of 2020, Apple’s revenue surpassed $100 billion — a first for the company.
I capped off my mobile coverage last year with an article titled, “Not even 5G could rescue smartphone sales in 2020.” Among the figures cited were two year-over-year drops of 20% for the first two quarters, followed by a global decline of 5.7% for Q3. As we noted at the time, a mere 5.7% drop constituted good news in 2020.
The straightforward premise of the piece was that COVID-19 subverted industry expectations that 5G would finally reverse declining smartphone sales, even if only temporarily. That all came with the important caveat that Apple’s numbers would likely have a big impact the following quarter.
Ahead of yesterday’s earnings, Morgan Stanley noted, “In our view, the iPhone 12 has been Apple’s most successful product launch in the last five years.” Such a sentiment may have seemed like hyperbole in the lead-up to the news, but in hindsight, it’s hard to argue — that five-year window stretches all the way back to the launch of the first Apple Watch.
The iPhone X was more of a radical departure for the company, but the 12 is proving to be a massive hit. The recent launch of Apple Silicon Macs juiced sales in that product category, which rose 21% year over year, but ultimately the company’s computer business is a drop in the bucket compared to phone sales.
I suspect it will be a while before I get excited over wireless earbuds. It’s not for a lack of trying on the part of manufacturers. In fact, quite the contrary. The category actually matured quite quickly, compared to various other verticals in the consumer electronics space. The truth is, most major hardware makers have gotten pretty decent at making a pair of wireless buds — many for pretty cheap.
Samsung’s been in that category for a while now. I’ve liked the last several models I’ve tried from the company. The sound quality has been good, they’re generally pretty comfortable — a good experience, all around. In fact, one of the issues I’ve raised the last couple of times is the fact that Samsung didn’t offer its own equivalent to products like the AirPods Pro and Sony WF-1000XM3 (though that latter reference is starting to become a bit dated).
It’s a hole in the lineup now filled by the Galaxy Buds Pro, which slot in the high end, above the Galaxy Buds Live and Galaxy Buds+. The naming conventions could be streamlined a bit, but it’s a small complaint in the grand scheme. At $199, the Pros are $30 more than the Live and $50 more than the Pluses. More importantly, it puts them at $50 less than the AirPods Pro – their clearest analogue.
Image Credits: Brian Heater
And like Apple’s Pro buds, the Galaxy Buds Pro are very specifically designed to operate with Samsung’s devices. You can still pair them with other Android handsets, but you’re going to lose key parts of the software integration. This honestly seems to be the way things are headed, with practically every smartphone company also manufacturing its own headphones. And certainly Samsung’s got enough market share that such a play makes sense.
If you do want to use them on another Android device, you can pair them by downloading the Galaxy Wearables app. You can pair them manually without the app, but you’ll lose a bunch more features in the process. Like past Galaxy Buds models, there’s no physical button on the case for pairing.
After several generations of devices, Samsung’s certainly got the foundation in place. And its purchase of Harman/AKG in 2017 has clearly played a key role in its ability to create some quality audio accessories. All of that comes into play here. Samsung’s made some solid choices on the design front. The charging case is remarkably compact. I was actually a bit surprised when I opened the package. It’s not nearly as long as the AirPods case, though it is a bit thicker. In any case, it’s certainly compact enough to carry around, unlike, say, the Powerbeats Pro.
The battery claims are pretty impressive, given the size. The company rates the buds at five hours each and 18 hours with the case. Turn off active noise canceling and Bixby (I’ll let you guess which of those two I won’t miss) and the numbers bump up to eight and 28 hours, respectively. I will say that I was able to confidently bring the headphones with me on one of my lengthy morning constitutionals without worrying about packing the case. That’s not something I can say about every wireless earbud.
Image Credits: Brian Heater
The headphones sport an 11-millimeter woofer and 6.5-millimeter tweeter. I found the sound to be an overall good mix, whether listening to music or a podcast. If you’re so inclined, you can also fiddle with the equalizer in the Wearables app. It features six presets, rather than sliders, so it’s an imperfect science. But I didn’t really feel the need to mess around in there much.
The active noise canceling is solid, as well (okay, I admit it, Bixby is the one I’d drop in a heartbeat). I wasn’t really aware of how good a job it was doing drowning out street noise until I switched it off — this can be accomplished with a long press on the side touch panel or through the app. By default the former switches between ANC and transparent mode, skipping the off mode in the middle. Like the equalizer, you can adjust the level of ANC here — either high or low.
If you’re a Samsung true believer, Seamless Switch can be enabled, allowing you to, say, switch between a tablet and a phone when a call comes in. Other neat Samsung-specific features include the ability to use the buds as a kind of makeshift lavalier mic while recording video on the Galaxy S21. The SmartThings app can also be used to find misplaced buds. All in all, Samsung is clearly building up its ecosystem here.
Image Credits: Brian Heater
The design of the buds themselves has been streamlined since the extremely bean-like Buds Live. The company says they were designed to minimize contact with the ear, to help relieve pressure. It’s a shame that everyone isn’t able to try every earbud on before buying — how they fit in your own ears is obviously an extremely personal thing.
I found, however, that one of my ears tends to ache when wearing them for a prolonged period — not an issue I’ve had with either the AirPods Pro or Pixel Buds (the Powerbeats Pro are also great in this respect). I found myself fiddling with them semi-regularly and triggering the touch mechanism in the process (touch controls can be turned off in the app).
Most of my issues with the Buds Pro are pretty minor. They’re a worthy update to the line and a great pair of headphones if you’re a Samsung user.
Less than a full day into CES 2021, and it seems that smart glasses are very much shaping up as a trend. I wrote about a pair of AR glasses from Lenovo aimed at business applications yesterday, and a few other companies have popped up in the meantime, with various levels of “smartness” included.
Vuzix’s latest models are still several months away, but they seem to be among the more promising we’ve seen at the show thus far. The company is best known for its business-focused solutions — that, after all, is where all the money is — at least until someone offers a really profound breakthrough in the consumer category.
These probably aren’t that (if I had to guess, I’d look more closely at offerings from bigger consumer electronics companies), but they do seem like a step in the right direction: an offering that bakes augmented reality into a presentable form factor. AR glasses that look like regular eyeglasses seem to be the right hook here. There are clearly differentiating factors, but the next-gen glasses look a lot closer to standard eyewear than what we’ve seen in the past.
That’s due in no small part to a partnership with Jade Bird Display, which will help commercialize the Chinese company’s microLED tech. Jade Bird describes it thusly:
JBD offers active matrix inorganic microLED display chips and panels with wavelength ranging from UV to visible to IR. The pixel pitch ranges from 400 dpi to 10,000 dpi with a variety of resolutions. With high brightness, high EQE, high reliability, these panels are ideal for AR, VR, HUD, projector, weapon sights, 3D printing, microscope, etc.
The module, which projects a monochrome stereoscopic image, is roughly the size of a pencil eraser, according to Vuzix’s description. The company says the glasses will be available in a number of configurations, including Wi-Fi and optional LTE. All will feature stereo speakers and noise-canceling mics.
No word on price, but Vuzix says they should hit the market this summer.
Given all of the…feedback Amazon has received, it’s hard to believe the Halo wasn’t widely available until today. Announced in late August, the product has been offered in “early access” to invited users. That changes today, however, as the product opens to everyone in the U.S.
The band runs $100, a price that includes six months of membership. It was probably inevitable that the company would launch a fitness product, though Amazon’s behind the curve as far as form factors go. Smartwatches have become a dominant force in fitness tracking on the high end. Bands are still a presence on the opposite side of the market, but generally command a fraction of the cost.
What makes the Halo different is its use of voice and the amount of data it collects and processes – neither of which is honestly a surprise, coming from Amazon. The former involves processing the wearer’s tone of voice, which has drawn some…mixed feedback. Here’s how Amazon describes that bit:
Tone of voice analysis can help you communicate more thoughtfully with family, friends, colleagues, your favorite food truck proprietor, and everyone in between.
Body fat scanning is an even bigger question mark. Early reviews have called the technology “invasive,” among other things. It has also drawn scrutiny from lawmakers. Senator Amy Klobuchar penned a letter to Health and Human Services.
“While new wearable fitness devices make it easier for people to monitor their own health, these devices give companies unprecedented access to personal and private data with limited oversight,” Klobuchar wrote. “More must be done to ensure the privacy and security of health-related consumer devices.”
Amazon has actively pushed back on privacy concerns, highlighting, among other things, that the body scans exist only on the device used to capture them. “Privacy is foundational to how we designed and built Amazon Halo,” a spokesperson told The Washington Post. “Body and Tone are both optional features that are not required to use the product.”
Amazon’s got the doubly difficult task of assuring consumer privacy and attempting to set the product apart in a well-saturated market.
Human rights NGO, Amnesty International, has written to the EU’s competition regulator calling for Google’s acquisition of wearable maker Fitbit to be blocked — unless meaningful safeguards can be baked in.
The tech giant announced its intent to splash $2.1BN to acquire Fitbit a year ago but has yet to gain regulatory approval for the deal in the European Union.
In a letter addressed to the bloc’s competition chief, Margrethe Vestager, Amnesty writes: “The Commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”
In a report last year the NGO attacked the business model of Google and Facebook — saying the “surveillance giants” enable human rights harm “at a population scale”.
In its letter to Vestager Amnesty warns that Google is “incentivized to merge and aggregate data across its different platforms”.
“Google’s business model incentivizes the company to continuously seek more data on more people across the online world and into the physical world. The merger with Fitbit is a clear example of this expansionist approach to data extraction, enabling the company to extend its data collection into the health and wearables sector,” it writes. “The sheer scale of the intrusion of Google’s business model into our private lives is an unprecedented interference with our privacy, and in fact has undermined the very essence of privacy.”
Amnesty is urging the Commission to take heed of an earlier call by a coalition of civil society groups also raising concerns about the merger for “minimum remedies” which regulators must guarantee before any approval.
We’ve reached out to the Commission and Google for a response to Amnesty’s letter.
Google’s plan to gobble Fitbit and its health tracking data has been stalled as EU regulators dig into competition concerns. Vestager elected to open an in-depth probe in August, saying she wanted to make sure the deal wouldn’t distort competition by further entrenching Google’s dominance of the online ad market.
The Commission has also voiced concerns about the risk of Google locking other wearable device makers out of its Android mobile ecosystem.
The Commission’s decision to scrutinize the acquisition rather than waving it through with a cursory look has led Google to make a number of concessions in an attempt to get it cleared — including a pledge not to use Fitbit data for ad targeting and to guarantee support for other wearables makers to operate on Android.
“The European Data Protection Board has recognized the risks of the merger, stating that the “combination and accumulation of sensitive personal data” by Google could entail a “high level of risk” to the rights to privacy and data protection,” it adds.
As well as undermining people’s privacy, Google’s use of algorithms fed with personal data to generate profiles of Internet users in order to predict their behavior erodes what Amnesty describes as “the critical principle that all people should enjoy equal access to their human rights”.
“This risk is heightened when profiling is deployed in contexts that touch directly on people’s economic, social and cultural rights, such as the right to health where people may suffer unequal treatment based on predictions about their health, and as such must be taken into account in the context of health and fitness data,” it warns.
“This power of the platforms has not only exacerbated and magnified their rights impacts but has also created a situation in which it is very difficult to hold the companies to account, or for those affected to access an effective remedy,” Amnesty adds, noting that while big tech companies have faced a number of regulatory actions around the world none has so far been able to derail what it calls “the fundamental drivers of the surveillance-based business model”.
Per EU merger law, the Commission college takes the final decision — with a requirement to take “utmost account” of the opinion of the Member States’ advisory committee (though it’s not legally binding).
So it’s ultimately up to Brussels to determine whether Google-Fitbit gets green lit.
In recent years, competition chief Vestager, who is also EVP for the Commission’s digital strategy, has said she favors tighter regulation as a tool for ensuring businesses comply with the EU’s rules, rather than blocking market access or outright bans on certain practices.
She has also voiced opposition to breaking up tech giants, again preferring to advocate for imposing controls on how they can use data as a way to rebalance digital markets.
Simultaneously, EU lawmakers are working on a proposal for an ex ante regulation to address competition concerns in digital markets that would put specific rules and obligations on dominant players like Google — again in areas such as data use and data access.
That plan is due to be presented early next month — so it’s another factor which may be adding to the delay to the Commission’s Google-Fitbit decision.
I wasn’t super impressed when I reviewed the Echo Buds around this time last year, but Amazon’s first shot at Alexa-powered fully wireless earbuds was passable. And while they’ve already been on the market for a while now, the company’s continuing to deliver some key updates, including today’s addition of new fitness features.
Say “Alexa, start my workout” with the buds in, and they’ll begin logging steps, calories, distance, pace and duration of runs. Like many new software additions, this one will take a few days to roll out to everyone, and it requires users to first enable the new tracking feature in the Alexa app.
Once enabled, you can state/ask follow-ups, like:
“Alexa, start my run”
“Alexa, pause my walk”
“Alexa, end my workout”
“Alexa, how far have I run?”
“Alexa, what’s my pace?”
“Alexa, how was my workout?”
Asking, “Alexa, how was my workout?” after the fact will pull up your historical running stats.
As I noted previously, the Echo Buds didn’t really do much to set themselves apart from myriad other earbuds, though there certainly was a lot to be said for the price — then $130. At the moment, they’re discounted much further, now running $80 — which makes them a solidly competitive deal.
As strange as they are, you would be forgiven for forgetting about the existence of Echo Frames. Amazon announced the smart glasses among a deluge of Alexa-focused products at an event last year that also included an even stranger smart ring.
The experimental product was a Day 1 Edition device — meaning it was available to users on an invite-only basis — a kind of hardware beta test with a fairly wide net. “If customers liked them, we’d double down,” the company noted. “If not, we’d move on.” Seems there was enough interest around the Frames to graduate them to wide release.
The second generation of the smart glasses will be available through Amazon starting December 10. They’re not cheap — running $250 (also available in five monthly installments). Basically the whole thing is a way of putting Alexa on users’ faces, with built-in mics and open ear audio on the stems that give feedback without the need for headphones.
The updated models feature 40% longer battery life, auto volume that adjusts to environmental noise and auto shut off to save battery. They’re also available in more colors.
Amazon’s Echo Loop ring, which Frederic called “maybe the oddest product Amazon demoed at its event today,” won’t be moving past the beta. The system paired up with a smartphone and let users access audio by holding it up to their ears. Amazon’s not the first company to explore a ring form factor — Oura and Motiv are probably the two best-known examples — but it seems pretty clear that there’s more juice to be squeezed from the head-mounted form factor.
Production and sales will be ending for the Loop, though the company says it will continue to offer support and software updates for existing customers.
Zoe Jervier Hewitt is a leadership coach and talent partner at multi-stage VC fund EQT Ventures, where she helps portfolio companies structure and accelerate their search for talent by facilitating connections to the right technology and people required to source candidates at each stage of company growth.
While emerging companies are often started by technically minded founders and funded by VCs for their data-driven approaches to product and growth, the irony is that these companies are often using less data and rigor when it comes to hiring talent than more traditional, less data-focused companies. The truth is, the way in which tech companies hire has been relatively untouched by disruption, with most still relying on resumes and conversational interviews for their highest-stakes decisions.
The consequences are detrimental not only to building teams, but to the overall diversity of the startup space.
Data-driven hiring isn’t just about having the right funnel metrics in place to determine efficiency of process, it extends to the information we choose to collect (or not collect) and measure to determine if someone is a fit for a role. There’s a science to building teams, and therefore selecting talent to join teams. So, why is hiring in early-stage companies still not regarded as a data-driven activity?
Some argue that by nature, talent selection involves people and so can’t truly be scientific. People are unique, complex, emotional and unpredictable. Additionally, few people think they’re a bad judge of character and talent; most overconfidently hold the belief that they’ve got a superior instinct and “nose” for it. Hiring talent is one of the few operational activities in business where formal training or decades of experience isn’t expected in order to be better than average.
Move away from gut-based evaluations
The impact of this outdated way of thinking is felt across the board — first and foremost when it comes to team dynamics. To first know if someone is qualified, you need to know what you’re assessing for. Companies that operate with a shallow understanding of what drives success in a role lack the vital information needed to build a strong system of selection. The output is a weak hiring process that is heavy on unstructured interviewing, light on predictive signals and relies on gut-based evaluations.
Chemistry, confidence and charisma are more likely to determine whether a candidate lands a role versus competence to do the job. As a result, almost half of new hires are estimated to fail and be ineffective, and weak teams are built. The lack of reliable data also means most companies suffer from a broken feedback loop between hiring and team performance, which stunts learning and improvement. How do you know if your selection process is efficiently assessing for the skills, traits and behaviors that drive top performance if you’re not connecting the dots?
More dangerously, a hiring process that’s not designed to collect and evaluate based on evidence almost always results in a lack of team diversity, which as we know stunts innovation and therefore limits company success.
Subjective approaches to talent selection and development create a revolving door of unconscious biases and exclusion, with a resounding impact on what now makes up the homogenous tech ecosystem. This is not helped by natural overreliance on networks as means to fill hiring pipelines in early-stage company building.
Lastly, for talent operators and people practitioners, it does no favors for the credibility of their profession. Recruiting and selecting talent will continue to be branded an unsophisticated, lesser back-office function, or as a “dark art” that is about as data-informed as looking into a crystal ball.
Taking an evidence-based approach
In bringing more objectivity to the hiring process, founders and their teams are served best when starting with a clear, evidence-based definition of what success markers look like in a role, and then putting structure around each stage of selection to assess for a specific skill or behavioral trait: What and when will you assess? What criteria will you evaluate the data based on? In other words, the objective is to get as close as possible to unearthing signals that are reliable enough to accurately predict that someone will perform in a role.
Up until recently, science-based talent assessment tools, which help hiring managers make more objective evaluations, have been largely used by bigger, more established firms that suffer from high volumes of job applications — the luxury “Google” problem. However, three recent shifts suggest we’re about to see a trend in their adoption by earlier-stage startups as they scale their teams:
Pressure to build diverse and inclusive teams. 2020 has pushed diversity and inclusion to the top of the agenda for most companies. Assessment tools used as part of team-building can help groups better identify where specific cognitive, personality and skill gaps exist, and therefore focus hiring for those missing ingredients. Candidate assessment also helps reduce unconscious bias that might creep into interviews by showing more objective information about someone’s strengths and weaknesses.
The sharp rise in job applicants. The COVID-19 pandemic has had two significant effects on recruiting. First, companies have been forced to embrace hiring talent in remote roles, which has increased the size of the global talent pool for most jobs inside a tech firm. Second, the increase in available talent has meant that the average number of job applications has risen dramatically. This shift from a candidate-driven market to an employer-driven one means that selecting signal from noise is increasingly becoming a challenge even for early companies with a less-established talent brand.
Better designed, more affordable products on the market. For a long time, talent assessment software has been largely inaccessible to noncorporate clients. Academic user interfaces and off-putting candidate experiences have meant that many scientifically robust tools simply haven’t been able to capture the attention of tech and product-obsessed buyers. Additionally, many tools that require add-on consultancy or specialist training to administer and interpret are simply out of range of early-stage budgets. With new entrants to the assessment market that have automation, product design and compliance at their core, scale-ups will be able to justify spending in this area and perceptions will change as they become essential SaaS products in their team’s operating toolkits.
As these outside factors continue to push hiring toward a more evidence-based approach, businesses must prioritize making these changes to their hiring practices. Unstructured interviews might feel most natural, but they’re perilous for accurate talent selection: the conversation might be nice, but it creates noise that does nothing for making smart, accurate decisions based on what really matters.
Instinctive feelings and “going with your gut” in hiring should be treated with caution and decisions should always be based on role-relevant evidence you pinpoint. Emerging companies looking to set a strong team foundation shouldn’t risk the redundancies and biases created by subjective hiring decisions.
This year has been everything but business as usual for the venture and tech community. And we still have a presidential election ahead of us.
So, why not listen to the aptly named experts over at Unusual Ventures? Partners Sarah Leary (co-founder of Nextdoor) and John Vrionis, formerly of Lightspeed Venture Partners, will join us on Tuesday, October 20 on the Extra Crunch Live virtual stage.
Thanks to all of you who have joined us for our series of live discussions that has included tech leaders like Sydney Sykes, Alexa von Tobel, Mark Cuban and many others (all recordings are still accessible for Extra Crunch subscribers to watch and learn from).
If you’re new, welcome! You’ll have a chance to participate in the live discussion if you have an Extra Crunch subscription.
Unusual Ventures’ investments span the consumer and enterprise space, including companies like Robinhood, AppDynamics, Mulesoft and Winnie.
For this chat, I plan to spend some time talking to Leary and Vrionis about how early-stage venture capital has changed with the rise of rolling funds, community funds and syndicates. Unusual Ventures claims “there’s an enormous opportunity to raise the bar on what seed-stage investors provide for early-stage founders,” so we’ll get into that opportunity as well.
And if we have time, we’ll discuss remote work, building in public and the U.S. presidential election.
So, what are you waiting for? Add the deets to your calendar (below the jump!) and join me next Tuesday.
Fertility tracking has seen an explosion of startup activity in recent years. Femtech startup Lady Technologies is adding to this rich mix with the full U.S. launch of a dual-purpose device, called kegg, that’s designed to measure hormonal changes in a woman’s cervical fluid to help her determine the chance of conception on a given day.
The egg-shaped gizmo, which features a gold-plated steel cap and band ringing its tip, as well as a silicone tail to house its Bluetooth radio (so it can chat to the companion app), doubles as a connected pelvic floor trainer (the ‘k’ in kegg is for ‘kegels’) — taking a leaf out of UK femtech pioneer Elvie’s playbook. Though the two-in-one function is a new twist.
Kegg relies on measuring electrical impedance to sense electrolyte levels in a woman’s cervical fluid in order to detect the hormonal switch from estrogen to progesterone dominance that accompanies ovulation — via a daily test that’s touted as taking just two minutes. (If you’re also using it for the optional kegel exercises, that would take a bit longer.)
“A minute electrical impulse at a specific frequency is emitted from the gold plated electrodes on the kegg and received by the other (this process is then reversed). By sensing the changing trends in the impedance, we’re able to detect the hormonal change and make a prediction to the user,” explains CEO and founder Kristina Cahojova. “Since every woman’s fluids are slightly different, kegg needs to record at least one fertile window to provide personalized predictions.”
“We have numerous patents on the underlying design of kegg and key aspects of how it operates,” she adds.
Kegg was unveiled on the TechCrunch Disrupt SF stage, back in 2018, as part of our Startup Battlefield competition (though it didn’t go on to win). Fast forward two years and it’s now officially launching out of beta to offer the FDA-registered gizmo to the U.S. market — priced at $275.
It’s announcing a $1.5M seed round too, with investors including Crescent Ridge Partners, SOSV, Texas Halo Fund, Fermata Fund and MegaForce, as well as some unnamed angel investors.
Commenting in a statement, Samina Hydery, kegg advisor and women’s health investor, said: “Investor interest in femtech and fertility has accelerated over the last few years. While I’ve seen an influx of ovulation prediction kits, at-home blood tests, menstrual tracking apps, and temperature monitors in the consumer market, kegg’s value proposition became clear once I spoke with women about their experiences trying to conceive and medical researchers in the field. It’s hard not to get excited by the various growth vectors that can expand kegg’s market in the future — from being used as a tool for natural family planning to helping monitor postpartum/perimenopausal health.”
“We pride ourselves in having almost half of our investors women,” notes Cahojova — whose inspiration for building kegg was personal; having suffered from irregular menstrual cycles herself.
“I didn’t want to be treated with hormones. When I talked to fertility instructors or a specialized fertility doctor, all they wanted to know about was my patterns of cervical fluid. Why? Because the fertile window is defined only by the presence of fertile cervical fluid, having a positive LH [luteinizing hormone] test is nice but it won’t help you get information to fix your cycles. That’s why so many fertility doctors are interested in cervical fluid and that is why so many women are told to track it with their fingers,” she explains.
“How on earth are you supposed to be able to track objectively something so important, yet, private without the help of technology? I was frustrated and angry that every company that I talked to didn’t have a solution and didn’t want to make this so needed product because it ‘would have to go into the vagina’. So I set out to make a product that would help me and women like me.”
Thus far kegg has struck a chord with U.S. women of reproductive age who are trying for a baby, according to Cahojova — who says her startup has built a 2,000-strong community of fertility-tracking women over kegg’s beta period.
“Our typical user is a woman in her reproductive age,” she says. “Our users are in long-term relationships or married and they likely have been actively trying to conceive for more than three months. Fifty percent are trying to conceive their first child, while the remaining are already mothers.
“Our customers have experience with BBT (body basal temperature charting) or LH tests (ovulation tests) and they are overall interested in holistic fertility and wellness, not in medication. They also prefer the convenience of kegg over other methods that either need to be worn throughout the night or used more frequently.”
Image credit: Lady Technologies
“Each woman is unique and so are her cycles,” she adds. “Unlike ovulation trackers, kegg helps women understand their fertile window and cyclical fertility and follow their own patterns. Usually women take up to six months to learn how to read cervical fluid patterns. Our customers report that kegg gives them confidence and they feel empowered. Many keggsters conceived with kegg after years of trying because kegg gave them trends beyond ovulation. Nothing makes me more happy than an email from a customer whose life changed thanks to my work and kegg.” (On that it says “several” women have reported successful pregnancies using kegg since the beta launch in 2018.)
The startup also has its eye on international expansion, including to Asia (with the support of its Japanese-market focused investor Fermata) — with a plan to launch kegg in Singapore in late October, and in Japan and Canada next year.
While the kegg has a core focus on fertility tracking (and a secondary feature as a connected pelvic floor trainer), Cahojova is excited about wider possibilities for women’s health that she hopes will be opened up as they’re able to take in and crunch more data.
Kegg users’ impedance readings are uploaded to the startup’s cloud for analysis, so its algorithms can make a personalized fertility prediction. But its website also notes it uses ‘anonymized/pseudonymized’ data for research into women’s health. (Cahojova specifies users’ personal data is never shared outside the company. “Any data we offer to researchers we work with is completely anonymized,” is her privacy promise.)
Asked what areas of research she’s hoping kegg will help advance, she tells us: “Researchers have noted that health issues can affect typical electrolyte cycles. In many of our internal studies we’ve seen examples where readings were ‘out of norm’ for the user. In case after case we found evidence of underlying health issues (for example infections) were the cause. In the future our goal is to understand how kegg can help monitor overall cervical health.”
Cahojova also says the device is being used by fertility instructors and doctors to help with monitoring their patients. “The beauty of kegg is that by having a user friendly and modern device that women like to use we can get data on changes of vaginal fluids on a large scale. With kegg data we also hope to help doctors finally answer their billion dollar question — how can they improve the quality of cervical fluid.”
“We are supportive of science and are open for research collaborations,” she adds. “We provided kegg for independent peer-reviewed clinical study under Dr. Gabriela López Armas, MD, PhD, for her research on kegg and other fertility trackers. All the participants finished the protocols in summer of 2020 and the study is to be published independently in the near future.”
While the business model for kegg is currently fixed price hardware sales, Cahojova says the startup is looking at offering subscription packages in future. “In the future, we want to offer more to our users, e.g.: connecting them to specialists to review their cycles or view of additional layers of information. Once we have enhanced services ready, we’ll look at switching to a subscription model,” she adds.
It has been over four years since Project Jacquard, Google’s smart fabric technology, made its debut at the I/O developer conference. Launched out of what was then Google’s ATAP unit, Jacquard is currently best known for being available on Levi’s jean jackets, but Saint Laurent also launched its $1,000 Cit-e Backpack with built-in Jacquard technology. Today, Google is adding a fourth product to the Jacquard lineup with the launch of the Samsonite Konnect-i backpack, which, at $200 for the Slim version and $220 for the Standard edition, is a bit more friendly on the wallet than the Saint Laurent backpack.
Jacquard, in case you need a refresher, is Google’s technology for adding touch sensitivity to fabrics. That means you can touch the sleeve of your jacket or, in this case, the strap of your backpack, to trigger a handful of functions on your phone. The whole system is powered by a small tag (which you charge via a mini-USB port). That tag can also relay notifications through its built-in LED and a small vibration motor.
Image Credits: Google/Samsonite
The number of gestures — and what they can trigger — is relatively limited, especially since you can only really assign three gestures: brush up, brush down and double-tap. You can assign standard media controls to these (think brush up for “next song”), drop a pin to save a place, hear the current time, ping your phone, hear directions to your next waypoint or your arrival time, or trigger the Google Assistant. Gestures can also trigger your phone’s shutter to take a selfie, and there’s a “light” function that lights up the Jacquard tag’s LED. Why this last function exists isn’t quite clear to me, because that LED is weak. Google says it can help you get noticed in a crowd or stay visible at night, but unless you’re trying to be found in the darkest of caves, nobody will be able to see it.
As you can see, the main idea here is to let you access some of your phone’s functions while walking through the city with your headphones on.
Image Credits: TechCrunch
It’s been about a year since Google and Levi’s launched the Jacquard-enabled trucker jacket. That marked the launch of Jacquard 2.0, with a couple of additional features and a new dongle that now works across products. Back then, our review and those from our peers were pretty tepid. I’m not sure it’ll be all that different this time around.
I’ve been trying out the backpack for the last few days. Like before, Jacquard does what it promises to do. The gesture recognition worked as expected, alerts from my phone made the tag vibrate, and the backpack itself is comfortable, if not the flashiest entry into the market. It’s a Samsonite, though, and the target market here isn’t necessarily college students but business travelers (though that market is pretty dead for the time being).
Image Credits: Samsonite
The backpack itself comes in two versions: Slim and Standard. The only real difference here is that the Slim version has a vertical zipper and the Standard version a horizontal one. It features plenty of pockets, a padded laptop compartment and everything else you’d want from a modern backpack. I could easily see myself going on a business trip with it.
Like before, the question remains whether Jacquard is a gimmick or actually a useful technology. Thanks to the pandemic, most of us aren’t heading out as much as we used to — and we’re definitely not going on a lot of trips. Maybe it’s not the right product for this time, but I can see myself using it more than the jacket once all of this is over. Chances are I’ll use a backpack wherever I go, after all, whereas I don’t wear a jacket half the year. The promise of Jacquard is to allow you to focus on the world around you, without the distractions of your phone. For that to work, it needs to be ubiquitous or you’ll just forget you ever had it. That works better on a backpack than a jacket — at least for me.
Whether that’s worth $200 to you is a decision you must make for yourself.
Google also said it’s committing to support third-party wearable manufacturers as part of the Android ecosystem (via Android APIs for wearable devices), and to maintain third parties’ existing access to Fitbit users’ data via APIs with user consent.
“This deal is about devices, not data. The wearables space is highly crowded, and we believe the combination of Google and Fitbit’s hardware efforts will increase competition in the sector, benefiting consumers and making the next generation of devices better and more affordable,” a Google spokesperson said in a statement.
“We have been working with the European Commission on an updated approach to safeguard consumers’ expectations that Fitbit device data won’t be used for advertising. We’re also formalizing our longstanding commitment to supporting other wearable manufacturers on Android and to continue to allow Fitbit users to connect to third party services via APIs if they want to.”