Apple’s latest accessibility features are for those with limb and vocal differences

Apple announced a batch of accessibility features at WWDC 2021 that cover a wide variety of needs, among them a few for people who can’t touch or speak to their devices in the ordinary way. With Assistive Touch, Sound Control, and other improvements, these folks have new options for interacting with an iPhone or Apple Watch.

We covered Assistive Touch when it was first announced, but recently got a few more details. This feature lets anyone with an Apple Watch operate it with one hand by means of a variety of gestures. It came about when Apple heard from the community of people with limb differences — whether they’re missing an arm, or unable to use it reliably, or anything else — that as much as they liked the Apple Watch, they were tired of answering calls with their noses.

The research team cooked up a way to reliably detect the gestures of pinching one finger to the thumb, or clenching the hand into a fist, based on how doing them causes the watch to move — it’s not detecting nervous system signals or anything. These gestures, as well as double versions of them, can be set to a variety of quick actions. Among them is opening the “motion cursor,” a little dot that mimics the movements of the user’s wrist.

Considering how many people don’t have the use of a hand, this could be a really helpful way to get basic messaging, calling, and health-tracking tasks done without needing to resort to voice control.

Speaking of voice, that’s also something not everyone has at their disposal. Many of those who can’t speak fluently, however, can make a bunch of basic sounds, which can carry meaning for those who have learned them — not so much for Siri. But a new accessibility option called “Sound Control” lets these sounds be used as voice commands. You set it up through the Switch Control settings rather than the audio or voice options, adding an audio switch there.

Images of the process of adding an audio switch to the iPhone.

Image Credits: Apple

The setup menu lets the user choose from a variety of possible sounds: click, cluck, e, eh, k, la, muh, oo, pop, sh, and more. Picking one brings up a quick training process to let the user make sure the system understands the sound correctly, and then it can be set to any of a wide selection of actions, from launching apps to asking commonly spoken questions or invoking other tools.

For those who prefer to interact with their Apple devices through a switch system, the company has a big surprise: Game controllers, once only able to be used for gaming, now work for general purposes as well. Specifically noted is the amazing Xbox Adaptive Controller, a hub and group of buttons, switches, and other accessories that improves the accessibility of console games. This powerful tool is used by many, and no doubt they will appreciate not having to switch control methods entirely when they’re done with Fortnite and want to listen to a podcast.


One more interesting capability in iOS that sits at the edge of accessibility is Walking Steadiness. This feature, available to anyone with an iPhone, tracks (as you might guess) the steadiness of the user’s walk. This metric, tracked throughout a day or week, can potentially give real insight into how and when a person’s locomotion is better and worse. It’s based on a bunch of data collected in the Apple Heart and Movement study, including actual falls and the unsteady movement that led to them.

If the user is someone who recently was fitted for a prosthesis, or had foot surgery, or suffers from vertigo, knowing when and why they are at risk of falling can be very important. They may not realize it, but perhaps their movements are less steady towards the end of the day, or after climbing a flight of steps, or after waiting in line for a long time. It could also show steady improvements as they get used to an artificial limb or chronic pain declines.

Exactly how this data may be used by an actual physical therapist or doctor is an open question, but importantly it’s something that can easily be tracked and understood by the users themselves.

Images of Apple Memoji with a cochlear implant, an oxygen tube, and a soft helmet.

Image Credits: Apple

Among Apple’s other assistive features are new languages for voice control, improved headphone acoustic accommodation, support for bidirectional hearing aids, and of course the addition of cochlear implants and oxygen tubes for memoji. As an Apple representative put it, they don’t want to embrace differences just in features, but on the personalization and fun side as well.


Microsoft plans to launch dedicated Xbox cloud gaming hardware

Microsoft will soon launch a dedicated device for game streaming, the company announced today. It’s also working with a number of TV manufacturers to build the Xbox experience right into their internet-connected screens, and it plans to bring cloud gaming to the Xbox app on PC later this year, too, with a focus on play-before-you-buy scenarios.

It’s unclear what these new game streaming devices will look like. Microsoft didn’t provide any further details. But chances are, we’re talking about either a Chromecast-like streaming stick or a small Apple TV-like box. So far, we also don’t know which TV manufacturers it will partner with.

It’s no secret that Microsoft is bullish about cloud gaming. With Xbox Game Pass Ultimate, it’s already making it possible for its subscribers to play more than 100 console games on Android, streamed from the Azure cloud, for example. In a few weeks, it’ll open cloud gaming in the browser on Edge, Chrome and Safari, to all Xbox Game Pass Ultimate subscribers (it’s currently in limited beta). And it is bringing Game Pass Ultimate to Australia, Brazil, Mexico and Japan later this year, too.

In many ways, Microsoft is unbundling gaming from the hardware — similar to what Google is trying with Stadia (an effort that, so far, has fallen flat for Google) and Amazon with Luna. The major advantage Microsoft has here is a large library of popular games, something that’s mostly missing on competing services, with the exception of Nvidia’s GeForce Now platform — though that one has a different business model since its focus is not on a subscription but on allowing you to play the games you buy in third-party stores like Steam or the Epic store.

What Microsoft clearly wants to do is expand the overall Xbox ecosystem, even if that means it sells fewer dedicated high-powered consoles. The company likens this to the music industry’s transition to cloud-powered services backed by all-you-can-eat subscription models.

“We believe that games, that interactive entertainment, aren’t really about hardware and software. It’s not about pixels. It’s about people. Games bring people together,” said Microsoft’s Xbox head Phil Spencer. “Games build bridges and forge bonds, generating mutual empathy among people all over the world. Joy and community — that’s why we’re here.”

It’s worth noting that Microsoft says it’s not doing away with dedicated hardware, though, and is already working on the next generation of its console hardware — but don’t expect a new Xbox console anytime soon.


Voice AIs are raising competition concerns, EU finds

The European Union has been digging into the competition implications of AI-powered voice assistants and other Internet of Things (IoT) connected technologies for almost a year. Today it’s put out a first report discussing potential concerns that EU lawmakers say will help inform their wider digital policymaking in the coming years.

A major piece of EU legislation introduced at the end of last year is already set to apply ex ante regulations to so-called ‘gatekeeper’ platforms operating in the region, with a list of business practice ‘dos and don’ts’ for powerful, intermediating platforms being baked into the forthcoming pan-EU Digital Markets Act.

But of course applications of technology don’t stand still. The bloc’s competition chief, Margrethe Vestager, has also had her eye on voice assistant AI technologies for a while — raising concerns about the challenges being posed for user choice as far back as 2019, when she said her department was “trying to figure out how access to data will change the marketplace”.

The Commission took a concrete step last July when it announced a sectoral inquiry to examine IoT competition concerns in detail.

It’s now published a preliminary report, based on polling more than 200 companies operating in consumer IoT product and services markets (in Europe, Asia and the US) — and is soliciting further feedback on the findings (until September 1) ahead of a final report due in the first half of next year.

Among the main areas of potential competition concern it found are:

  • Exclusivity and tying practices in relation to voice assistants, and practices that limit the possibility of using different voice assistants on the same smart device
  • The intermediating role of voice assistants and mobile OSes between users and the wider device and services market — the concern being that this allows the owners of the platform voice AI to control user relationships, potentially impacting the discoverability and visibility of rival IoT services

Another concern is around (unequal) access to data. Survey participants suggested that platform and voice assistant operators gain extensive access to user data — including capturing information on user interactions with third-party smart devices and consumer IoT services as a result of the intermediating voice AI.

“The respondents to the sector inquiry consider that this access to and accumulation of large amounts of data would not only give voice assistant providers advantages in relation to the improvement and market position of their general-purpose voice assistants, but also allow them to leverage more easily into adjacent markets,” the Commission writes in a press release.

A similar concern underlies an ongoing EU antitrust investigation into Amazon’s use of third party merchants’ data which it obtains via its ecommerce marketplace (and which the Commission believes could be illegally distorting competition in online retail markets).

Lack of interoperability in the consumer IoT sector is another concern flagged in the report. “In particular, a few providers of voice assistants and operating systems are said to unilaterally control interoperability and integration processes and to be capable of limiting functionalities of third-party smart devices and consumer IoT services, compared to their own,” it says.

There’s nothing very surprising in the above list. But it’s noteworthy that the Commission is trying to get a handle on competitive risks — and start mulling potential remedies — at a point when the adoption of voice assistant AIs is still at a relatively early stage in the region.

In its press release, the Commission notes that usage of voice assistant tech is growing worldwide and expected to double between 2020 and 2024 (from 4.2BN voice AIs to 8.4BN) — although only 11% of EU citizens surveyed last year had already used a voice assistant, per cited Eurostat data.

EU lawmakers have certainly learned lessons from the recent failure of competition policy to keep up with digital developments and rein in a first wave of tech giants. And those giants of course continue to dominate the market for voice AIs now (Amazon with Alexa, Google with its eponymous Assistant and Apple’s Siri). So the risks for competition are crystal clear — and the Commission will be keen to avoid repeating the mistakes of the past.

Still, quite how policymakers could look to tackle competitive lock-in around voice AIs — whose USP tends to be their lazy-web, push-button and branded convenience for users — remains to be seen.

One option, enforcing interoperability, could increase complexity in a way that’s negative for usability — and may raise other concerns, such as around the privacy of user data.

Giving users themselves more say and control over how the consumer tech they own works can certainly be a good idea, though, at least provided the platform’s presentation of choices isn’t itself manipulative and exploitative.

There are certainly plenty of pitfalls where IoT and competition is concerned — but also potential opportunities for startups and smaller players if proactive regulatory action can ensure that dominant platforms don’t get to set all the defaults once again.

Commenting in a statement, Vestager said: “When we launched this sector inquiry, we were concerned that there might be a risk of gatekeepers emerging in this sector. We were worried that they could use their power to harm competition, to the detriment of developing businesses and consumers. From the first results published today, it appears that many in the sector share our concerns. And fair competition is needed to make the most of the great potential of the Internet of Things for consumers in their daily lives. This analysis will feed into our future enforcement and regulatory action, so we look forward to receiving further feedback from all interested stakeholders in the coming months.”

The full sectoral report can be found here.

 


Tiny handheld Playdate ships next month for $179, with 24 charming monochrome games to start

Playdate, app and game designer Panic’s first shot at hardware, finally has a firm price and ship date, as well as a bunch of surprise features cooked up since its announcement in 2019. The tiny handheld gaming console will cost $179, ship next month, and come with a 24-game “season” doled out over 12 weeks. But now it also has a cute speaker dock and low-code game creation platform.

We first heard about Playdate more than two years ago, were charmed by its clean look, funky crank control, and black and white display, and have been waiting for news ever since. Panic’s impeccable design credentials combined with Teenage Engineering’s creative hardware chops? It’s bound to be a joy to use, but there wasn’t much more than that to go on.

Now the company has revealed all the important details we were hoping for, and many more to boot.

The Playdate handheld with a person playing a game on it.

Image Credits: Panic

Originally we were expecting 12 games to be delivered over 12 weeks, but in the intervening period it seems they’ve collected more titles than planned, and that initial “season” of games has expanded to 24. No one knows exactly what to expect from these games except that they’re exclusive to the Playdate and many use the crank mechanic in what appear to be fun and interesting ways: turning a turntable, opening a little door, doing tricks as a surfer, and so on.

The team hasn’t decided how future games will be distributed, though they seem to have some ideas. Another season? One-off releases? Certainly a new game from one-man indie hit parade Lucas Pope would sell like hotcakes.

Screenshots of the Pulp game creation tool.

Image Credits: Panic

But the debut of a new lo-fi game development platform called Pulp suggests a future where self-publishing may also be an option. This lovely little web-based tool lets anyone put together a game using presets for things like controls and actions, and may prove to be a sort of tiny Twine in time.

A dock accessory was announced as well, something to keep your Playdate front and center on your desk. The speaker-equipped dock, also a lemony yellow, acts as a magnetic charging cradle for the console, activating a sort of stationary mode with a clock and music player (Poolsuite.fm, apparently, with original relaxing tunes). It even has two holes in which to put your pens (and Panic made a special yellow pen just for the purpose as well).

Playdate attached to its little cubical dock.

Image Credits: Panic

The $179 price may cause some to balk — after all, it’s considerably more than a Nintendo 3DS and with the dock probably approaches the price of a Switch. But this isn’t meant to be a competitor with mainstream gaming — instead, it’s a sort of anti-establishment system that embraces weirdness and provides something equally unfamiliar and undeniably fun.

The team says that there will be a week’s warning before orders can be placed, and that they don’t plan to shut orders down if inventory runs out, but simply allow people to preorder and cancel at will until they receive their unit. We hope to get one ourselves to test and review, but since part of the charm of the whole thing is the timed release and social aspect of discovery and sharing, it’s more than likely we’ll be experiencing it along with everyone else.


Apple releases torrent of updates, and Wall Street yawns

Today’s WWDC keynote from Apple covered a huge range of updates. From a new macOS to a refreshed watchOS to a new iOS, better privacy controls, FaceTime updates, and even iCloud+, there was something for everyone in the laundry list of new code.

Apple’s keynote was essentially what happens when the big tech companies get huge; they have so many projects that they can’t just detail a few items. They have to run down their entire parade of platforms, dropping packets of news concerning each.

But despite the obvious indication that Apple has been hard at work on the critical software side of its business, especially its services side (more here), Wall Street gave a firm, emphatic shrug.

This is standard but always slightly confusing.

Investors care about future cash flows, at least in theory. Those future cash flows come from anticipated revenues, which are born from product updates, driving growth in sales of services, software, and hardware. Which, apart from the hardware portion of the equation, is precisely what Apple detailed today.

And lo, Wall Street looked upon the drivers of its future earnings estimates, and did sayeth “lol, who really cares.”

Shares of Apple were down a fraction for most of the day, picking up as time passed not thanks to the company’s news dump, but because the Nasdaq largely rose as trading raced to a close.

Here’s the Apple chart, via YCharts:

And here’s the Nasdaq:

Presuming that you are not a ChartMaster™, those might not mean much to you. Don’t worry. The charts say very little all-around, so you are missing little. Apple was down a bit, and the Nasdaq up a bit. Then the Nasdaq went up more, and Apple’s stock generally followed. Which is good, to be clear, but somewhat immaterial.

So after yet another major Apple event that will help determine the health and popularity of every Apple platform — key drivers of lucrative hardware sales! — the markets are betting that all their prior work estimating the True and Correct value of Apple was dead-on and that there is no need for any sort of up-or-down change.

That, or Apple is so big now that investors are simply betting it will grow in keeping with GDP. Which would be a funny diss. Regardless, more from the Apple event here in case you are behind.

 


Apple announces iCloud+ with privacy-focused features

Apple is rolling out some updates to iCloud under the name iCloud+. The company is announcing those features at its developer conference. Existing paid iCloud users are going to get those iCloud+ features for the same monthly subscription price.

In Safari, Apple is going to launch a new privacy feature called Private Relay. It sounds a bit like the new DNS feature that Apple has been developing with Cloudflare. Originally named Oblivious DNS-over-HTTPS, Private Relay could be a better name for something quite simple — a combination of DNS-over-HTTPS with proxy servers.

When Private Relay is turned on, nobody can track your browsing history — not your internet service provider, nor anyone standing in the middle of your request between your device and the server you’re requesting information from. We’ll have to wait a bit to learn more about how it works exactly.
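
Apple hasn’t detailed how Private Relay is built, but the DNS-over-HTTPS half of that combination is a public standard you can try today. Purely as an illustration (and not Apple’s implementation), here’s a minimal Python sketch that resolves a hostname through Cloudflare’s documented DoH JSON endpoint; the hostname is just an example.

```python
import requests


def resolve_over_doh(hostname: str) -> list[str]:
    """Resolve a hostname via Cloudflare's public DNS-over-HTTPS JSON API.

    A plain DoH lookup for illustration only; Apple's Private Relay layers
    proxying on top of encrypted DNS and works differently in detail.
    """
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    answers = resp.json().get("Answer", [])
    return [a["data"] for a in answers if a.get("type") == 1]  # type 1 = A record


if __name__ == "__main__":
    # An on-path observer sees only a TLS connection to the resolver,
    # not the name being looked up.
    print(resolve_over_doh("example.com"))
```

The point of layering proxy servers on top, presumably, is that no single party (not even the DNS resolver) gets to see both who you are and which sites you’re asking about.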

The second iCloud+ feature is ‘Hide My Email’. It lets you generate random email addresses when you sign up for a newsletter or create an account on a website. If you’ve used ‘Sign in with Apple’, you know that Apple offers you the option to use fake iCloud email addresses. This works similarly, but for any app.

Finally, Apple is overhauling HomeKit Secure Video. With the name iCloud+, Apple is separating free iCloud users from paid iCloud users. Basically, you used to pay for more storage. Now, you pay for more storage and more features. Subscriptions start at $0.99 per month for 50GB (and iCloud+ features).

More generally, Apple is adding two much-needed features to iCloud accounts. Now, you can add a friend for account recovery. This way, you can ask that friend to help you regain access to your data if you’re ever locked out. But that doesn’t mean your friend can access your iCloud data — it’s just a way to recover your account.

The other much-needed update is a legacy feature. You’ll soon be able to add one or several legacy contacts, so your data can be passed along when you die. That matters, as many photo libraries currently become inaccessible when the person who owns them passes away.



Apple unveils iOS 15 with new features for FaceTime and better notifications

During the virtual WWDC keynote, Apple shared the first details about iOS 15, the next major version of iOS that is going to be released later this year. There are four pillars to this year’s release: staying connected, focusing without distraction, using intelligence and exploring the world.

“For many of us, our iPhones have become indispensable,” SVP of Software Engineering Craig Federighi said. “Our new release is iOS 15. It’s packed with features that make the iOS experience adapt to and complement the way you use iPhone, whether it’s staying connected with those who matter to you most. Finding the space to focus without distraction, using intelligence to discover the information you need, or exploring the world around you.”

FaceTime gets a bunch of new features

Apple is adding spatial audio to FaceTime. Now the voices are spread out depending on the position of your friends on the screen. For instance, if someone appears on the left, it’ll sound like they’re on the left in your ears. In other FaceTime news, iOS now detects background noise and tries to suppress it so that you can hear your friends and family members more easily. That’s an optional feature, which means that you can disable it in case you’re showing a concert during a FaceTime call for instance.

Another FaceTime feature is ‘Portrait mode’. Behind this term, Apple means that FaceTime can automatically blur the background, like in ‘Portrait mode’ photos. In case you want to use FaceTime for work conferences, you can now generate FaceTime links and add them to a calendar invite. FaceTime will also work in a web browser, which means that people without an Apple device can join a FaceTime call.

FaceTime is a big focus as Apple is also introducing SharePlay. With this feature, you can listen together to a music album. Press play in Apple Music and the music will start for everyone on the call. The queue is shared with everyone else, which means anyone can add songs, skip to the next track, etc.

SharePlay also lets you watch movies and TV shows together. Someone on the call starts a video and it starts on your friend’s phone or tablet. It is also compatible with AirPlay, picture-in-picture and everything you’d expect from videos on iOS.

This isn’t just compatible with Apple TV videos. Apple said there will be an API to make videos compatible with SharePlay. Initial partners include Disney+, Hulu, HBO Max, Twitch, TikTok and more.

Now let’s switch to Messages. The app is getting better integration with other Apple apps like News, Photos and Music, with items shared via Messages showing up in those apps. In other words, Messages (and iMessage) is acting as the social layer on top of Apple’s apps.

A new notification summary

Apple is going to use on-device intelligence to create summaries of your notifications. Instead of being sorted by app and by date, notifications are sorted by priority. For instance, notifications from friends will be closer to the top.

When you silence notifications, your iMessage contacts will see that you have activated ‘Do not disturb’. It works a bit like ‘Do not disturb’ in Slack. But there are new settings. Apple calls this Focus mode. You can choose apps and people you want notifications from and change your focus depending on what you’re doing.

For instance, if you’re at work, you can silence personal apps and personal calls and messages. If it’s the weekend, you can silence your work emails. Your settings sync across your iCloud account if you have multiple Apple devices. And it’ll even affect your home screen by showing and hiding apps and widgets.

New smart features

Apple is going to scan your photos for text. Called Live Text, this feature lets you highlight, copy and paste text in photos. It could be a nice accessibility feature as well. iOS is going to leverage that info for Spotlight, too: you can search for text that appears in your photos directly from Spotlight. These features are handled on the device itself.

With iOS 15, memories are getting an upgrade. “These new memories are built on the fly. They are interactive and alive,” said Chelsea Burnette, senior manager of Photos engineering. Memories are those interactive movies that you can watch in the Photos app. Now, you can tap with your finger to pause the movie. While music still plays in the background, your photo montage resumes when you lift your finger.

You can now search for a specific song to pair with a memory. It’s going to be interesting to see in detail what’s new for the Photos app.

Wallet, Weather and Maps

This is a developing story…



Huawei officially launches Android alternative HarmonyOS for smartphones

Think you’re living in a hyper-connected world? Huawei’s proprietary HarmonyOS wants to eliminate delays and gaps in user experience when you move from one device onto another by adding interoperability to all devices, regardless of the system that powers them.

Two years after Huawei was added to the U.S. entity list that banned the Chinese telecom giant from accessing U.S. technologies, including core chipsets and Android developer services from Google, Huawei’s alternative smartphone operating system was unveiled.

On Wednesday, Huawei officially launched its proprietary operating system HarmonyOS for mobile phones. The firm began building the operating system in 2016 and made it open-source for tablets, electric vehicles and smartwatches last September. Its flagship devices such as Mate 40 could upgrade to HarmonyOS starting Wednesday, with the operating system gradually rolling out on lower-end models in the coming quarters.

HarmonyOS is not meant to replace Android or iOS, Huawei said. Rather, its application is more far-reaching, powering not just phones and tablets but an increasing number of smart devices. To that end, Huawei has been trying to attract hardware and home appliance manufacturers to join its ecosystem.

To date, more than 500,000 developers are building applications based on HarmonyOS. It’s unclear whether Google, Facebook and other mainstream apps in the West are working on HarmonyOS versions.

Some Chinese tech firms have answered Huawei’s call. Smartphone maker Meizu hinted on its Weibo account that its smart devices might adopt HarmonyOS. Oppo, Vivo and Xiaomi, who are much larger players than Meizu, are probably more reluctant to embrace a rival’s operating system.

Huawei’s goal is to collapse all HarmonyOS-powered devices into one single control panel, which can, say, remotely pair the Bluetooth connections of headphones and a TV. A game that is played on a phone can be continued seamlessly on a tablet. A smart soymilk blender can customize a drink based on the health data gleaned from a user’s smartwatch.

Devices that aren’t already on HarmonyOS can also communicate with Huawei devices with a simple plug-in. Photos from a Windows-powered laptop can be saved directly onto a Huawei phone if the computer has the HarmonyOS plug-in installed. That raises the question of whether Android, or even iOS, could, one day, talk to HarmonyOS through a common language.

The HarmonyOS launch arrived days before Apple’s annual developer event scheduled for next week. A recent job posting from Apple mentioned a seemingly new concept, homeOS, which may have to do with Apple’s smart home strategy, as noted by Macrumors.

Huawei denied speculations that HarmonyOS is a derivative of Android and said no single line of code is identical to that of Android. A spokesperson for Huawei declined to say whether the operating system is based on Linux, the kernel that powers Android.

Several tech giants have tried to introduce their own mobile operating systems, to no avail. Alibaba built AliOS based on Linux but has long stopped updating it. Samsung flirted with its own Tizen, but that operating system is limited to powering a few Internet of Things devices like smart TVs.

Huawei may have a better shot at drumming up developer interest compared to its predecessors. It’s still one of China’s largest smartphone brands despite losing a chunk of its market after the U.S. government cut it off from critical chip suppliers, which could hamper its ability to make cutting-edge phones. HarmonyOS also has a chance to create an alternative for developers who are disgruntled with Android, if Huawei is able to capture their needs.

The U.S. sanctions do not block Huawei from using Android’s open-source software, which major Chinese smartphone makers use to build their own third-party versions of Android. But the ban was like a death knell for Huawei’s consumer markets overseas, as its phones abroad lost access to Google Play services.


Kabuto releases a larger version of its smart suitcase

Kabuto, the French startup that designs and sells smart suitcases, is releasing a new suitcase today. Called the Kabuto Trunk, this is the company’s biggest suitcase to date. Unlike smart suitcases from other brands, this isn’t just a suitcase with a battery in it.

In particular, there’s a fingerprint reader located at the top of the suitcase. You can save up to 10 different fingerprints. After that, it works pretty much like a fingerprint reader on a smartphone — you put your finger on the reader and it unlocks your suitcase.

In this case, it unlocks the zippers. If somebody else is using your suitcase or the battery is dead, you can also open the suitcase with a traditional key.

The Kabuto Trunk features a hard-shell design with a capacity of 95 liters. It has metal bearing wheels and real tires. Users can choose between two batteries — a 10,000mAh battery and a bigger 20,000mAh battery. Basically you have to choose between weight and battery capacity as bigger batteries tend to be heavier.

Customers can also choose to buy a backpack that magnetically attaches to the suitcase. Designed with travel in mind, that backpack is expandable and can double in thickness from 9 liters to 18 liters.


The suitcase currently costs $629 and the backpack $299 — the company plans to raise prices once the Kickstarter campaign is over.

As always with Kabuto products, this isn’t a product for everyone. They tend to be more expensive than what you’d normally pay for a suitcase. But some people like to pack things in a very specific way so that important items remain available. The startup has previously raised $1 million (€900,000) from Frédéric Mazzella, Michel & Augustin, Bpifrance, Fabien Pierlot and others.



US removes Xiaomi’s designation as a Communist Chinese Military Company

Xiaomi, one of China’s high-profile tech firms that fell in the crosshairs of the Trump administration, has been removed from a U.S. government blacklist that designated it as a Communist Chinese Military Company.

The U.S. District Court for the District of Columbia has vacated the Department of Defense’s January designation of Xiaomi as a CCMC, a document filed on May 25 shows.

In February, Xiaomi sued the U.S. government over its inclusion in the military blacklist. In March, the D.C. court granted Xiaomi a preliminary injunction against the DoD designation, which would have forbidden all U.S. persons from purchasing or possessing Xiaomi’s securities, saying the decision was “arbitrary and capricious.” The ruling was made to prevent “irreparable harm” to the Chinese phone maker.

Xiaomi has this to say about getting off the blacklist:

The Company is grateful for the trust and support of its global users, partners, employees and shareholders. The Company reiterates that it is an open, transparent, publicly traded, independently operated and managed corporation. The Company will continue to provide reliable consumer electronics products and services to users, and to relentlessly build amazing products with honest prices to let everyone in the world enjoy a better life through innovative technology.

Xiaomi’s domestic competitor Huawei is still struggling with its inclusion in the U.S. trade blacklist, which bans it from accessing critical U.S. technologies and has crippled its smartphone sales around the world.


Deep Science: Robots, meet world

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This edition, we have a lot of items concerned with the interface between AI or robotics and the real world. Of course most applications of this type of technology are aimed at the real world, but this research is specifically about the inevitable difficulties that occur due to limitations on either side of the real-virtual divide.

One issue that constantly comes up in robotics is how slow things actually go in the real world. Naturally some robots trained on certain tasks can do them with superhuman speed and agility, but for most that’s not the case. They need to check their observations against their virtual model of the world so frequently that tasks like picking up an item and putting it down can take minutes.

What’s especially frustrating about this is that the real world is the best place to train robots, since ultimately they’ll be operating in it. One approach to addressing this is by increasing the value of every hour of real-world testing you do, which is the goal of this project over at Google.

In a rather technical blog post the team describes the challenge of using and integrating data from multiple robots learning and performing multiple tasks. It’s complicated, but they talk about creating a unified process for assigning and evaluating tasks, and adjusting future assignments and evaluations based on that. More intuitively, they create a process by which success at task A improves the robots’ ability to do task B, even if they’re different.

Humans do it — knowing how to throw a ball well gives you a head start on throwing a dart, for instance. Making the most of valuable real-world training is important, and this shows there’s lots more optimization to do there.

Another approach is to improve the quality of simulations so they’re closer to what a robot will encounter when it takes its knowledge to the real world. That’s the goal of the Allen Institute for AI’s THOR training environment and its newest denizen, ManipulaTHOR.

Animated image of a robot navigating a virtual environment and moving items around.

Image Credits: Allen Institute

Simulators like THOR provide an analogue to the real world where an AI can learn basic knowledge like how to navigate a room to find a specific object — a surprisingly difficult task! Simulators balance the need for realism with the computational cost of providing it, and the result is a system where a robot agent can spend thousands of virtual “hours” trying things over and over with no need to plug them in, oil their joints and so on.


Apple Watch gets a motion-controlled cursor with ‘Assistive Touch’

Tapping the tiny screen of the Apple Watch with precision has a certain level of fundamental difficulty, but for some people with disabilities it’s genuinely impossible. Apple has remedied this with a new mode called “Assistive Touch” that detects hand gestures to control a cursor and navigate that way.

The feature was announced as part of a collection of accessibility-focused additions across its products, but Assistive Touch seems like the one most likely to make a splash across the company’s user base.

It relies on the built-in gyroscope and accelerometer, as well as data from the heart rate sensor, to deduce the position of the wrist and hand. Don’t expect it to tell a peace sign from a metal sign just yet, but for now it detects “pinch” (touching the index finger to the thumb) and “clench” (making a loose fist), which can act as basic “next” and “confirm” actions. Incoming calls, for instance, can be quickly accepted with a clench.
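
Apple hasn’t published how its gesture classifier works, and the real thing fuses gyroscope, accelerometer and heart rate data. Purely to illustrate the general idea of spotting a gesture from wrist motion, here’s a toy Python sketch that flags a “clench” when enough acceleration spikes land in a short window; the threshold, window size and sample values are all made up for the example.

```python
from collections import deque

WINDOW = 10        # number of recent samples to consider (hypothetical)
THRESHOLD = 1.8    # acceleration magnitude, in g, that counts as a spike (hypothetical)
MIN_SPIKES = 3     # spikes within the window needed to call it a clench (hypothetical)


def detect_clench(accel_magnitudes):
    """Yield True whenever recent motion looks like a clench gesture.

    accel_magnitudes: acceleration magnitudes sampled from a wrist-worn
    accelerometer. This is a naive threshold detector, not Apple's algorithm.
    """
    window = deque(maxlen=WINDOW)
    for magnitude in accel_magnitudes:
        window.append(magnitude)
        spikes = sum(1 for m in window if m > THRESHOLD)
        yield spikes >= MIN_SPIKES


# Fabricated readings: a quiet wrist, then a burst of movement.
samples = [1.0, 1.0, 1.1, 1.0, 2.1, 2.3, 1.9, 2.2, 1.0, 1.0]
print(list(detect_clench(samples)))
```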

Most impressive, however, is the motion pointer. You can activate it either by selecting it in the Assistive Touch menu, or by shaking your wrist vigorously. It then detects the position of your hand as you move it around, allowing you to “swipe” by letting the cursor linger at the edge of the screen, or interact with things using a pinch or clench.

Needless to say this could be extremely helpful for anyone who only has the one hand available for interacting with the watch. And even for those who don’t strictly need it, the ability to keep one hand on the exercise machine, cane, or whatever else while doing smartwatch things is surely an attractive possibility. (One wonders about the potential of this control method as a cursor for other platforms as well…)

Memoji featuring new accessibility-focused gear.

Image Credits: Apple

Assistive Touch is only one of many accessibility updates Apple shared in this news release; other advances for the company’s platforms include:

  • SignTime, an ASL interpreter video call for Apple Store visits and support
  • Support for new hearing aids
  • Improved VoiceOver-based exploration of images
  • A built-in background noise generator (which I fully intend to use)
  • Replacement of certain buttons with non-verbal mouth noises (for people who have limited speech and mobility)
  • Memoji customizations for people with oxygen tubes, cochlear implants, and soft helmets
  • Featured media in the App Store, Apple TV, Books, and Maps apps from or geared towards people with disabilities

It’s all clustered around Global Accessibility Awareness Day, which is tomorrow, May 20th.


Liquid Instruments raises $13.7M to bring its education-focused 8-in-1 engineering gadget to market

Part of learning to be an engineer is understanding the tools you’ll have to work with — voltmeters, spectrum analyzers, things like that. But why use two, or eight for that matter, where one will do? The Moku:Go combines several commonly used tools into one compact package, saving room on your workbench or classroom while also providing a modern, software-configurable interface. Creator Liquid Instruments has just raised $13.7 million to bring this gadget to students and engineers everywhere.

Students at a table use a Moku Go device to test a circuit board.

Image Credits: Liquid Instruments

The idea behind Moku:Go is largely the same as the company’s previous product, the Moku:Lab. Using a standard input port, a set of FPGA-based tools perform the same kind of breakdowns and analyses of electrical signals as you would get in a larger or analog device. But being digital saves a lot of space that would normally go towards bulky analog components.

The Go takes this miniaturization further than the Lab, doing many of the same tasks at half the weight and with a few useful extra features. It’s intended for use in education or smaller engineering shops where space is at a premium. Combining eight tools into one is a major coup when your bench is also your desk and your file cabinet.

Those eight tools, by the way, are: waveform generator, arbitrary waveform generator, frequency response analyzer, logic analyzer/pattern generator, oscilloscope/voltmeter, PID controller, spectrum analyzer, and data logger. It’s hard to say whether that really adds up to more or less than eight, but it’s definitely a lot to have in a package the size of a hardback book.

You access and configure them using a software interface rather than a bunch of knobs and dials — though let’s be clear, there are good arguments for both. When you’re teaching a bunch of young digital natives, however, a clean point-and-click interface is probably a plus. The UI is actually very attractive; you can see several examples by clicking the instruments on this page, but here’s an example of the waveform generator:

Graphical interface for a waveform generator

Image Credits: Liquid Instruments

Love those pastels.

The Moku:Go currently works with Macs and Windows but doesn’t have a mobile app yet. It integrates with Python, MATLAB, and LabVIEW. Data goes over Wi-Fi.
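
Liquid Instruments ships a Python package for scripting its instruments. The import path and method names below follow the vendor’s ‘moku’ package naming but should be read as assumptions for illustration rather than verified calls, and the IP address is a placeholder; the sketch just shows roughly what grabbing a frame from the oscilloscope over Wi-Fi might look like.

```python
# Hypothetical sketch of scripting a Moku over Wi-Fi from Python.
# Module, class and method names are assumptions for illustration only;
# check Liquid Instruments' documentation for the real API.
from moku.instruments import Oscilloscope  # assumed import path

scope = Oscilloscope("192.168.1.100", force_connect=True)  # placeholder IP
try:
    scope.set_timebase(-1e-3, 1e-3)   # assumed call: view 2 ms around the trigger
    frame = scope.get_data()          # assumed call: fetch one frame of samples
    print(len(frame["ch1"]), "samples on channel 1")
finally:
    scope.relinquish_ownership()      # assumed call: release the instrument
```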

Compared with the Moku:Lab, it has a few perks. A USB-C port instead of a mini, a magnetic power port, a 16-channel digital I/O, optional power supply of up to four channels, and of course it’s half the size and weight. It compromises on a few things — no SD card slot and less bandwidth for its outputs, but if you need the range and precision of the more expensive tool, you probably need a lot of other stuff too.

A person uses a Moku Go device at a desk.

Image Credits: Liquid Instruments

Since the smaller option costs $500 to start (“a price comparable to a textbook”… yikes) compared with the big one’s $3,500, there are major savings involved. And it’s definitely cheaper than buying all those instruments individually.

The Moku:Go is “targeted squarely at university education,” said Liquid Instruments VP of marketing Doug Phillips. “Professors are able to employ the device in the classroom and individuals, such as students and electronic engineering hobbyists, can experiment with it on their own time. Since its launch in March, the most common customer profile has been students purchasing the device at the direction of their university.”

About a hundred professors have signed on to use the device as part of their Fall classes, and the company is working with other partners in universities around the world. “There is a real demand for portable, flexible systems that can handle the breadth of four years of curriculum,” Phillips said.

Production starts in June (samples are out to testers), the rigors and costs of which likely prompted the recent round of funding. The $13.7M comes from existing investors Anzu Partners and ANU Connect Ventures, and new investors F1 Solutions and Moelis Australia’s Growth Capital Fund. It’s a convertible note “in advance of an anticipated Series B round in 2022,” Phillips said. It’s a larger amount than they intended to raise at first, and the note nature of the round is also not standard, but given the difficulties faced by hardware companies over the last year, some irregularities are probably to be expected.

No doubt the expected B round will depend considerably on the success of the Moku:Go’s launch and adoption. But this promising product looks as if it might be a commonplace item in thousands of classrooms a couple years from now.


Everything Google announced at I/O today

This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.

Android 12 gets a fresh new look and some quality of life features

We’ve known Android 12 was on its way for months, but today was our first real look at the next big change for the world’s most popular operating system. A new look, called Material You (yes), focuses on users, apps, and things like time of day or weather to change the UI’s colors and other aspects dynamically. Some security features like new camera and microphone use indicators are coming, as well as some “private compute core” features that use AI processes on your phone to customize replies and notifications. There’s a beta out today for the adventurous!

Wow, Android powers 3 billion devices now

Subhed says it all (but read more here). Up from 2 billion in 2017.

Smart Canvas smushes Docs, productivity, and video calls together

Millions of people and businesses use Google’s suite of productivity and collaboration tools, but the company felt it would be better if they weren’t so isolated. Now with Smart Canvas you can have a video call as you work on a shared doc together and bring in information and content from your Drive and elsewhere. Looks complicated, but potentially convenient.

AI conversations get more conversational with LaMDA

It’s a little too easy to stump AIs if you go off script, asking something in a way that to you seems normal but to the language model is totally incomprehensible. Google’s LaMDA is a new natural language processing technique that makes conversations with AI models more resilient to unusual or unexpected queries, making it more like a real person and less like a voice interface for a search function. They demonstrated it by showing conversations with anthropomorphized versions of Pluto and a paper airplane. And yes, it was exactly as weird as it sounds.

Google built a futuristic 3D video calling booth

One of the most surprising things at the keynote had to be Project Starline, a high-tech 3D video call setup that uses Google’s previous research and Lytro DNA to show realistic 3D avatars of people on both sides of the system. It’s still experimental but looks very promising.

Wear OS gets a revamp and lots of health-focused apps


Few people want to watch a movie on their smartwatch, but lots of people like to use it to track their steps, meditation, and other health-related practices. Wear OS is getting a dose of Fitbit DNA, with integrated health-tracking features and a lot of third-party apps like Calm and Flo.

Samsung and Google announce a unified smartwatch platform

These two mobile giants have been fast friends in the phone world for years, but when it comes to wearables, they’ve remained rivals. In the face of Apple’s utter dominance in the smartwatch space, however, the two have put aside their differences and announced they’ll work on a “unified platform” so developers can make apps that work on both Tizen and Wear OS.

And they’re working together on foldables too

Apparently Google and Samsung realized that no one is going to buy foldable devices unless they do some really cool things, and that collaboration is the best way forward there. So the two companies will also be working together to improve how folding screens interact with Android.

Android TV hits 80 million devices and adds phone remote


The smart TV space is a competitive one, and after a few false starts Google has really made it happen with Android TV, which the company announced had reached 80 million monthly active devices — putting it, Roku, and Amazon (the latter two with around 50 million monthly active accounts) all in the same league. The company also showed off a powerful new phone-based remote app that will (among other things) make putting in passwords way better than using the d-pad on the clicker. Developers will be glad to hear there’s a new Google TV emulator and Firebase Test Lab will have Android TV support.

Your Android phone is now (also) your car key

Well, assuming you have a really new Android device with a UWB chip in it. Google is working with BMW first, and other automakers soon most likely, to make a new method for unlocking the car when you get near it, or exchanging basic commands without the use of a fob or Bluetooth. Why not Bluetooth you ask? Well, Bluetooth is old. UWB is new.

Vertex collects machine learning development tools in one place

Google and its sibling companies are both leaders in AI research and popular platforms for others to do their own AI work. But its machine learning development tools have been a bit scattershot — useful but disconnected. Vertex is a new development platform for enterprise AI that puts many of these tools in one place and integrates closely with optional services and standards.

There’s a new generation of Google’s custom AI chips

Google does a lot of machine learning stuff. Like, a LOT a lot. So they are constantly working to make better, more efficient computing hardware to handle the massive processing load these AI systems create. TPUv4 is the latest, twice as fast as the old ones, and will soon be packaged into 4,096-strong pods. Why 4,096 and not an even 4,000? The same reason any other number exists in computing: powers of 2.

And they’re powering some new Photos features including one that’s horrifying

A “cinematic photo” generated by Google Photos. NO THANK YOU

Google Photos is a great service, and the company is trying to leverage the huge library of shots most users have to find patterns like “selfies with the family on the couch” and “traveling with my lucky hat” as fun ways to dive back into the archives. Great! But they’re also taking two photos taken a second apart and having an AI hallucinate what comes between them, leading to a truly weird looking form of motion that shoots deep, deep into the uncanny valley, from which hopefully it shall never emerge.

Forget your password? Googlebot to the rescue

Google’s “AI makes a hair appointment for you” service Duplex didn’t exactly set the world on fire, but the company has found a new way to apply it. If you forget your password, Duplex will automatically fill in your old password, pick a new one and let you copy it before submitting it to the site, all by interacting with the website’s normal reset interface. It’s only going to work on Twitter and a handful of other sites via Chrome for now, but hey, if it happens to you a lot, maybe it’ll save you some trouble.

Enter the Shopping Graph


The aged among our readers may remember Froogle, Google’s ill-fated shopping interface. Well, it’s back… kind of. The plan is to include lots of product information, from price to star rating, availability and other info, right in the Google interface when you search for something. It sucks up this information from retail sites, including whether you have something in your cart there. How all this benefits anyone more than Google is hard to imagine, but naturally they’re positioning it as wins all around. Especially for new partner Shopify. (Me, I use DuckDuckGo.)

Flutter cross-platform devkit gets an update

A lot of developers have embraced Google’s Flutter cross-platform UI toolkit. The latest version, announced today, adds some safety settings, performance improvements, and workflow updates. There’s lots more coming, too.

Firebase gets an update too

Popular developer platform Firebase got a bunch of new and updated features as well. Remote Config gets a nice update allowing developers to customize the app experience to individual user types, and App Check provides a basic level of security against external threats. There’s plenty here for devs to chew on.

The next version of Android Studio is Arctic Fox


The beta for the next version of Google’s Android Studio environment is coming soon, and it’s called Arctic Fox. It’s got a brand new UI building toolkit called Jetpack Compose, and a bunch of accessibility testing built in to help developers make their apps more accessible to people with disabilities. Connecting to devices to test on them should be way easier now too. Oh, and there’s going to be a version of Android Studio for Apple Silicon.


Google is making a 3D, life-size video calling booth

Google is working on a video calling booth that uses 3D imagery on a 3D display to create a lifelike image of the people on both sides. While it’s still experimental, “Project Starline” builds on years of research and acquisitions, and could be the core of a more personal-feeling video meeting in the near future.

The system was only shown via video of unsuspecting participants, who were asked to enter a room with a heavily obscured screen and camera setup. Then the screen lit up with a video feed of a loved one, but in a way none of them expected:

“I could feel her and see her, it was like this 3D experience. It was like she was here.”

“I felt like I could really touch him!”

“It really, really felt like she and I were in the same room.”

CEO Sundar Pichai explained that this “experience” was made possible with high-resolution cameras and custom depth sensors, almost certainly related to Google research projects into essentially converting videos of people and locations into interactive 3D scenes.

The cameras and sensors — probably a dozen or more hidden around the display — capture the person from multiple angles and figure out their exact shape, creating a live 3D model of them. This model and all the color and lighting information is then (after a lot of compression and processing) sent to the other person’s setup, which shows it in convincing 3D. It even tracks their heads and bodies to adjust the image to their perspective. (There’s a bit more on an early version of the technique here.)

But 3D TVs have more or less fallen by the wayside; turns out no one wants to wear special glasses for hours at a time, and the quality on glasses-free 3D was generally pretty bad. So what’s making this special 3D image?

Pichai said “we have developed a breakthrough light field display,” probably with the help of the people and IP it scooped up from Lytro, the light field camera company that didn’t manage to get its own tech off the ground and dissolved in 2018.

Light field cameras and displays create and show 3D imagery using a variety of techniques that are very difficult to explain or show in 2D. The startup Looking Glass has made several that are extremely arresting to view in person, showing 3D models and photographic scenes that truly look like tiny holograms.

Whether Google’s approach is similar or different, the effect appears to be equally impressive, as the participants indicate. They’ve been testing this internally and are getting ready to send out units to partners in various industries (such as medicine) where the feeling of a person’s presence makes a big difference.

At this point Project Starline is still very much a prototype, and probably a ridiculously expensive one — so don’t expect to get one in your home any time soon. But it’s not wild to think that a consumer version of this light field setup may be available down the line. Google promises to share more later this year.

#augmented-reality, #gadgets, #google, #google-i-o-2021, #google-io-2021, #hardware, #light-field, #science, #tc


Watch Google I/O keynote live right here

After skipping a year, Google is holding a keynote for its developer conference Google I/O. While it’s going to be an all-virtual event, there should be plenty of announcements, new products and new features for Google’s ecosystem.

The conference starts at 10 AM Pacific Time (1 PM on the East Coast, 6 PM in London, 7 PM in Paris) and you can watch the live stream right here on this page.

Rumor has it that Google should give us a comprehensive preview of Android 12, the next major release of Google’s operating system. There could also be some news when it comes to Google Assistant, Home/Nest devices, Wear OS and more.

#apps, #developer, #gadgets, #google, #google-i-o, #google-i-o-2021, #google-io-2021, #mobile


Alba Orbital’s mission to image the Earth every 15 minutes brings in $3.4M seed round

Orbital imagery is in demand, and if you think having daily images of everywhere on Earth is going to be enough in a few years, you need a lesson in ambition. Alba Orbital is here to provide one, aiming to deliver Earth observation at 15-minute intervals rather than hours or days — and it just raised $3.4M to get its next set of satellites into orbit.

Alba attracted our attention at Y Combinator’s latest demo day; I was impressed with the startup’s accomplishment of already having 6 satellites in orbit, which is more than most companies with space ambition ever get. But it’s only the start for the company, which will need hundreds more to begin to offer its planned high-frequency imagery.

The Scottish company has spent the last few years in prep and R&D, pursuing the goal, which some must have thought laughable, of creating a solar-powered Earth observation satellite that weighs in at less than one kilogram. The joke’s on the skeptics, however — Alba has launched a proof of concept and is ready to send the real thing up as well.

Little more than a flying camera with the minimum of storage, communication, power, and movement, the sub-kilogram Unicorn-2 is about the size of a soda can, with paperback-size solar panel wings, and costs in the neighborhood of $10,000. It should be able to capture imagery at up to 10-meter resolution, good enough to see things like buildings, ships, crops, even planes.

A member of the Alba Orbital team holds a Unicorn-2 satellite.

Image Credits: Alba Orbital

“People thought we were idiots. Now they’re taking it seriously,” said Tom Walkinshaw, founder and CEO of Alba. “They can see it for what it is: a unique platform for capturing datasets.”

Indeed, although the idea of daily orbital imagery like Planet’s once seemed excessive, in some situations it’s quite clearly not enough.

“The California case is probably wildfires,” said Walkinshaw (and it always helps to have a California case). “Having an image once a day of a wildfire is a bit like having a chocolate teapot… not very useful. And natural disasters like hurricanes, flooding is a big one, transportation as well.”

Walkinshaw noted that the company was bootstrapped and profitable before taking on the task of launching dozens more satellites, something the seed round will enable.

“It gets these birds in the air, gets them finished and shipped out,” he said. “Then we just need to crank up the production rate.”

Alba Orbital founder Tom Walkinshaw next to a Y Combinator sign.

Image Credits: Alba Orbital

When I talked to Walkinshaw via video call, ten or so completed satellites in their launch shells were sitting on a rack behind him in the clean room, and more were in the process of assembly. Aiding in the scaling effort is new investor James Park, founder and CEO of Fitbit — definitely someone who knows a little bit about bringing hardware to market.

Interestingly, the next batch to go to orbit (perhaps as soon as in a month or two, depending on the machinations of the launch provider) will be focusing on nighttime imagery, an area Walkinshaw suggested was undervalued. But as orbital thermal imaging startup Satellite Vu has shown, there’s immense appetite for things like energy and activity monitoring, and nighttime observation is a big part of that.

The seed round will get the next few rounds of satellites into space, and after that Alba will be working on scaling manufacturing to produce hundreds more. Once those start going up it can demonstrate the high-cadence imaging it is aiming to produce — for now it’s impossible to do so, though Alba already has customers lined up to buy the imagery it does get.

The round was led by Metaplanet Holdings, with participation by Y Combinator, Liquid2, Soma, Uncommon Denominator, Zillionize, and numerous angels.

As for competition, Walkinshaw welcomes it, but feels secure that he and his company have more time and work invested in this class of satellite than anyone else in the world — a major obstacle for anyone who wants to do battle. It’s more likely that companies will, as Alba has done, pursue a distinct product complementary to those already offered or in the works.

“Space is a good place to be right now,” he concluded.

#aerospace, #funding, #fundings-exits, #gadgets, #hardware, #planet, #recent-funding, #space, #startups, #tc, #y-combinator


Xbox teams up with Tencent’s Honor of Kings maker TiMi Studios

TiMi Studios, one of the world’s most lucrative game makers and part of Tencent’s gargantuan digital entertainment empire, said Thursday that it has struck a strategic partnership with Xbox.

The succinct announcement did not mention whether the tie-up is for content development or Xbox’s console distribution in China, but said more details about the “deep partnership” will be unveiled by the end of this year.

Established in 2008 within Tencent, TiMi is behind popular mobile titles such as Honor of Kings and Call of Duty Mobile. In 2020, Honor of Kings alone generated close to $2.5 billion in player spending, according to market research company Sensor Tower. In all, TiMi pocketed $10 billion in revenue last year, according to a Reuters report citing people with knowledge of the matter.

The partnership could help TiMi build a name globally by converting its mobile titles into console releases for Microsoft’s Xbox. TiMi has been trying to strengthen its own brand and distinguish itself from other Tencent gaming clusters, such as its internal rival LightSpeed & Quantum Studio, which is known for PUBG Mobile.

TiMi operates a branch in Los Angeles and said in January 2020 that it planned to “triple” its headcount in North America, adding that building high-budget, high-quality AAA mobile games was core to its global strategy. There are clues in a recruitment notice posted recently by a TiMi employee: The unit is hiring developers for an upcoming AAA title that is benchmarked against the Oasis, a massively multiplayer online game that evolves into a virtual society in the novel and film Ready Player One. The Oasis is played via a virtual reality headset.

Xbox’s latest Series X and Series S are to debut in China imminently, though the launch doesn’t appear to be linked to the Tencent deal. Sony’s PlayStation 5 just hit the shelves in China in late April. The Nintendo Switch is distributed in China through a partnership with Tencent sealed in 2019.

Chinese console players often resort to grey markets for foreign editions because the list of Chinese titles approved by local authorities is tiny compared to what’s available outside the country. But these grey markets, both online and offline, are susceptible to ongoing clampdown. Most recently in March, product listings by multiple top sellers of imported console games vanished from Alibaba’s Taobao marketplace.

#asia, #call-of-duty, #china, #gadgets, #gaming, #honor-of-kings, #los-angeles, #nintendo, #nintendo-switch, #player, #ready-player-one, #tencent, #video-games, #video-gaming, #virtual-reality, #xbox


The Last Gameboard raises $4M to ship its digital tabletop gaming platform

The tabletop gaming industry has exploded over the last few years as millions discovered or rediscovered its joys, but it too is evolving — and The Last Gameboard hopes to be the venue for that evolution. The digital tabletop platform has progressed from crowdfunding to a $4M seed round, and, having partnered with some of the biggest names in the industry, plans to ship by the end of the year.

As the company’s CEO and co-founder Shail Mehta explained in a TC Early Stage pitch-off earlier this year, The Last Gameboard is a 16-inch square touchscreen device with a custom OS and a sophisticated method of tracking game pieces and hand movements. The idea is to provide a digital alternative to physical games where that’s practical, and do so with the maximum benefit and minimum compromise.

If the pitch sounds familiar… it’s been attempted once or twice before. I distinctly remember being impressed by the possibilities of D&D on an original Microsoft Surface… back in 2009. And I played with another at PAX many years ago. Mehta said that until very recently the technology simply wasn’t there and the market wasn’t ready.

“People tried this before, but it was either way too expensive or they didn’t have the audience. And the tech just wasn’t there; they were missing that interaction piece,” she explained, and certainly any player will recognize that the, say, iPad version of a game definitely lacks physicality. The advance her company has achieved is in making the touchscreen able to detect not just taps and drags, but game pieces, gestures and movements above the screen, and more.

“What Gameboard does, no other existing touchscreen or tablet on the market can do — it’s not even close,” Mehta said. “We have unlimited touch, game pieces, passive and active… you can use your chess set at home, lift up and put down the pieces, we track it the whole time. We can do unique identifiers with tags and custom shapes. It’s the next step in how interactive surfaces can be.”

It’s accomplished via a not particularly exotic method, which saves the Gameboard from the fate of the Surface and its successors, which cost several thousand dollars due to their unique and expensive makeups. Mehta explained that they work strictly with ordinary capacitive touch data, albeit at a higher framerate than is commonly used, and then use machine learning to characterize and track object outlines. “We haven’t created a completely new mechanism, we’re just optimizing what’s available today,” she said.
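
To make that concrete, here’s a toy sketch of the general class of approach Mehta describes — segmenting a raw capacitance frame into contact “blobs” and extracting simple features a model could classify. It’s purely illustrative; Gameboard’s actual framerate, features, and model aren’t public.

```python
# Illustrative only: pull object outlines out of a low-resolution capacitive frame.
# A real system would feed blob features from many consecutive high-framerate
# frames into a trained classifier that tells fingers apart from tagged pieces.
def find_blobs(frame, threshold=0.3):
    """Return connected regions of a 2D capacitance grid above a threshold."""
    rows, cols = len(frame), len(frame[0])
    seen, blobs = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or frame[r][c] < threshold:
                continue
            stack, cells = [(r, c)], []
            while stack:                       # flood-fill one touch/piece region
                y, x = stack.pop()
                if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                    continue
                if frame[y][x] < threshold:
                    continue
                seen.add((y, x))
                cells.append((y, x))
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
            blobs.append(cells)
    return blobs

def describe(blob):
    """Crude features (area, centroid) a classifier could use to distinguish,
    say, a fingertip from the base of a chess piece."""
    area = len(blob)
    cy = sum(y for y, _ in blob) / area
    cx = sum(x for _, x in blob) / area
    return {"area": area, "centroid": (round(cy, 1), round(cx, 1))}

# A tiny 5x6 frame with two contacts: a 1-cell tap and a 4-cell piece footprint.
frame = [
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.0, 0.0, 0.6, 0.7],
    [0.0, 0.0, 0.0, 0.0, 0.8, 0.6],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
print([describe(b) for b in find_blobs(frame)])
```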

The Last Gameboard's interface, showing games available to play on the tablet's surface.

Image Credits: The Last Gameboard

At $699 for the Gameboard it’s not exactly an impulse buy, either, but the fact of the matter is people spend a lot of money on gaming, with some titles running into multiple hundreds of dollars for all the expansions and pieces. Tabletop is now a more than $20 billion industry. If the experience is as good as they hope to make it, this is an investment many a player will not hesitate (much, anyway) to make.

Of course, the most robust set of gestures and features won’t matter if all they had on the platform were bargain-bin titles and grandpa’s-parlor favorites like Parcheesi. Fortunately The Last Gameboard has managed to stack up some of the most popular tabletop companies out there, and aims to have the definitive digital edition for their games.

Asmodee Digital is probably the biggest catch, having adapted many of today’s biggest hits, from modern classics Catan and Carcassonne to crowdfunded breakout hit Scythe and immense dungeon-crawler Gloomhaven. The full list of partners right now includes Dire Wolf Digital, Nomad Games, Auroch Digital, Restoration Games, Steve Jackson Games, Knights of Unity, Skyship Studios, EncounterPlus, PlannarAlly, and Sugar Gamers, as well as individual creators and developers.

Animation of two players grabbing dots on a screen and moving them around.

Image Credits: The Last Gameboard

These games may be best played in person, but have successfully transitioned to digital versions, and one imagines that a larger screen and inclusion of real pieces could make for an improved hybrid experience. There will be options both to purchase games individually, like you might on mobile or Steam, or to subscribe to an unlimited access model (pricing to be determined on both).

It would also be something that the many gaming shops and playing venues might want to have a couple on hand. Testing out a game in-store and then buying a few to stock, or convincing consumers to do the same, could be a great sales tactic for all involved.

In addition to providing a unique and superior digital version of a game, the device can connect with others to trade moves, send game invites, and all that sort of thing. The whole OS, Mehta said, “is alive and real. If we didn’t own it and create it, this wouldn’t work.” This is more than a skin on top of Android with a built-in store, but there’s enough shared that Android-based ports will be able to be brought over with little fuss.

Head of content Lee Allentuck suggested that the last couple years (including the pandemic) have started to change game developers’ and publishers’ minds about the readiness of the industry for what’s next. “They see the digital crossover is going to happen — people are playing online board games now. If you can be part of that new trend at the very beginning, it gives you a big opportunity,” he said.

CEO Shail Mehta (center) plays Stop Thief on the Gameboard with others on the team.

Allentuck, who previously worked at Hasbro, said there’s widespread interest in the toy and tabletop industry to be more tech-forward, but there’s been a “chicken and egg scenario,” where there’s no market because no one innovates, and no one innovates because there’s no market. Fortunately things have progressed to the point where a company like The Last Gameboard can raise a $4M seed round to help cover the cost of creating that market.

The round was led by TheVentureCity, with participation from SOSV, Riot Games, Conscience VC, Corner3 VC, and others. While the company didn’t go through HAX, SOSV’s involvement has that HAX-y air, and partner Garrett Winther gives a glowing recommendation of its approach: “They are the first to effectively tie collaborative physical and digital gameplay together while not losing the community, storytelling or competitive foundations that we all look for in gaming.”

Mehta noted that the pandemic nearly cooked the company by derailing their funding, which was originally supposed to come through around this time last year when everything went pear-shaped. “We had our functioning prototype, we had filed for a patent, we got the traction, we were gonna raise, everything was great… and then COVID hit,” she recalled. “But we got a lot of time to do R&D, which was actually kind of a blessing. Our team was super small so we didn’t have to lay anyone off — we just went into survival mode for like six months and optimized, developed the platform. 2020 was rough for everyone, but we were able to focus on the core product.”

Now the company is poised to start its beta program over the summer and (following feedback from that) ship its first production units before the holiday season when purchases like this one seem to make a lot of sense.

(This article originally referred to this raise as The Last Gameboard’s round A — it’s actually the seed. This has been updated.)

#artificial-intelligence, #augmented-reality, #funding, #fundings-exits, #gadgets, #gaming, #hardware, #tabletop, #tabletop-gaming, #tc


CMU researchers show potential of privacy-preserving activity tracking using radar

Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room last got cleaned or whether the bins already got taken out?

Or — for an altogether healthier use-case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin cycles on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras installed inside your home.

Another fascinating bit of research from Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is of course a horrible privacy risk. Which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras”, as they put it, data-sets for training AI models to recognize different human activities from RF signals are not readily available, in the way that visual data for training other types of AI models is.

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.

The results can be seen in this video — where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats. Purely from its ability to interpret the mmWave signal the movements generate — and purely having been trained on public video data. 
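
To give a flavor of that cross-domain idea, here’s a heavily simplified sketch: pretend a pose estimator has already extracted 3D joint positions from public video, then convert the joint motion into doppler-style velocity spectra that can serve as training data for an activity classifier. The geometry, constants, and binning are assumptions of this sketch — the CMU pipeline models radar reflection in far more detail.

```python
# Toy "video -> motion -> doppler-like spectra" sketch, not the CMU implementation.
import math

RADAR_POS = (0.0, 0.0, 0.0)                          # virtual sensor at the origin
VELOCITY_BINS = [-2.0, -1.0, -0.3, 0.3, 1.0, 2.0]    # m/s bin edges (assumed)

def radial_velocity(p_prev, p_curr, dt):
    """Component of a joint's velocity along the line of sight to the radar."""
    r_prev = math.dist(p_prev, RADAR_POS)
    r_curr = math.dist(p_curr, RADAR_POS)
    return (r_curr - r_prev) / dt

def doppler_frame(joints_prev, joints_curr, dt=1 / 30):
    """Histogram joint radial velocities into a doppler-style spectrum."""
    spectrum = [0] * (len(VELOCITY_BINS) + 1)
    for a, b in zip(joints_prev, joints_curr):
        v = radial_velocity(a, b, dt)
        idx = sum(v > edge for edge in VELOCITY_BINS)
        spectrum[idx] += 1
    return spectrum

# Two consecutive "video frames" of three joints: both hands moving toward the
# sensor (negative radial velocity), torso roughly still.
prev = [(0.5, 1.2, 2.0), (-0.5, 1.2, 2.0), (0.0, 1.0, 2.0)]
curr = [(0.5, 1.2, 1.9), (-0.5, 1.2, 1.9), (0.0, 1.0, 2.0)]
print(doppler_frame(prev, curr))   # spectra like this become classifier inputs
```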

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course. But things are already moving on that front: Google has been dipping its toe in via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sensing to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use-cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in building to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worrisome than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the b2b market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance — one that could be a better fit for privacy-sensitive consumer connected devices such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. That said, it’s hard to argue that the data radar generates would be as sensitive as equivalent visual data were it to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well… ”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like YouTube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”

And while RF signal does reflect, and does so to different degrees off different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low end CPUs (no deep learning or anything).”

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.

 

#ai-assistant, #artificial-intelligence, #aws, #carnegie-mellon-university, #chris-harrison, #cmu, #gadgets, #google, #privacy, #radar, #real-time-analytics, #science, #smart-devices, #smart-speaker


Exeger takes $38M to ramp up production of its flexible solar cells for self-powered gadgets

Sweden’s Exeger, which for over a decade has been developing flexible solar cell technology (called Powerfoyle) that it touts as efficient enough to power gadgets solely with light, has taken in another tranche of funding to expand its manufacturing capabilities by opening a second factory in the country.

The $38 million raise comprises $20M in debt financing from Swedbank and the Swedish Export Credit Corporation (SEK) — a $12M loan from Swedbank (partly underwritten by the Swedish Export Credit Agency (EKN) under its guarantee of investment credits for companies with innovations) and an $8M loan from SEK (partly underwritten by the pan-EU European Investment Fund (EIF)) — along with $18M through a directed share issue to Ilija Batljan Invest AB.

The share issue of 937,500 shares was priced at $19.20 per share — which corresponds to a pre-money valuation of $860M for the solar cell maker.

Back in 2019 SoftBank also put $20M into Exeger, in two investments of $10M each — entering a strategic partnership to accelerate the global rollout of its tech and further extending its various investments in solar energy.

The Swedish company has also previously received a loan from the Swedish Energy Agency, in 2014, to develop its solar cell tech. But this latest debt financing round is its first on commercial terms (albeit partly underwritten by EKN and EIF).

Exeger says its solar cell tech is the only one that can be printed in free-form and different colors, meaning it can “seamlessly enhance any product with endless power”, as its PR puts it.

So far two devices have integrated the Powerfoyle tech: a bike helmet with an integrated safety taillight (by POC), and a pair of wireless headphones (by Urbanista). Neither has been commercially launched yet, but both are slated to go on sale next month.

Exeger says its planned second factory in Stockholm will allow it to increase its manufacturing capacity tenfold by 2023, helping it target a broader array of markets sooner and accelerating its goal of mass adoption of its tech.

Its main target markets for the novel solar cell technology currently include consumer electronics, smart home, smart workplace, and IoT.

More device partnerships are slated for this year.

Exeger’s Powerfoyle solar cell tech integrated into a pair of Urbanista headphones (Image Credits: Exeger/Urbanista)

“We don’t label our rounds but take a more pragmatic view on fundraising,” said Giovanni Fili, founder and CEO. “Developing a new technology, a new energy source, as well as laying the foundation for a new industry takes time. Thus, a company like ours requires long-term strategic investors that all buy into the vision as well as the overall strategy. We have spent a lot of time and energy on this, and it has paid off. It has given the company the resources required, both time and money, to bring an invention to a commercial launch, which is where we are today.”

Fili added that it’s chosen to raise debt financing now “because we can”.

“The same answer as when asked why we build a new factory in Stockholm, Sweden, rather than abroad. We have always said that once commercial, we will start leveraging the balance sheet when securing funds for the next factory. Thanks to our long-standing relationship with Swedbank and SEK, as well as the great support of the Swedish government through EKN underwriting part of the loans, we were able to move this forward,” he said.

Discussing the forthcoming two debut gizmos, the POC Omne Eternal helmet and the Urbanista Los Angeles headphones — which will both go sale in June — Fili says interest in the self-powered products has “surpassed all our expectations”.

“Any product which integrates Powerfoyle is able to charge under all forms of light, whether from indoor lamps or natural outdoor light. The stronger the light, the faster it charges. The POC helmet, for example, doesn’t have a USB port to power the safety light because the ambient light will keep it charging, cycling or not,” he tells TechCrunch.

“The Urbanista Los Angeles wireless headphones have already garnered tremendous interest online. Users can spend one hour outdoors with the headphones and gain three hours of battery time. This means most users will never need to worry about charging. As long as you have our product in light, any light, it will constantly charge. That’s one of the key aspects of our technology, we have designed and engineered the solar cell to work wherever people need it to work.”

“This is the year of our commercial breakthrough,” he added in a statement. “The phenomenal response from the product releases with POC and Urbanista are clear indicators this is the perfect time to introduce self-powered products to the world. We need mass scale production to realize our vision which is to touch the lives of a billion people by 2030, and that’s why the factory is being built now.”

 

#consumer-electronics, #energy, #europe, #european-investment-fund, #european-union, #exeger, #fundings-exits, #gadgets, #greentech, #poc, #powerfoyle, #softbank, #solar-cell, #solar-energy, #stockholm, #sweden, #urbanista, #wireless-headphones


Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80M round the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11M to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform certain complex calculations fundamental to machine learning in a flash — literally. Instead of using charge, logic gates, and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. It’s been possible for years, but until recently, getting it to work at scale — and for a practical, indeed highly valuable, purpose — has not been.

Prototype to product

It wasn’t entirely clear in 2018 when Lightmatter was getting off the ground whether this tech would be something they could sell to replace more traditional compute clusters like the thousands of custom units companies like Google and Amazon use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships, delaying projects, and other things, the pandemic put Lightmatter months behind schedule, but they came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

A rack of Lightmatter servers.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it…. we’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, now it has taken a more solid form in the Envise, which they call a “general purpose photonic AI accelerator.” It’s a server unit designed to fit into normal datacenter racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (It’s limited to certain types of calculations, namely linear algebra for now, and not complex logic, but this type of math happens to be a major component of machine learning processes.)

Harris was reticent to provide exact numbers on performance improvements, but more because those improvements are increasing than that they’re not impressive enough. The website suggests it’s 5x faster than an NVIDIA A100 unit on a large transformer model like BERT, while using about 15 percent of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy required to use it. Either better performance or lower energy cost would be great — both together is irresistible.

It’s Lightmatter’s initial plan to test these units with its most likely customers by the end of 2021, refining it and bringing it up to production levels so it can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in datacenters, where as much as 10 percent of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

A Lightmatter chip with its logo on the side.

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And if the light waves don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, adding another makes three — with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple, or 20x the performance makes for a nice roadmap.
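
That scaling argument is easy to sanity-check with back-of-the-envelope arithmetic. The snippet below just multiplies a single-wavelength rate by the number of colors — the base figure borrows the “million calculations a second” number used above purely for illustration, not an actual Envise spec.

```python
# Back-of-the-envelope illustration of wavelength multiplexing: each added laser
# color is an independent channel over the same photonic hardware, so throughput
# scales roughly linearly. Numbers are purely illustrative.
BASE_OPS_PER_SEC = 1_000_000   # hypothetical single-wavelength rate from the text

def multiplexed_throughput(num_wavelengths: int) -> int:
    return BASE_OPS_PER_SEC * num_wavelengths

for colors in (1, 2, 3, 20):
    print(f"{colors:2d} wavelength(s): {multiplexed_throughput(colors):>12,} ops/s")
```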

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many small individual computers, thousands and thousands of them, working in perfect synchrony. In order for them to do so, they need to communicate constantly to make sure each core knows what other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem building an exa-scale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are actually fairly bulky at the scale at which chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)

The optical interconnect board is called Passage, and will be part of a future generation of its Envise products — but as with the color calculation, it’s for a future generation. 5-10x performance at a fraction of the power will have to satisfy their potential customers for the present.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own datacenters and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support, and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners and Spark Capital. It brings their total raised to about $113 million: there was the initial $11M A round, then GV hopping on with a $22M A-1, then this $80M.

Although there are other companies pursuing photonic computing and its potential applications in neural networks especially, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market that is in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that the demand for AI-related computing is increasing far faster than existing technology can provide it, except with ever larger datacenters.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris hopefully suggested earlier, what his company has made is potentially transformative for the industry — and if so, there’s no hurry. If there’s a gold rush, they’ve already staked their claim.

#artificial-intelligence, #funding, #fundings-exits, #gadgets, #hardware, #lightmatter, #machine-learning, #photonics, #recent-funding, #startups, #tc


Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55M round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.

Radar’s advantages lie in its superior range, and in the fact that its radio frequency beams can pass through things like raindrops, snow, and fog — making it crucial for perceiving the environment during inclement weather. Lidar and ordinary visible light cameras can be totally flummoxed by these common events, so it’s necessary to have a backup.

But radar’s major disadvantage is that, due to the wavelengths and how the antennas work, it can’t image things in detail the way lidar can. You tend to get very precisely located blobs rather than detailed shapes. It still provides invaluable capabilities in a suite of sensors, but if anyone could add a bit of extra fidelity to its scans, it would be that much better.

That’s exactly what Oculii does — take an ordinary radar and supercharge it. The company claims a 100x improvement to spatial resolution accomplished by handing over control of the system to its software. Co-founder and CEO Steven Hong explained in an email that a standard radar might have, for a 120 degree field of view, a 10 degree spatial resolution, so it can tell where something is with a precision of a few degrees on either side, and little or no ability to tell the object’s elevation.

Some are better, some worse, but for the purposes of this example that amounts to an effectively 12×1 resolution. Not great!

Handing over control to the Oculii system, however, which intelligently adjusts the transmissions based on what it’s already perceiving, could raise that to a 0.5° horizontal x 1° vertical resolution, giving it an effective resolution of perhaps 120×10. (Again, these numbers are purely for explanatory purposes and aren’t inherent to the system.)
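
Worked through, the improvement claim is just the ratio of resolvable angular cells before and after — using the explanatory figures above, not real specs:

```python
# The article's illustrative numbers, worked through: resolution improvement here
# is the ratio of resolvable angular cells. These figures are explanatory only,
# not Oculii's actual specifications.
standard_cells = 12 * 1      # ~12 azimuth cells, essentially no elevation info
enhanced_cells = 120 * 10    # ~120 azimuth x 10 elevation cells with the software layer

improvement = enhanced_cells / standard_cells
print(f"{standard_cells} -> {enhanced_cells} cells, ~{improvement:.0f}x more detail")
# 12 -> 1200 cells, ~100x more detail -- the "100x spatial resolution" claim
```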

That’s a huge improvement and results in the ability to see that something is, for example, two objects near each other and not one large one, or that an object is smaller than another near it, or — with additional computation — that it is moving one way or the other at such and such a speed relative to the radar unit.

Here’s a video demonstration of one of their own devices, showing considerably more detail than one would expect:

Exactly how this is done is part of Oculii’s proprietary magic, and Hong did not elaborate much on how exactly the system works. “Oculii’s sensor uses AI to adaptively generate an ‘intelligent’ waveform that adapts to the environment and embed information across time that can be leveraged to improve the resolution significantly,” he said. (Integrating information over time is what gives it the “4D” moniker, by the way.)

Here’s a little sizzle reel that gives a very general idea:

Autonomous vehicle manufacturers have not yet hit on any canonical set of sensors that AVs should have, but something like Oculii could give radar a more prominent place — its limitations sometimes mean it is relegated to emergency braking detection at the front or some such situation. With more detail and more data, radar could play a larger role in AV decision-making systems.

The company is definitely making deals — it’s working with Tier-1s and OEMs, one of which (Hella) is an investor, which gives a sense of confidence in Oculii’s approach. It’s also working with radar makers and has some commercial contracts looking at a 2024-2025 timeline.

CG render of Oculii's two radar units.

Image Credits: Oculii

It’s also getting into making its own all-in-one radar units, doing the hardware-software synergy thing. It claims these are the world’s highest resolution radars, and I don’t see any competitors out there contradicting this — the simple fact is radars don’t compete much on “resolution,” but more on the precision of their rangefinding and speed detection.

One exception might be Echodyne, which uses a metamaterial radar surface to direct a customizable radar beam anywhere in its field of view, examining objects in detail or scanning the whole area quickly. But even then its “resolution” isn’t so easy to estimate.

At any rate the company’s new Eagle and Falcon radars might be tempting to manufacturers working on putting together cutting-edge sensing suites for their autonomous experiments or production driver-assist systems.

It’s clear that with radar tipped as a major component of autonomous vehicles, robots, aircraft and other devices, it’s worth investing seriously in the space. The $55M B round certainly demonstrates that well enough. It was, as Oculii’s press release lists it, “co-led by Catapult Ventures and Conductive Ventures, with participation from Taiwania Capital, Susquehanna Investment Group (SIG), HELLA Ventures, PHI-Zoyi Capital, R7 Partners, VectoIQ, ACVC Partners, Mesh Ventures, Schox Ventures, and Signature Bank.”

The money will allow for the expected scaling and hiring, and as Hong added, “continued investment of the technology to deliver higher resolution, longer range, more compact and cheaper sensors that will accelerate an autonomous future.”

#automotive, #autonomous-vehicles, #funding, #fundings-exits, #gadgets, #hardware, #oculii, #radar, #recent-funding, #self-driving-cars, #startups, #tc, #transportation


Peloton’s leaky API let anyone grab riders’ private account data

Halfway through my Monday afternoon workout last week, I got a message from a security researcher with a screenshot of my Peloton account data.

My Peloton profile is set to private and my friend’s list is deliberately zero, so nobody can view my profile, age, city, or workout history. But a bug allowed anyone to pull users’ private account data directly from Peloton’s servers, even with their profile set to private.

Peloton, the at-home fitness brand synonymous with its indoor stationary bike, has more than three million subscribers. President Biden is even said to own one. The exercise bike alone costs upwards of $1,800, but anyone can sign up for a monthly subscription to join a broad variety of classes.

As Biden was inaugurated (and his Peloton moved to the White House — assuming the Secret Service let him), Jan Masters, a security researcher at Pen Test Partners, found he could make unauthenticated requests to Peloton’s API for user account data without it checking to make sure the person was allowed to request it. (An API allows two things to talk to each other over the internet, like a Peloton bike and the company’s servers storing user data.)

But the exposed API let him — and anyone else on the internet — access a Peloton user’s age, gender, city, weight, workout statistics, and if it was the user’s birthday, details that are hidden when users’ profile pages are set to private.
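
The underlying flaw is a classic broken access control bug: the server returned profile data without verifying who was asking or whether the profile was set to private. Here’s a generic sketch of the vulnerable pattern and the missing check — the data shape and function names are hypothetical, not Peloton’s actual API.

```python
# Generic sketch of the class of bug described above; this is not Peloton's API.
PROFILES = {
    "user123": {"age": 35, "city": "Austin", "weight": 72, "private": True},
}

def get_profile_insecure(user_id):
    # Vulnerable pattern: no authentication and no privacy check on the server.
    return PROFILES.get(user_id)

def get_profile_secure(user_id, requester_id):
    profile = PROFILES.get(user_id)
    if profile is None:
        return None
    # Enforce the privacy setting server-side, not just in the app's UI.
    if profile["private"] and requester_id != user_id:
        return {"error": "forbidden"}
    return profile

print(get_profile_insecure("user123"))                              # leaks private fields
print(get_profile_secure("user123", requester_id="someone_else"))   # {'error': 'forbidden'}
```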

Masters reported the leaky API to Peloton on January 20 with a 90-day deadline to fix the bug — the standard window that security researchers give companies to fix bugs before details are made public.

But that deadline came and went, the bug wasn’t fixed, and Masters hadn’t heard back from the company, aside from an initial email acknowledging receipt of the bug report. Instead, Peloton only restricted access to its API to its members. But that just meant anyone could sign up with a monthly membership and get access to the API again.

TechCrunch contacted Peloton after the deadline lapsed to ask why the vulnerability report had been ignored, and Peloton confirmed yesterday that it had fixed the vulnerability. (TechCrunch held this story until the bug was fixed in order to prevent misuse.)

Peloton spokesperson Amelise Lane provided the following statement:

It’s a priority for Peloton to keep our platform secure and we’re always looking to improve our approach and process for working with the external security community. Through our Coordinated Vulnerability Disclosure program, a security researcher informed us that he was able to access our API and see information that’s available on a Peloton profile. We took action, and addressed the issues based on his initial submissions, but we were slow to update the researcher about our remediation efforts. Going forward, we will do better to work collaboratively with the security research community and respond more promptly when vulnerabilities are reported. We want to thank Ken Munro for submitting his reports through our CVD program and for being open to working with us to resolve these issues.

Masters has since put up a blog post explaining the vulnerabilities in more detail.

Munro, who founded Pen Test Partners, told TechCrunch: “Peloton had a bit of a fail in responding to the vulnerability report, but after a nudge in the right direction, took appropriate action. A vulnerability disclosure program isn’t just a page on a website; it requires coordinated action across the organisation.”

But questions remain for Peloton. When asked repeatedly, the company declined to say why it had not responded to Masters’ vulnerability report. It’s also not known if anyone maliciously exploited the vulnerabilities, such as mass-scraping account data.

Facebook, LinkedIn, and Clubhouse have all fallen victim to scraping attacks that abuse access to APIs to pull in data about users on their platforms. But Peloton declined to confirm if it had logs to rule out any malicious exploitation of its leaky API.

#api, #gadgets, #hacking, #peloton, #pen-test-partners, #privacy, #security, #vulnerability


Cognixion’s brain-monitoring headset enables fluid communication for people with severe disabilities

Of the many frustrations of having a severe motor impairment, the difficulty of communicating must surely be among the worst. The tech world has not offered much succor to those affected by things like locked-in syndrome, ALS, and severe strokes, but startup Cognixion aims to with a novel form of brain monitoring that, combined with a modern interface, could make speaking and interaction far simpler and faster.

The company’s One headset tracks brain activity closely in such a way that the wearer can direct a cursor — reflected on a visor like a heads-up display — in multiple directions or select from various menus and options. No physical movement is needed, and with the help of modern voice interfaces like Alexa, the user can not only communicate efficiently but freely access all kinds of information and content most people take for granted.

But it’s not a miracle machine, and it isn’t a silver bullet. Here’s how it got started.

Overhauling decades-old brain tech

Everyone with a motor impairment has different needs and capabilities, and there are a variety of assistive technologies that cater to many of these needs. But many of these techs and interfaces are years or decades old — medical equipment that hasn’t been updated for an era of smartphones and high-speed mobile connections.

Some of the most dated interfaces, unfortunately, are those used by people with the most serious limitations: those whose movements are limited to their heads, faces, eyes — or even a single eyelid, like Jean-Dominique Bauby, the famous author of “The Diving Bell and the Butterfly.”

One of the tools in the toolbox is the electroencephalogram, or EEG, which involves detecting activity in the brain via patches on the scalp that record electrical signals. But while they’re useful in medicine and research in many ways, EEGs are noisy and imprecise — more for finding which areas of the brain are active than, say, which sub-region of the sensory cortex or the like. And of course you have to wear a shower cap wired with electrodes (often greasy with conductive gel) — it’s not the kind of thing anyone wants to do for more than an hour, let alone all day every day.

Yet even among those with the most profound physical disabilities, cognition is often unimpaired — as indeed EEG studies have helped demonstrate. It made Andreas Forsland, co-founder and CEO of Cognixion, curious about further possibilities for the venerable technology: “Could a brain-computer interface using EEG be a viable communication system?”

He first used EEG for assistive purposes in a research study some five years ago. They were looking into alternative methods of letting a person control an on-screen cursor, among them an accelerometer for detecting head movements, and tried integrating EEG readings as another signal. But it was far from a breakthrough.

A modern lab with an EEG cap wired to a receiver and laptop – this is an example of how EEG is commonly used.

He ran down the difficulties: “With a read-only system, the way EEG is used today is no good; other headsets have slow sample rates and they’re not accurate enough for a real-time interface. The best BCIs are in a lab, connected to wet electrodes — it’s messy, it’s really a non-starter. So how do we replicate that with dry, passive electrodes? We’re trying to solve some very hard engineering problems here.”

The limitations, Forsland and his colleagues found, were not so much with the EEG itself as with the way it was carried out. This type of brain monitoring is meant for diagnosis and study, not real-time feedback. It would be like taking a tractor to a drag race. Not only do EEGs often work with a slow, thorough check of multiple regions of the brain that may last several seconds, but the signal it produces is analyzed by dated statistical methods. So Cognixion started by questioning both practices.

Improving the speed of the scan is more complicated than overclocking the sensors or something. Activity in the brain must be inferred by collecting a certain amount of data. But that data is collected passively, so Forsland tried bringing an active element into it: a rhythmic electric stimulation that is in a way reflected by the brain region, but changed slightly depending on its state — almost like echolocation.

The Cognixion One headset with its dry EEG terminals visible.

They detect these signals with a custom set of six EEG channels in the visual cortex area (up and around the back of your head), and use a machine learning model to interpret the incoming data. Running a convolutional neural network locally on an iPhone — something that wasn’t really possible a couple years ago — the system can not only tease out a signal in short order but make accurate predictions, making for faster and smoother interactions.

The result is sub-second latency with 95-100 percent accuracy in a wireless headset powered by a mobile phone. “The speed, accuracy and reliability are getting to commercial levels —  we can match the best in class of the current paradigm of EEGs,” said Forsland.
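
For a sense of scale, a model of the kind described — a small convolutional network over a short window of six-channel EEG — is tiny by modern standards, which is part of why it can run locally on a phone. Here’s a minimal sketch; the window length, layer sizes, and number of selectable targets are assumptions, not Cognixion’s actual architecture.

```python
# Minimal sketch (not Cognixion's model) of a small 1D CNN that maps a window of
# six-channel EEG to a score over N selectable on-screen targets.
import torch
import torch.nn as nn

NUM_CHANNELS = 6      # the six dry electrodes over visual cortex mentioned above
WINDOW_SAMPLES = 250  # e.g. ~1 second of EEG at 250 Hz (assumed)
NUM_TARGETS = 8       # hypothetical number of selectable UI targets

model = nn.Sequential(
    nn.Conv1d(NUM_CHANNELS, 16, kernel_size=7, padding=3),  # temporal filters over channels
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # collapse the time axis
    nn.Flatten(),
    nn.Linear(32, NUM_TARGETS),
)

# One fake EEG window -> scores over the selectable targets; on-device, the argmax
# (plus a confidence threshold) would drive the cursor or selection.
window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)
scores = model(window)
print(scores.shape, int(scores.argmax(dim=1)))
```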

Dr. William Goldie, a clinical neurologist who has used and studied EEGs and other brain monitoring techniques for decades (and who has been voluntarily helping Cognixion develop and test the headset), offered a positive evaluation of the technology.

“There’s absolutely evidence that brainwave activity responds to thinking patterns in predictable ways,” he noted. This type of stimulation and response was studied years ago. “It was fascinating, but back then it was sort of in the mystery magic world. Now it’s resurfacing with these special techniques and the computerization we have these days. To me it’s an area that’s opening up in a manner that I think clinically could be dramatically effective.”

BCI, meet UI

The first thing Forsland told me was “We’re a UI company.” And indeed even such a step forward in neural interfaces as he later described means little if it can’t be applied to the problem at hand: helping people with severe motor impairment to express themselves quickly and easily.

Sad to say, it’s not hard to imagine improving on the “competition,” things like puff-and-blow tubes and switches that let users laboriously move a cursor right, right a little more, up, up a little more, then click: a letter! Gaze detection is of course a big improvement over this, but it’s not always an option (eyes don’t always work as well as one would like) and the best eye-tracking solutions (like a Tobii Dynavox tablet) aren’t portable.

Why shouldn’t these interfaces be as modern and fluid as any other? The team set about making a UI with this and the capabilities of their next-generation EEG in mind.

Image of the target Cognixion interface as it might appear to a user, with buttons for yes, no, phrases and tools.

Image Credits: Cognixion

Their solution takes bits from the old paradigm and combines them with modern virtual assistants and a radial design that prioritizes quick responses and common needs. It all runs in an app on an iPhone, the display of which is reflected in a visor, acting as a HUD and outward-facing display.

Within easy reach of, if not a single thought, then at least a moment’s concentration or a tilt of the head, are everyday questions and responses — yes, no, thank you, etc. Then there are slots to put prepared speech into — names, menu orders, and so on. And then there’s a keyboard with word- and sentence-level prediction that allows common words to be popped in without spelling them out.
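Prediction itself is well-trodden ground; the gain comes from pairing it with a fast selection signal. As a toy illustration (the vocabulary and usage counts below are made up), prefix-based word completion can be as simple as ranking dictionary matches by frequency:

```python
# Toy word prediction: rank candidate completions of a typed prefix by
# how often they appear in a small, made-up usage count table.
WORD_COUNTS = {"thank": 50, "thanks": 120, "that": 300, "the": 900, "thirsty": 15}

def suggest(prefix: str, k: int = 3) -> list[str]:
    matches = [w for w in WORD_COUNTS if w.startswith(prefix.lower())]
    return sorted(matches, key=WORD_COUNTS.get, reverse=True)[:k]

print(suggest("th"))   # ['the', 'that', 'thanks']
```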

“We’ve tested the system with people who rely on switches, who might take 30 minutes to make 2 selections. We put the headset on a person with cerebral palsy, and she typed out her name and hit play in 2 minutes,” Forsland said. “It was ridiculous, everyone was crying.”

Goldie noted that there’s something of a learning curve. “When I put it on, I found that it would recognize patterns and follow through on them, but it also sort of taught patterns to me. You’re training the system, and it’s training you — it’s a feedback loop.”

“I can be the loudest person in the room”

One person who has found it extremely useful is Chris Benedict, a DJ, public speaker, and disability advocate who himself has Dyskinetic Cerebral Palsy. It limits his movements and ability to speak, but it doesn’t stop him from spinning (digital) records at various engagements, or from explaining his experience with Cognixion’s One headset over email. (And you can see him demonstrating it in person in the video above.)

DJ Chris Benedict wears the Cognixion Headset in a bright room.

Image Credits: Cognixion

“Even though it’s not a tool that I’d need all the time it’s definitely helpful in aiding my communication,” he told me. “Especially when I need to respond quickly or am somewhere that is noisy, which happens often when you are a DJ. If I wear it with a Bluetooth speaker I can be the loudest person in the room.” (He always has a speaker on hand, since “you never know when you might need some music.”)

The benefits offered by the headset give some idea of what is lacking from existing assistive technology (and what many people take for granted).

“I can use it to communicate, but at the same time I can make eye contact with the person I’m talking to, because of the visor. I don’t have to stare at a screen between me and someone else. This really helps me connect with people,” Benedict explained.

“Because it’s a headset I don’t have to worry about getting in and out of places, there is no extra bulk added to my chair that I have to worry about getting damaged in a doorway. The headset is balanced too, so it doesn’t make my head lean back or forward or weigh my neck down,” he continued. “When I set it up to use the first time it had me calibrate, and it measured my personal range of motion so the keyboard and choices fit on the screen specifically for me. It can also be recalibrated at any time, which is important because not every day is my range of motion the same.”
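Benedict’s description suggests a straightforward idea behind that calibration: measure the user’s comfortable range of head motion, then scale pointer movement so that range spans the whole layout. Here is a hypothetical version of such a mapping; the angle ranges and screen dimensions are made-up values, not Cognixion’s.

```python
def make_pointer_mapper(yaw_range, pitch_range, screen_w=390, screen_h=844):
    """Map a personal head-pose range (degrees) onto full screen coordinates."""
    (yaw_min, yaw_max), (pitch_min, pitch_max) = yaw_range, pitch_range

    def to_screen(yaw: float, pitch: float) -> tuple[int, int]:
        # Clamp to the calibrated range so small motions still reach the edges.
        x = (min(max(yaw, yaw_min), yaw_max) - yaw_min) / (yaw_max - yaw_min)
        y = (min(max(pitch, pitch_min), pitch_max) - pitch_min) / (pitch_max - pitch_min)
        return int(x * screen_w), int(y * screen_h)

    return to_screen

# Calibrated for a user comfortable with +/-10 degrees of yaw, +/-5 of pitch.
to_screen = make_pointer_mapper((-10, 10), (-5, 5))
print(to_screen(0, 0))   # roughly the center of the screen
```

Recalibrating simply means measuring new ranges, which fits Benedict’s point that his range of motion varies from day to day.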

Alexa, which has been extremely helpful to people with a variety of disabilities due to its low cost and wide range of compatible devices, is also part of the Cognixion interface, something Benedict appreciates, having himself adopted the system for smart home and other purposes. “With other systems this isn’t something you can do, or if it is an option, it’s really complicated,” he said.

Next steps

As Benedict demonstrates, there are people for whom a device like Cognixion’s makes a lot of sense, and the hope is it will be embraced as part of the necessarily diverse ecosystem of assistive technology.

Forsland said that the company is working closely with the community, from users to clinical advisors like Goldie and specialists such as speech therapists, to make the One headset as good as it can be. But the hurdle, as with so many devices in this class, is how to actually put it on people’s heads — financially and logistically speaking.

Cognixion is applying for FDA clearance to get the cost of the headset — which, being powered by a phone, is not as high as it would be with an integrated screen and processor — covered by insurance. But in the meantime the company is working with clinical and corporate labs that are doing neurological and psychological research. Places where you might find an ordinary, cumbersome EEG setup, in other words.

The company has raised funding and is looking for more (hardware development and medical pursuits don’t come cheap), and has also collected a number of grants.

The One headset may still be some years away from wider use (the FDA is never in a hurry), but that allows the company time to refine the device and include new advances. Unlike many other assistive devices, for example a switch or joystick, this one is largely software-limited, meaning better algorithms and UI work will significantly improve it. While many wait for companies like Neuralink to create a brain-computer interface for the modern era, Cognixion has already done so for a group of people who have much more to gain from it.

You can learn more about the Cognixion One headset and sign up to receive the latest at its site here.
