Mobility startups can be equitable, accessible and profitable

Mobility should be a right, but too often it’s a privilege. Can startups provide the technology and the systems necessary to help correct this injustice? Shared micromobility, in particular, offers an opportunity for more equitable and accessible mobility within cities, but only if done intentionally. Building equity and accessibility into the business model is not always top of mind for startups looking to pay back investors and make money, and it’s a time-consuming task. Is it still possible to achieve those goals while remaining profitable?

At our TC Sessions: Mobility 2021 event, I sat down with Revel CEO and co-founder Frank Reig, Remix CEO and co-founder Tiffany Chu, and community organizer, transportation consultant and lawyer Tamika L. Butler to discuss how mobility companies should think about equity, why incorporating it from the get-go will save money in the long run, and how they can partner with cities to expand accessible and sustainable mobility.

What does equity mean?

Shared mobility services have often directly appealed to the young, able-bodied and affluent, especially when they first dropped into cities around the world. Older populations and communities of color have been less likely to either have access to or to use shared mobility services, but that’s beginning to change. As mobility startups consider how to weigh providing equitable service while maintaining a profit, Butler outlined the importance of thinking about those who are most vulnerable.

Who isn’t this helping? And it doesn’t matter if that’s a small amount of people, right? So you might say something like, people with disabilities might be proportionally a smaller number of people, Black people might be proportionally a smaller number of people. But if you make things better for folks with a disability, say, by adding curb cuts into sidewalks, that actually makes things better for a ton of people. And so you may be thinking of it … only helping a small group of people. And I think we really have to shift the way we think about equity. It’s not just numbers, who is this going to help the most, it’s … who is often intentionally neglected or pushed aside because their numbers aren’t big enough? (Timestamp: 19:10)

Build equity into the business model from the start

Many startups are just trying to keep their idea alive and start a business at the beginning. They want to solve an essential problem, like lack of socially distanced mobility options, and prove their unit economics so they don’t come back to their investors empty handed. Some companies might even be of the mindset that building equity and accessibility into their business model isn’t their concern. But delivering on those core values will just be the price of doing business in the future, so it certainly should be their concern, Butler said.

I think for companies, I would say that people like to say it takes too much time or costs too much money to do things equitably. But whether or not you’re retrofitting a house or whether or not you’re retrofitting your company, whenever you retrofit something, it costs more money. And so if you think about equity as something you just build in from the beginning, it will actually save you money and take less time than if you try to do it later because someone tells you to do it or you’ve had some controversy or you all of a sudden feel bad. (Timestamp: 4:50)

Reig chimed in to talk about Revel’s access program, which gives 50% off to riders who are on any form of public assistance.

So the access program, for instance, was something from day one. That wasn’t something we added a year or two later after venture funding … that was still when we were bootstrapped, you know, a company in North Brooklyn with 70 mopeds. From day one I’ve never used the gig economy, whether customer service agent, mechanic or battery swapper; from day one, every single person on the team is a regular employee. And I think that’s just a cultural ethos I’ve always wanted in the company. (Timestamp: 6:04)

Micromobility’s potential to alleviate transit deserts

#accessibility, #ec-techcrunch-tc-mobility, #equity, #event-recap, #micromobility, #mobility-2021, #remix, #revel, #startups, #tc, #transportation, #via


Apple’s latest accessibility features are for those with limb and vocal differences

Apple announced a batch of accessibility features at WWDC 2021 that cover a wide variety of needs, among them a few for people who can’t touch or speak to their devices in the ordinary way. With Assistive Touch, Sound Control, and other improvements, these folks have new options for interacting with an iPhone or Apple Watch.

We covered Assistive Touch when it was first announced, but recently got a few more details. This feature lets anyone with an Apple Watch operate it with one hand by means of a variety of gestures. It came about when Apple heard from the community of people with limb differences — whether they’re missing an arm, or unable to use it reliably, or anything else — that as much as they liked the Apple Watch, they were tired of answering calls with their noses.

The research team cooked up a way to reliably detect the gestures of pinching one finger to the thumb, or clenching the hand into a fist, based on how doing them causes the watch to move — it’s not detecting nervous system signals or anything. These gestures, as well as double versions of them, can be set to a variety of quick actions. Among them is opening the “motion cursor,” a little dot that mimics the movements of the user’s wrist.

Considering how many people don’t have the use of a hand, this could be a really helpful way to get basic messaging, calling, and health-tracking tasks done without needing to resort to voice control.

Speaking of voice, that’s also something not everyone has at their disposal. Many of those who can’t speak fluently, however, can make a bunch of basic sounds, which can carry meaning for those who have learned — not so much Siri. But a new accessibility option called “Sound Control” lets these sounds be used as voice commands. You access it through Switch Control, not audio or voice, and add an audio switch.

Images of the process of adding an audio switch to the iPhone.

Image Credits: Apple

The setup menu lets the user choose from a variety of possible sounds: click, cluck, e, eh, k, la, muh, oo, pop, sh, and more. Picking one brings up a quick training process to let the user make sure the system understands the sound correctly, and then it can be set to any of a wide selection of actions, from launching apps to asking commonly spoken questions or invoking other tools.

For those who prefer to interact with their Apple devices through a switch system, the company has a big surprise: Game controllers, once only able to be used for gaming, now work for general purposes as well. Specifically noted is the amazing Xbox Adaptive Controller, a hub and group of buttons, switches, and other accessories that improves the accessibility of console games. This powerful tool is used by many, and no doubt they will appreciate not having to switch control methods entirely when they’re done with Fortnite and want to listen to a podcast.

Image Credits: Apple

One more interesting capability in iOS that sits at the edge of accessibility is Walking Steadiness. This feature, available to anyone with an iPhone, tracks (as you might guess) the steadiness of the user’s walk. This metric, tracked throughout a day or week, can potentially give real insight into how and when a person’s locomotion is better and worse. It’s based on a bunch of data collected in the Apple Heart and Movement study, including actual falls and the unsteady movement that led to them.

If the user is someone who recently was fitted for a prosthesis, or had foot surgery, or suffers from vertigo, knowing when and why they are at risk of falling can be very important. They may not realize it, but perhaps their movements are less steady towards the end of the day, or after climbing a flight of steps, or after waiting in line for a long time. It could also show steady improvements as they get used to an artificial limb or chronic pain declines.

Exactly how this data may be used by an actual physical therapist or doctor is an open question, but importantly it’s something that can easily be tracked and understood by the users themselves.

Images of Apple Memoji with a cochlear implant, an oxygen tube, and a soft helmet.

Image Credits: Apple

Among Apple’s other assistive features are new languages for Voice Control, improved headphone acoustic accommodations, support for bidirectional hearing aids and, of course, the addition of cochlear implants and oxygen tubes for Memoji. As an Apple representative put it, the company wants to embrace differences not just in features, but on the personalization and fun side as well.

#accessibility, #apple, #apps, #gadgets, #mobile, #tc, #wearables, #wwdc-2021


Apple’s Live Text lets you interact with text in your photos

Apple has introduced a new feature to its camera system that automatically recognizes and transcribes text in your photos, from a phone number on a business card to a whiteboard full of notes. Live Text, as the feature is called, doesn’t need any prompting or special work from the user — just tap the icon and you’re good to go.

Announced by Craig Federighi on the virtual stage of WWDC, Live Text will be arriving on iPhones with iOS 15. He demonstrated it with a few pictures: one of a whiteboard after a meeting, and a couple of snapshots that included restaurant signs in the background.

Tapping the Live Text button in the lower right gave detected text a slight underline, and then a swipe allowed it to be selected and copied. In the case of the whiteboard, it collected several sentences of notes including bullet points, and with one of the restaurant signs it grabbed the phone number, which could be called or saved.

Screenshot of a phone selecting text in an image.

The feature is reminiscent of many of those found in Google’s long-developed Lens app, and the Pixel 4 added more robust scanning capability in 2019. The difference is that the text is captured more or less passively in every photo taken by an iPhone running the new system — you don’t have to enter scanner mode or launch a separate app.

This is a nice thing for anyone to have, but it could be especially helpful for people with visual impairments. A snapshot or two makes any text that would otherwise be difficult to read available to be read aloud or saved.

The process seems to take place entirely on the phone, so don’t worry that this info is being sent to a datacenter somewhere. That also means it’s fairly quick, though until we test it for ourselves we can’t say whether it’s instantaneous or, like some other machine learning features, something that happens over the next few seconds or minutes after you take a shot.


#accessibility, #apple, #artificial-intelligence, #live-text, #machine-learning, #wwdc-2021


Twitter Spaces will be available for web, including accessibility features

On Wednesday evening, Twitter announced that Spaces – its Clubhouse competitor – will start rolling out for use on the web. Earlier this month, Twitter Spaces became available for any user with more than 600 followers on the iOS or Android apps, and around the same time, Clubhouse finally released its long-awaited Android app. Still, Clubhouse has yet to debut on the web, marking a success for Twitter in the race to corner the live social audio market. 

Even Instagram is positioning itself as a Clubhouse competitor, allowing users to “go live” with the ability to mute their audio and video. How will each app differentiate itself? Twitter CFO Ned Segal attempted to address this at JP Morgan’s 49th Annual Technology, Media, & Communications conference this week. 

“Twitter is where you go to find out what’s happening in the world and what people are talking about,” said Segal. “So when you come to Twitter, and you look at your home Timeline and you see a Space, it’s gonna perhaps be people who you don’t know but who are talking about a topic that’s incredibly relevant to you. It could be Bitcoin, it could be the aftershock from the Grammys, it could be that they’re talking about the NFL Draft.” 

Twitter’s focus areas for the web version of Spaces include a UI that adapts to the user’s screen size and reminders for scheduled Spaces. Before joining a space, Twitter will display a preview that shows who is in a Space, and a description of the topic being discussed. Users will also be able to have a Space open on the right side of their screen while still scrolling through their Timeline.

Image Credits: Twitter

Most crucially, this update lists accessibility and transcriptions as a focus area. For an audio-only platform, live transcriptions are necessary for Deaf and hard-of-hearing people to join in on the conversation. In screenshots Twitter shared of the new features, we can see how live captions will appear in Spaces. As for how accurate these transcriptions will be, the jury’s still out.

Twitter fielded well-deserved criticism last year when it failed to include captioning on its audio tweet feature. In an apology tweet, Twitter Support wrote, “Accessibility should not be an afterthought.” By September, Twitter had launched two accessibility teams.

Still, accessibility has often been treated as an afterthought throughout the rise of live audio. Clubhouse does not yet support live captioning. 

#accessibility, #android, #apps, #clubhouse, #instagram-live, #ned-segal, #social-media, #twitter, #twitter-spaces


Apple rolls out a slew of new accessibility features to iPhone, Watch, and more

On Wednesday, Apple announced a bunch of new accessibility features coming to iPhones, iPads, and the Apple Watch. The new features and services will roll out in the coming days, weeks, and months.

The first feature to arrive will be a new service called SignTime, which Apple says will launch tomorrow, May 20. SignTime will allow users to communicate with Apple’s customer service representatives (either AppleCare or Retail Customer Care) using sign language. The service will launch first in the US, UK, and France with American Sign Language, British Sign Language, and French Sign Language, respectively. Further, customers at Apple Stores will be able to use SignTime to get in touch with an interpreter while shopping or getting customer support without having to make an appointment in advance.

While SignTime’s arrival is right around the corner, software updates loaded with new features aimed at making Apple’s software and hardware more accessible for people with cognitive, mobility, hearing, and vision disabilities will hit Apple’s platforms sometime later this year.


#accessibility, #apple, #apple-watch, #ios, #ipad, #ipados, #iphone, #tech, #watchos


Apple Watch gets a motion-controlled cursor with ‘Assistive Touch’

Tapping the tiny screen of the Apple Watch with precision has a certain level of fundamental difficulty, but for some people with disabilities it’s genuinely impossible. Apple has remedied this with a new mode called “Assistive Touch” that detects hand gestures to control a cursor and navigate that way.

The feature was announced as part of a collection of accessibility-focused additions across its products, but Assistive Touch seems like the one most likely to make a splash across the company’s user base.

It relies on the built-in gyroscope and accelerometer, as well as data from the heart rate sensor, to deduce the position of the wrist and hand. Don’t expect it to tell a peace sign from a metal sign just yet, but for now it detects “pinch” (touching the index finger to the thumb) and “clench” (making a loose fist), which can act as basic “next” and “confirm” actions. Incoming calls, for instance, can be quickly accepted with a clench.
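
For readers curious what “detecting a gesture from how the watch moves” can look like in practice, here is a deliberately simplified sketch: short windows of accelerometer readings are reduced to a few features and matched against crude thresholds. It is purely illustrative, with a made-up sample rate and thresholds; Apple has not published its implementation, which presumably uses trained models over much richer sensor data.

```python
# Toy sketch of motion-based gesture spotting from wrist IMU data.
# NOT Apple's implementation -- just the general idea of labeling short
# accelerometer windows as "pinch", "clench" or rest.
import numpy as np

WINDOW = 50  # samples per window; an assumed rate, roughly half a second at 100 Hz

def window_features(accel: np.ndarray) -> np.ndarray:
    """accel: (WINDOW, 3) array of x/y/z acceleration in g. Returns [mean, std, peak jerk]."""
    mag = np.linalg.norm(accel, axis=1)   # overall motion magnitude per sample
    jerk = np.abs(np.diff(mag))           # how sharply that magnitude changes
    return np.array([mag.mean(), mag.std(), jerk.max()])

def classify(accel: np.ndarray) -> str:
    """Crude threshold rules standing in for a trained gesture model."""
    _, std_mag, peak_jerk = window_features(accel)
    if peak_jerk > 1.5 and std_mag > 0.5:
        return "clench"   # big, sharp movement of the whole wrist
    if peak_jerk > 0.5:
        return "pinch"    # smaller, quicker twitch
    return "rest"

# Example: a quiet "rest" window versus one containing a burst of motion.
rng = np.random.default_rng(0)
rest = rng.normal(0, 0.05, size=(WINDOW, 3)) + np.array([0.0, 0.0, 1.0])  # gravity + noise
clench = rest.copy()
clench[20:25] += rng.normal(0, 3.0, size=(5, 3))                          # simulated clench burst
print(classify(rest), classify(clench))   # expected to print: rest clench
```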

Most impressive, however, is the motion pointer. You can activate it either by selecting it in the Assistive Touch menu, or by shaking your wrist vigorously. It then detects the position of your hand as you move it around, allowing you to “swipe” by letting the cursor linger at the edge of the screen, or interact with things using a pinch or clench.

Needless to say this could be extremely helpful for anyone who only has the one hand available for interacting with the watch. And even for those who don’t strictly need it, the ability to keep one hand on the exercise machine, cane, or whatever else while doing smartwatch things is surely an attractive possibility. (One wonders about the potential of this control method as a cursor for other platforms as well…)

Memoji featuring new accessibility-focused gear.

Image Credits: Apple

Assistive Touch is only one of many accessibility updates Apple shared in this news release; other advances for the company’s platforms include:

  • SignTime, an ASL interpreter video call for Apple Store visits and support
  • Support for new hearing aids
  • Improved VoiceOver-based exploration of images
  • A built-in background noise generator (which I fully intend to use)
  • Replacement of certain buttons with non-verbal mouth noises (for people who have limited speech and mobility)
  • Memoji customizations for people with oxygen tubes, cochlear implants, and soft helmets
  • Featured media in the App Store, Apple TV, Books, and Maps apps from or geared towards people with disabilities

It’s all clustered around Global Accessibility Awareness Day, which is tomorrow, May 20th.

#accessibility, #apple, #apple-watch, #gadgets, #hardware


Leveling the playing field

In 2011, a product developer named Fred Davison read an article about inventor Ken Yankelevitz and his QuadControl video game controller for quadriplegics. At the time, Yankelevitz was on the verge of retirement. Davison wasn’t a gamer, but he said his mother, who had the progressive neurodegenerative disease ALS, inspired him to pick up where Yankelevitz was about to leave off.

Launched in 2014, Davison’s QuadStick represents the latest iteration of the Yankelevitz controller — one that has garnered interest across a broad range of industries. 

“The QuadStick’s been the most rewarding thing I’ve ever been involved in,” Davison told TechCrunch. “And I get a lot of feedback as to what it means for [disabled gamers] to be able to be involved in these games.”

Laying the groundwork

Erin Muston-Firsch, an occupational therapist at Craig Hospital in Denver, says adaptive gaming tools like the QuadStick have revolutionized the work of the hospital’s therapy team.

Six years ago, she devised a rehabilitation solution for a college student who came in with a spinal cord injury. She says he liked playing video games, but as a result of his injury could no longer use his hands. So the rehab regimen incorporated Davison’s invention, which enabled the patient to play World of Warcraft and Destiny. 

QuadStick

Jackson “Pitbull” Reece is a successful Facebook streamer who uses his mouth to operate the QuadStick, as well as the Xbox Adaptive Controller (XAC), a controller Microsoft designed to make video game input more accessible for people with disabilities.

Reece lost the use of his legs in a motorcycle accident in 2007 and later, due to an infection, lost the use of his upper body. He says he remembers able-bodied life as one filled with mostly sports video games. He says being a part of the gaming community is an important part of his mental health.

Fortunately there is an atmosphere of collaboration, not competition, around the creation of hardware for gamers within the assistive technology community. 

But while not every major tech company has been proactive about accessibility, after-market devices are available to create customized gaming experiences for disabled gamers.

Enter Microsoft

At its Hackathon in 2015, Microsoft’s Inclusive Lead Bryce Johnson met with disabled veterans’ advocacy group Warfighter Engaged.

“We were at the same time developing our views on inclusive design,” Johnson said. Indeed, eight generations of gaming consoles created barriers for disabled gamers.

“Controllers have been optimized around a primary use case that made assumptions,” Johnson said. Indeed, the buttons and triggers of a traditional controller are designed for able-bodied people with the endurance to operate them.

Besides Warfighter Engaged, Microsoft worked with AbleGamers (the most recognized charity for gamers with disabilities), Craig Hospital, the Cerebral Palsy Foundation and Special Effect, a U.K.-based charity for disabled young gamers. 

Xbox Adaptive Controller

The finished XAC, released in 2018, is intended to let a gamer with limited mobility play seamlessly alongside other gamers. One of the details gamers commented on was that the XAC looks like a consumer device, not a medical device.

“We knew that we couldn’t design this product for this community,” Johnson told TechCrunch. “We had to design this product with this community. We believe in ‘nothing about us without us.’ Our principles of inclusive design urge us to include communities from the very beginning.”

Taking on the giants

There were others getting involved. Like many inventions, the creation of the Freedom Wing was a bit of serendipity.

At his booth at an assistive technology (AT) conference, ATMakers‘ Bill Binko showcased a doll named “Ella” using the ATMakers Joystick, a power-chair device. Also in attendance was Steven Spohn, who is part of the brain trust behind AbleGamers.

Spohn saw the Joystick and told Binko he wanted a similar device to work with the XAC. The Freedom Wing was ready within six weeks. It was a matter of manipulating the sensors to control a game controller instead of a chair. This device didn’t require months of R&D and testing because it had already been road tested as a power-chair device. 

ATMakers Freedom Wing 2

Binko said mom-and-pop companies are leading the way in changing the face of accessible gaming technology. Companies like Microsoft and Logitech have only recently found their footing.

ATMakers, QuadStick and other smaller creators, meanwhile, have been busy disrupting the industry. 

“Everybody gets [gaming] and it opens up the ability for people to engage with their community,” Binko said. “Gaming is something that people can wrap their heads around and they can join in.” 

Barriers to entry

As the technology evolves, so do the obstacles to accessibility. These challenges include lack of support teams, security, licensing and VR. 

Binko said that managing support teams for these devices as demand increases is a new hurdle. More people with technological skills need to join the AT industry to assist with the creation, installation and maintenance of devices.

Security and licensing are out of the hands of small creators like Davison because of the financial and other resources needed to work with different hardware companies. For example, Sony’s licensing enforcement technology has become increasingly complex with each new console generation.

With Davison’s background in tech, he understands the restrictions to protect proprietary information. “They spend huge amounts of money developing a product and they want to control every aspect of it,” Davison said. “Just makes it tough for the little guy to work with.”

And while PlayStation led the way in button mapping, according to Davison, the security process is stringent. He doesn’t understand how it benefits the console company to prevent people from using whichever controller they want. 

“The cryptography for the PS5 and DualSense controller is uncrackable so far, so adapter devices like the ConsoleTuner Titan Two have to find other weaknesses, like the informal ‘man in the middle’ attack,” Davison said. 

The technique allows devices to utilize older-gen PlayStation controllers as a go-between from the QuadStick to the latest-gen console, so disabled gamers can play the PS5. TechCrunch reached out to Sony’s accessibility division, whose representative said there are no immediate plans for an adaptable PlayStation or controller. However, they stated their department works with advocates and gaming devs to consider accessibility from day one.  

In contrast, Microsoft’s licensing system is more forgiving, especially with the XAC and the ability to use older-generation controllers with newer systems. 

“Compare the PC industry to the Mac,” Davison said. “You can put together a PC system from a dozen different manufacturers, but not for the Mac. One is an open standard and the other is closed.”

A more accessible future

In November, Japanese controller company HORI released an officially licensed accessibility controller for the Nintendo Switch. It’s not available for sale in the United States currently, but there are no region restrictions to purchase one online. This latest development points toward a more accessibility-friendly Nintendo, though the company has yet to fully embrace the technology. 

Nintendo’s accessibility department declined a full interview but sent a statement to TechCrunch. “Nintendo endeavors to provide products and services that can be enjoyed by everyone. Our products offer a range of accessibility features, such as button-mapping, motion controls, a zoom feature, grayscale and inverted colors, haptic and audio feedback, and other innovative gameplay options. In addition, Nintendo’s software and hardware developers continue to evaluate different technologies to expand this accessibility in current and future products.”

The push for more accessible hardware for disabled gamers hasn’t been smooth. Many of these devices were created by small business owners with little capital. In a few cases, corporations determined to build inclusivity into the earliest stages of development have become involved.

Slowly but surely, however, assistive technology is moving forward in ways that can make the experience much more accessible for gamers with disabilities.

 

#accessibility, #advocacy, #chair, #column, #cryptography, #game-controller, #gaming, #hardware, #hori, #joystick, #logitech, #microsoft, #nintendo, #playstation, #xbox, #xbox-adaptive-controller, #xbox-one


Cognixion’s brain-monitoring headset enables fluid communication for people with severe disabilities

Of the many frustrations of having a severe motor impairment, the difficulty of communicating must surely be among the worst. The tech world has not offered much succor to those affected by things like locked-in syndrome, ALS and severe strokes, but startup Cognixion aims to change that with a novel form of brain monitoring that, combined with a modern interface, could make speaking and interaction far simpler and faster.

The company’s One headset tracks brain activity closely in such a way that the wearer can direct a cursor — reflected on a visor like a heads-up display — in multiple directions or select from various menus and options. No physical movement is needed, and with the help of modern voice interfaces like Alexa, the user can not only communicate efficiently but freely access all kinds of information and content most people take for granted.

But it’s not a miracle machine, and it isn’t a silver bullet. Here’s how it got started.

Overhauling decades-old brain tech

Everyone with a motor impairment has different needs and capabilities, and there are a variety of assistive technologies that cater to many of these needs. But many of these techs and interfaces are years or decades old — medical equipment that hasn’t been updated for an era of smartphones and high-speed mobile connections.

Some of the most dated interfaces, unfortunately, are those used by people with the most serious limitations: those whose movements are limited to their heads, faces, eyes — or even a single eyelid, like Jean-Dominique Bauby, the famous author of “The Diving Bell and the Butterfly.”

One of the tools in the toolbox is the electroencephalogram, or EEG, which involves detecting activity in the brain via patches on the scalp that record electrical signals. But while they’re useful in medicine and research in many ways, EEGs are noisy and imprecise — more for finding which areas of the brain are active than, say, which sub-region of the sensory cortex or the like. And of course you have to wear a shower cap wired with electrodes (often greasy with conductive gel) — it’s not the kind of thing anyone wants to do for more than an hour, let alone all day every day.

Yet even among those with the most profound physical disabilities, cognition is often unimpaired — as indeed EEG studies have helped demonstrate. It made Andreas Forsland, co-founder and CEO of Cognixion, curious about further possibilities for the venerable technology: “Could a brain-computer interface using EEG be a viable communication system?”

He first used EEG for assistive purposes in a research study some five years ago. They were looking into alternative methods of letting a person control an on-screen cursor, among them an accelerometer for detecting head movements, and tried integrating EEG readings as another signal. But it was far from a breakthrough.

A modern lab with an EEG cap wired to a receiver and laptop – this is an example of how EEG is commonly used.

He ran down the difficulties: “With a read-only system, the way EEG is used today is no good; other headsets have slow sample rates and they’re not accurate enough for a real-time interface. The best BCIs are in a lab, connected to wet electrodes — it’s messy, it’s really a non-starter. So how do we replicate that with dry, passive electrodes? We’re trying to solve some very hard engineering problems here.”

The limitations, Forsland and his colleagues found, were not so much with the EEG itself as with the way it was carried out. This type of brain monitoring is meant for diagnosis and study, not real-time feedback. It would be like taking a tractor to a drag race. Not only do EEGs often work with a slow, thorough check of multiple regions of the brain that may last several seconds, but the signal it produces is analyzed by dated statistical methods. So Cognixion started by questioning both practices.

Improving the speed of the scan is more complicated than overclocking the sensors or something. Activity in the brain must be inferred by collecting a certain amount of data. But that data is collected passively, so Forsland tried bringing an active element into it: a rhythmic electric stimulation that is in a way reflected by the brain region, but changed slightly depending on its state — almost like echolocation.

The Cognixion One headset with its dry EEG terminals visible.

They detect these signals with a custom set of six EEG channels in the visual cortex area (up and around the back of your head), and use a machine learning model to interpret the incoming data. Running a convolutional neural network locally on an iPhone — something that wasn’t really possible a couple years ago — the system can not only tease out a signal in short order but make accurate predictions, making for faster and smoother interactions.

The result is sub-second latency with 95-100 percent accuracy in a wireless headset powered by a mobile phone. “The speed, accuracy and reliability are getting to commercial levels —  we can match the best in class of the current paradigm of EEGs,” said Forsland.
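
Cognixion hasn’t published its model architecture, but the general shape of the problem (a short window of six-channel EEG in, one of a handful of interface targets out) can be sketched with a small 1D convolutional network. The six-channel count comes from the article; the window length, class count and layer sizes below are invented placeholders, not the company’s design.

```python
# Minimal sketch (not Cognixion's model) of a compact 1D CNN that maps a short
# window of multichannel EEG samples to one of a few on-screen "targets".
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_TARGETS = 6, 256, 5   # 6 EEG channels; window/targets are assumptions

class TinyEEGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
        )
        self.classifier = nn.Linear(32, N_TARGETS)

    def forward(self, x):          # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyEEGNet().eval()
window = torch.randn(1, N_CHANNELS, WINDOW)       # one fake EEG window
probs = model(window).softmax(dim=-1)
print("predicted target:", int(probs.argmax()))   # which interface option to select
```

A network this small is the kind of thing that can plausibly run in well under a second on a recent phone, which is the latency regime the article describes.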

Dr. William Goldie, a clinical neurologist who has used and studied EEGs and other brain monitoring techniques for decades (and who has been voluntarily helping Cognixion develop and test the headset), offered a positive evaluation of the technology.

“There’s absolutely evidence that brainwave activity responds to thinking patterns in predictable ways,” he noted. This type of stimulation and response was studied years ago. “It was fascinating, but back then it was sort of in the mystery magic world. Now it’s resurfacing with these special techniques and the computerization we have these days. To me it’s an area that’s opening up in a manner that I think clinically could be dramatically effective.”

BCI, meet UI

The first thing Forsland told me was “We’re a UI company.” And indeed even such a step forward in neural interfaces as he later described means little if it can’t be applied to the problem at hand: helping people with severe motor impairment to express themselves quickly and easily.

Sad to say, it’s not hard to imagine improving on the “competition,” things like puff-and-blow tubes and switches that let users laboriously move a cursor right, right a little more, up, up a little more, then click: a letter! Gaze detection is of course a big improvement over this, but it’s not always an option (eyes don’t always work as well as one would like) and the best eye-tracking solutions (like a Tobii Dynavox tablet) aren’t portable.

Why shouldn’t these interfaces be as modern and fluid as any other? The team set about making a UI with this and the capabilities of their next-generation EEG in mind.

Image of the target Cognixion interface as it might appear to a user, with buttons for yes, no, phrases and tools.

Image Credits: Cognixion

Their solution takes bits from the old paradigm and combines them with modern virtual assistants and a radial design that prioritizes quick responses and common needs. It all runs in an app on an iPhone, the display of which is reflected in a visor, acting as a HUD and outward-facing display.

Within easy reach of, if not a single thought, then at least a moment’s concentration or a tilt of the head, are everyday questions and responses — yes, no, thank you, etc. Then there are slots to put prepared speech into — names, menu orders, and so on. And then there’s a keyboard with word- and sentence-level prediction that allows common words to be popped in without spelling them out.

“We’ve tested the system with people who rely on switches, who might take 30 minutes to make 2 selections. We put the headset on a person with cerebral palsy, and she typed out her name and hit play in 2 minutes,” Forsland said. “It was ridiculous, everyone was crying.”

Goldie noted that there’s something of a learning curve. “When I put it on, I found that it would recognize patterns and follow through on them, but it also sort of taught patterns to me. You’re training the system, and it’s training you — it’s a feedback loop.”

“I can be the loudest person in the room”

One person who has found it extremely useful is Chris Benedict, a DJ, public speaker, and disability advocate who himself has dyskinetic cerebral palsy. It limits his movements and ability to speak, but it doesn’t stop him from spinning (digital) records at various engagements, or from explaining his experience with Cognixion’s One headset over email. (And you can see him demonstrating it in person in the video above.)

DJ Chris Benedict wears the Cognixion Headset in a bright room.

Image Credits: Cognixion

“Even though it’s not a tool that I’d need all the time it’s definitely helpful in aiding my communication,” he told me. “Especially when I need to respond quickly or am somewhere that is noisy, which happens often when you are a DJ. If I wear it with a Bluetooth speaker I can be the loudest person in the room.” (He always has a speaker on hand, since “you never know when you might need some music.”)

The benefits offered by the headset give some idea of what is lacking from existing assistive technology (and what many people take for granted).

“I can use it to communicate, but at the same time I can make eye contact with the person I’m talking to, because of the visor. I don’t have to stare at a screen between me and someone else. This really helps me connect with people,” Benedict explained.

“Because it’s a headset I don’t have to worry about getting in and out of places, there is no extra bulk added to my chair that I have to worry about getting damaged in a doorway. The headset is balanced too, so it doesn’t make my head lean back or forward or weigh my neck down,” he continued. “When I set it up to use the first time it had me calibrate, and it measured my personal range of motion so the keyboard and choices fit on the screen specifically for me. It can also be recalibrated at any time, which is important because not every day is my range of motion the same.”

Alexa, which has been extremely helpful to people with a variety of disabilities due to its low cost and wide range of compatible devices, is also part of the Cognixion interface, something Benedict appreciates, having himself adopted the system for smart home and other purposes. “With other systems this isn’t something you can do, or if it is an option, it’s really complicated,” he said.

Next steps

As Benedict demonstrates, there are people for whom a device like Cognixion’s makes a lot of sense, and the hope is it will be embraced as part of the necessarily diverse ecosystem of assistive technology.

Forsland said that the company is working closely with the community, from users to clinical advisors like Goldie and other specialists, like speech therapists, to make the One headset as good as it can be. But the hurdle, as with so many devices in this class, is how to actually put it on people’s heads — financially and logistically speaking.

Cognixion is applying for FDA clearance to get the cost of the headset — which, being powered by a phone, is not as high as it would be with an integrated screen and processor — covered by insurance. But in the meantime the company is working with clinical and corporate labs that are doing neurological and psychological research. Places where you might find an ordinary, cumbersome EEG setup, in other words.

The company has raised funding and is looking for more (hardware development and medical pursuits don’t come cheap), and has also collected a number of grants.

The One headset may still be some years away from wider use (the FDA is never in a hurry), but that allows the company time to refine the device and include new advances. Unlike many other assistive devices, for example a switch or joystick, this one is largely software-limited, meaning better algorithms and UI work will significantly improve it. While many wait for companies like Neuralink to create a brain-computer interface for the modern era, Cognixion has already done so for a group of people who have much more to gain from it.

You can learn more about the Cognixion One headset and sign up to receive the latest at its site here.

#accessibility, #artificial-intelligence, #brain-computer-interface, #disabilities, #disability, #eeg, #gadgets, #hardware, #science, #startups, #tc, #wearables


Instagram adds a captions option for Stories and soon, Reels

Instagram is making its video Stories and Reels more accessible with the launch of a new “captions sticker” that will allow users to watch without having the sound on. The addition will not only make it easier for users who are hard of hearing or deaf to engage with video content, it also offers a way for users to watch videos when they’re somewhere they don’t want to have their sound on — and either don’t want to wear or don’t have access to headphones or earbuds.

To use the feature, creators will first record a new video using the Stories or Reels Camera in the Instagram app, or select a video to upload from their phone’s gallery. Then they’ll open the sticker tray and look for the new “Captions” sticker, which converts speech to text. They can also edit the style, the position of the caption, and the text and its color so it matches their content. When they post, the captions will appear alongside the video for everyone to see.

At launch, the feature is only available in English and in English-speaking countries, but Instagram plans to roll it out to other countries and languages soon, it says. It’s also rolling out the captions sticker first to Stories and will then begin testing it in Reels, with a broader launch to follow.

The captions sticker had been spotted last year while in development, alongside other potential new additions, like a Collab sticker, Link sticker, Reshare sticker, and others. Instagram parent Facebook also appears to have a captions sticker of its own in development. The sticker then began testing earlier this spring with a subset of Instagram users.

The addition comes only weeks after TikTok announced its own captions feature, which it calls auto captions. The two products are somewhat different, however. Auto captions automatically transcribe the speech from a TikTok video in either American English or Japanese, to start, but the text itself isn’t customizable and can be turned on or off by the viewer from the app’s share panel. It also hasn’t yet been broadly adopted, and many TikTok creators still tend to use captions they create themselves or via third-party apps.

Instagram notes it had previously launched support for captions across Threads and IGTV, but its expansion to Stories and Reels will make more of an impact, given that Instagram Stories alone is used by over 500 million people every day.

#accessibility, #apps, #captions, #creators, #deaf, #facebook, #instagram, #media, #mobile, #reels, #social, #stories, #video, #videos


SLAIT’s real-time sign language translation promises more accessible online communication

Sign language is used by millions of people around the world, but unlike Spanish, Mandarin or even Latin, there’s no automatic translation available for those who can’t use it. SLAIT claims the first such tool available for general use, which can translate around 200 words and simple sentences to start — using nothing but an ordinary computer and webcam.

People with hearing impairments, or other conditions that make vocal speech difficult, number in the hundreds of millions and rely on the same common tech tools as the hearing population. But while emails and text chat are useful and of course very common now, they aren’t a replacement for face-to-face communication. Unfortunately, there’s no easy way for signing to be turned into written or spoken words, so this remains a significant barrier.

We’ve seen attempts at automatic sign language (usually American/ASL) translation for years and years: in 2012 Microsoft awarded its Imagine Cup to a student team that tracked hand movements with gloves; in 2018 I wrote about SignAll, which has been working on a sign language translation booth using multiple cameras to give 3D positioning; and in 2019 I noted that a new hand-tracking algorithm called MediaPipe, from Google’s AI labs, could lead to advances in sign detection. Turns out that’s more or less exactly what happened.

SLAIT is a startup built out of research done at the Aachen University of Applied Sciences in Germany, where co-founder Antonio Domènech built a small ASL recognition engine using MediaPipe and custom neural networks. Having proved the basic notion, Domènech was joined by co-founders Evgeny Fomin and William Vicars to start the company; they then moved on to building a system that could recognize first 100, and now 200 individual ASL gestures and some simple sentences. The translation occurs offline, and in near real time on any relatively recent phone or computer.

Animation showing ASL signs being translated to text, and spoken words being transcribed to text back.

They plan to make it available for educational and development work, expanding their dataset so they can improve the model before attempting any more significant consumer applications.

Of course, the development of the current model was not at all simple, though it was achieved in remarkably little time by a small team. MediaPipe offered an effective, open-source method for tracking hand and finger positions, sure, but the crucial component for any strong machine learning model is data, in this case video data (since it would be interpreting video) of ASL in use — and there simply isn’t a lot of that available.

As they recently explained in a presentation for the DeafIT conference, the team first evaluated an older Microsoft database, but found that a newer Australian academic database had more and better-quality data, allowing for the creation of a model that is 92 percent accurate at identifying any of 200 signs in real time. They have augmented this with sign language videos from social media (with permission, of course) and government speeches that have sign language interpreters — but they still need more.
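
The building blocks named above (an ordinary webcam, MediaPipe hand tracking, a custom classifier) can be wired together in a few dozen lines. The MediaPipe Hands calls below are the library’s real API; the classify() step is a toy placeholder standing in for SLAIT’s trained model, so this is a sketch of the pipeline rather than the product.

```python
# Sketch of a webcam -> MediaPipe hand landmarks -> classifier pipeline.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def classify(landmarks: np.ndarray) -> str:
    """Stand-in for a trained sign classifier over 21 (x, y, z) hand landmarks."""
    return "HELLO" if landmarks[:, 1].mean() < 0.5 else "(unrecognized)"  # toy rule only

cap = cv2.VideoCapture(0)  # an ordinary webcam, as the article describes
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            vec = np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark])
            print(classify(vec))          # a real system feeds a sequence model instead
        cv2.imshow("sign input", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

The hard part, as the founders note, isn’t the tracking but the training data: a per-frame rule like the one above says nothing about signs that unfold over time.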

Animated image of a woman saying "deaf understand hearing" in ASL.

A GIF showing one of the prototypes in action — the consumer product won’t have a wireframe, obviously. Image Credits: Slait.ai

Their intention is to make the platform available to the deaf and ASL learner communities, who hopefully won’t mind their use of the system being turned to its improvement.

And naturally it could prove an invaluable tool in its present state, since the company’s translation model, even as a work in progress, is still potentially transformative for many people. With the number of video calls going on these days, and likely for the rest of eternity, accessibility is being left behind — only some platforms offer automatic captioning, transcription or summaries, and certainly none recognizes sign language. But with SLAIT’s tool people could sign normally and participate in a video call naturally rather than using the neglected chat function.

“In the short term, we’ve proven that 200 word models are accessible and our results are getting better every day,” said SLAIT’s Evgeny Fomin. “In the medium term, we plan to release a consumer facing app to track sign language. However, there is a lot of work to do to reach a comprehensive library of all sign language gestures. We are committed to making this future state a reality. Our mission is to radically improve accessibility for the Deaf and hard of hearing communities.”

From left, Evgeny Fomin, Dominic Domènech, and Bill Vicars. Image Credits: Slait.ai

He cautioned that it will not be totally complete — just as translation and transcription in or to any language is only an approximation, the point is to provide practical results for millions of people, and a few hundred words goes a long way toward doing so. As data pours in, new words can be added to the vocabulary, and new multi-gesture phrases as well, and performance for the core set will improve.

Right now the company is seeking initial funding to get its prototype out and grow the team beyond the founding crew. Fomin said they have received some interest but want to make sure they connect with an investor who really understands the plan and vision.

When the engine itself has been built up to be more reliable by the addition of more data and the refining of the machine learning models, the team will look into further development and integration of the app with other products and services. For now the product is more of a proof of concept, but what a proof it is — with a bit more work SLAIT will have leapfrogged the industry and provided something that deaf and hearing people both have been wanting for decades.

#accessibility, #artificial-intelligence, #asl, #computer-vision, #deaf, #disabilities, #hearing-impaired, #science, #sign-language, #slait, #slait-ai, #startups, #tc


Flawed data is putting people with disabilities at risk

Data isn’t abstract — it has a direct impact on people’s lives.

In 2019, an AI-powered delivery robot momentarily blocked a wheelchair user from safely accessing the curb when crossing a busy road. Speaking about the incident, the person noted how “it’s important that the development of technologies [doesn’t put] disabled people on the line as collateral”.

Alongside other minority groups, people with disabilities have long been harmed by flawed data and data tools. Disabilities are diverse, nuanced, and dynamic; they don’t fit within the formulaic structure of AI, which is programmed to find patterns and form groups. Because AI treats any outlier data as ‘noise’ and disregards it, too often people with disabilities are excluded from its conclusions.

Take for example the case of Elaine Herzberg, who was struck and killed by a self-driving Uber SUV in 2018. At the time of the collision, Herzberg was pushing a bicycle, which meant Uber’s system struggled to categorize her and flitted between labeling her as a ‘vehicle,’ ‘bicycle,’ and ‘other.’ The tragedy raised many questions for people with disabilities: would a person in a wheelchair or a scooter be at risk of the same fatal misclassification?

We need a new way of collecting and processing data. “Data” encompasses personal information, user feedback, resumes, multimedia, user metrics and much more, and it’s constantly being used to optimize our software. However, that collection isn’t done with an understanding of the spectrum of nefarious ways data can be, and is, used in the wrong hands, or of what happens when principles aren’t applied to each touchpoint of building.

Our products are long overdue for a new, fairer data framework to ensure that data is managed with people with disabilities in mind. If it isn’t, people with disabilities will face more friction, and dangers, in a day-to-day life that is increasingly dependent on digital tools.

Misinformed data hampers the building of good tools

Products that lack accessibility might not stop people with disabilities from leaving their homes, but they can stop them from accessing pivot points of life like quality healthcare, education, and on-demand deliveries.

Our tools are a product of their environment. They reflect their creators’ world view and subjective lens. For too long, the same groups of people have been overseeing faulty data systems. It’s a closed loop, where underlying biases are perpetuated and groups that were already invisible remain unseen. But as data progresses, that loop becomes a snowball. We’re dealing with machine-learning models — if they’re taught long enough that ‘not being X’ (read: white, able-bodied, cisgendered) means not being ‘normal’, they will evolve by building on that foundation.

Data is interlinked in ways that are invisible to us. It’s not enough to say that your algorithm won’t exclude people with registered disabilities. Biases are present in other sets of data. For example, in the United States it’s illegal to refuse someone a mortgage loan because they’re Black. But by basing the process heavily on credit scores — which have inherent biases detrimental to people of color — banks indirectly exclude that segment of society.

For people with disabilities, indirectly biased data could potentially be: frequency of physical activity or number of hours commuted per week. Here’s a concrete example of how indirect bias translates to software: If a hiring algorithm studies candidates’ facial movements during a video interview, a person with a cognitive disability or mobility impairment will experience different barriers than a fully able-bodied applicant.
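
To make the point concrete, here is a small synthetic simulation: a hiring model that never sees disability status but is trained on a correlated proxy (weekly commute hours, one of the examples above) still ends up with very different outcomes across groups. All numbers are invented for illustration.

```python
# Synthetic illustration of indirect (proxy) bias: the model never sees the
# protected attribute, yet its decisions track that attribute anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
has_disability = rng.random(n) < 0.15                      # hidden attribute, never a feature
# Proxy feature correlated with the hidden attribute (e.g. longer commutes).
commute_hours = rng.normal(np.where(has_disability, 9.0, 6.0), 1.5)
# Historical "hired" labels that were themselves biased against long commutes.
hired = (commute_hours + rng.normal(0, 1.0, n)) < 7.5

model = LogisticRegression().fit(commute_hours.reshape(-1, 1), hired)
pred = model.predict(commute_hours.reshape(-1, 1))

for group, mask in [("with disability", has_disability), ("without", ~has_disability)]:
    print(f"{group}: predicted hire rate = {pred[mask].mean():.0%}")
# The gap appears even though 'has_disability' was never a model input.
```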

The problem also stems from people with disabilities not being viewed as part of businesses’ target market. When companies are in the early stage of brainstorming their ideal users, people’s disabilities often don’t figure, especially when they’re less noticeable — like mental illness. That means the initial user data used to iterate products or services doesn’t come from these individuals. In fact, 56% of organizations still don’t routinely test their digital products among people with disabilities.

If tech companies proactively included individuals with disabilities on their teams, it’s far more likely that their target market would be more representative. In addition, all tech workers need to be aware of and factor in the visible and invisible exclusions in their data. It’s no simple task, and we need to collaborate on this. Ideally, we’ll have more frequent conversations, forums and knowledge-sharing on how to eliminate indirect bias from the data we use daily.

We need an ethical stress test for data

We test our products all the time — on usability, engagement, and even logo preferences. We know which colors perform better to convert paying customers, and the words that resonate most with people, so why aren’t we setting a bar for data ethics?

Ultimately, the responsibility of creating ethical tech does not just lie at the top. Those laying the brickwork for a product day after day are also liable. It was the Volkswagen engineer (not the company CEO) who was sent to jail for developing a device that enabled cars to evade US pollution rules.

Engineers, designers, product managers: we all have to acknowledge the data in front of us and think about why we collect it and how we collect it. That means dissecting the data we’re requesting and analyzing what our motivations are. Does it always make sense to ask about someone’s disabilities, sex or race? How does having this information benefit the end user?

At Stark, we’ve developed a five-point framework to run when designing and building any kind of software, service or tech. We have to address:

  1. What data we’re collecting
  2. Why we’re collecting it
  3. How it will be used (and how it can be misused)
  4. Simulate IFTTT: ‘if this, then that.’ Explain possible scenarios in which the data could be used nefariously, along with alternate solutions. For instance, how could users be impacted by an at-scale data breach? What happens if this private information becomes public to their family and friends?
  5. Ship or trash the idea

If we can only explain our data using vague terminology and unclear expectations, or by stretching the truth, we shouldn’t be allowed to have that data. The framework forces us to break down data in the simplest manner, and if we can’t, it’s because we’re not yet equipped to handle it responsibly.

Innovation has to include people with disabilities

Complex data technology is entering new sectors all the time, from vaccine development to robotaxis. Any bias against individuals with disabilities in these sectors stops them from accessing the most cutting-edge products and services. As we become more dependent on tech in every niche of our lives, there’s greater room for exclusion in how we carry out everyday activities.

This is all about forward thinking and baking inclusion into your product at the start. Money and/or experience aren’t limiting factors here — changing your thought process and development journey is free, it’s just a conscious pivot in a better direction. And while the upfront cost may be a heavy lift, the profits you’d lose from not tapping into these markets, or because you end up retrofitting your product down the line, far outweigh that initial expense. This is especially true for enterprise-level companies that won’t be able to access academia or governmental contracts without being compliant.

So early-stage companies, integrate accessibility principles into your product development and gather user data to constantly reinforce those principles. Sharing data across your onboarding, sales, and design teams will give you a more complete picture of where your users are experiencing difficulties. Later-stage companies should carry out a self-assessment to determine where those principles are lacking in their product, and harness historical data and new user feedback to generate a fix.

An overhaul of AI and data isn’t just about adapting businesses’ framework. We still need the people at the helm to be more diverse. The fields remain overwhelmingly male and white, and in tech, there are numerous first-hand accounts of exclusion and bias towards people with disabilities. Until the teams curating data tools are themselves more diverse, nations’ growth will continue to be stifled, and people with disabilities will be some of the hardest-hit casualties.

#accessibility, #artificial-intelligence, #cat-noone, #column, #data, #diversity, #ethics, #opinion, #tc


Quest for prosthetic retinas progresses towards human trials, with a VR assist

An artificial retina would be an enormous boon to the many people with visual impairments, and the possibility is creeping closer to reality year by year. One of the latest advancements takes a different and very promising approach, using tiny dots that convert light to electricity, and virtual reality has helped show that it could be a viable path forward.

These photovoltaic retinal prostheses come from the École polytechnique fédérale de Lausanne, where Diego Ghezzi has been working on the idea for several years now.

Early retinal prosthetics were created decades ago, and the basic idea is as follows. A camera outside the body (on a pair of glasses, for instance) sends a signal over a wire to a tiny microelectrode array, which consists of many tiny electrodes that pierce the non-functioning retinal surface and stimulate the working cells directly.

The problems with this are mainly that powering and sending data to the array requires a wire running from outside the eye in — generally speaking a “don’t” when it comes to prosthetics, and the body in general. The array itself is also limited in the number of electrodes it can have by the size of each, meaning for many years the effective resolution in the best case scenario was on the order of a few dozen or hundred “pixels.” (The concept doesn’t translate directly because of the way the visual system works.)

Ghezzi’s approach obviates both these problems with the use of photovoltaic materials, which turn light into an electric current. It’s not so different from what happens in a digital camera, except instead of recording the charge as an image, it sends the current into the retina as the powered electrodes did. There’s no need for a wire to relay power or data to the implant, because both are provided by the light shining on it.

Researcher Diego Ghezzi holds a contact lens with photovoltaic dots on it.

Image Credits: Alain Herzog / EPFL

In the case of the EPFL prosthesis, there are thousands of tiny photovoltaic dots, which would in theory be illuminated by a device outside the eye sending light in according to what it detects from a camera. Of course, it’s still an incredibly difficult thing to engineer. The other part of the setup would be a pair of glasses or goggles that both capture an image and project it through the eye onto the implant.

We first heard of this approach back in 2018, and things have changed somewhat since then, as a new paper documents.

“We increased the number of pixels from about 2,300 to 10,500,” explained Ghezzi in an email to TechCrunch. “So now it is difficult to see them individually and they look like a continuous film.”

Of course when those dots are pressed right up against the retina it’s a different story. After all, that’s only 100×100 pixels or so if it were a square — not exactly high definition. But the idea isn’t to replicate human vision, which may be an impossible task to begin with, let alone realistic for anyone’s first shot.

“Technically it is possible to make pixel smaller and denser,” Ghezzi explained. “The problem is that the current generated decreases with the pixel area.”

Image showing a close-up of the photovoltaic dots on the retinal implant, labeled as being about 80 microns across each.

Current decreases with pixel size, and pixel size isn’t exactly large to begin with. Image Credits: Ghezzi et al

So the more you add, the tougher it is to make it work, and there’s also the risk (which they tested) that two adjacent dots will stimulate the same network in the retina. But too few and the image created may not be intelligible to the user. 10,500 sounds like a lot, and it may be enough — but the simple fact is that there’s no data to support that. To start on that, the team turned to what may seem like an unlikely medium: VR.

Because the team can’t exactly do a “test” installation of an experimental retinal implant on people to see if it works, they needed another way to tell whether the dimensions and resolution of the device would be sufficient for certain everyday tasks like recognizing objects and letters.

A digitally rendered street scene and distorted monochrome versions below showing various ways of representing it via virtual phosphors.

Image Credits: Jacob Thomas Thorn et al

To do this, they put people in VR environments that were dark except for little simulated “phosphors,” the pinpricks of light they expect to create by stimulating the retina via the implant; Ghezzi likened what people would see to a constellation of bright, shifting stars. They varied the number of phosphors, the area they appear over, and the length of their illumination or “tail” when the image shifted, asking participants how well they could perceive things like a word or scene.

The word "AGREE" rendered in various ways with virtual phosphors.

Image Credits: Jacob Thomas Thorn et al

Their primary finding was that the most important factor was visual angle — the overall size of the area where the image appears. Even a clear image is difficult to understand if it only takes up the very center of your vision, so even if overall clarity suffers, it’s better to have a wide field of vision. The brain’s visual system is robust enough to intuit things like edges and motion even from sparse inputs.

This demonstration showed that the implant’s parameters are theoretically sound and the team can start working towards human trials. That’s not something that can happen in a hurry, and while this approach is very promising compared with earlier, wired ones, it will still be several years, even in the best-case scenario, before it could be made widely available. Still, the very prospect of a working retinal implant of this type is an exciting one and we’ll be following it closely.

#accessibility, #blindness, #computer-vision, #disabilities, #epfl, #gadgets, #hardware, #science, #tc

4 signs your product is not as accessible as you think

For too many companies, accessibility wasn’t baked into their products from the start, meaning they now find themselves trying to figure out how to inject it retrospectively. But bringing decades-long legacy code and design into the future isn’t easy (or cheap).

Businesses have to overcome the fear and uncertainty about how to do such retrofitting, address the lack of education to launch such projects, and balance the scope of these iterations while still maintaining other production work.

Among the U.S. adult population, 26% live with some form of disability, and businesses that are ignorant or slow to respond to accessibility needs are producing digital products for a smaller group of users. Someone who is a neophyte might not be able to use a product with overwhelming cognitive overhead. Someone using a product that isn’t localized may not be able to refill their prescription in a new country.

We recently saw this play out in the “cat lawyer” episode, which the kitten-faced attorney took in good humor. But it also reminded us that many people struggle with today’s basic tools, and for those who don’t, it’s hard to understand just how much this disrupts people’s personal and professional lives.

If you’re a founder with a software product out there, you probably won’t receive as loud an alarm bell as a viral cat filter video to tell you that something’s wrong. At least not immediately. Unfortunately, that time will come because social media has become the megaphone for support issues. You want to avoid that final, uncontrollable warning sign. Here are four other warning signs that make clear your product is not as accessible as you might think — and how you can address that.

1. You didn’t define a11y principles at the start of your journey

Accessibility is a key ingredient in your product cake — and it’ll always taste best when it’s added to the mix at the beginning. It’s also more time- and cost-effective, as fixing a usability issue after the product has been released can cost up to 100 times more than fixing it earlier in the development process.

Your roadmap should work toward the four principles of accessibility, described using the acronym POUR.

  • Perceivable: Your users need to be able to perceive all of the information displayed on your user interface.
  • Operable: Your users must be able to operate and navigate your interface components.
  • Understandable: Your content and the functioning of your user interface must be clearly understandable to users.
  • Robust: Your content has to be robust enough that a wide variety of users can continue to access it as technologies advance, including assistive technologies.

Without following each of these principles consistently, you cannot guarantee that your product is accessible to everybody.
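
To make these principles concrete, here is a minimal sketch, written in TypeScript against standard browser DOM APIs, of the kind of spot check each one implies. The selectors and messages are illustrative only; a real audit covers far more WCAG criteria than this.

    // Minimal POUR-style spot checks against the live DOM.
    // Illustrative only; a real audit covers many more criteria.
    function pourSpotChecks(root: Document = document): string[] {
      const issues: string[] = [];
      // Perceivable: images should carry a text alternative.
      root.querySelectorAll("img:not([alt])").forEach((img) => {
        issues.push(`Perceivable: image missing alt text (${(img as HTMLImageElement).src})`);
      });
      // Operable: anything clickable should be reachable by keyboard.
      root.querySelectorAll("[onclick]").forEach((el) => {
        const focusable =
          el instanceof HTMLElement &&
          (el.tabIndex >= 0 ||
            ["A", "BUTTON", "INPUT", "SELECT", "TEXTAREA"].includes(el.tagName));
        if (!focusable) {
          issues.push(`Operable: clickable <${el.tagName.toLowerCase()}> is not keyboard-focusable`);
        }
      });
      // Understandable: form fields should have a programmatic label.
      root.querySelectorAll("input, select, textarea").forEach((field) => {
        const id = field.getAttribute("id");
        const labelled =
          field.getAttribute("aria-label") ||
          field.getAttribute("aria-labelledby") ||
          (id && root.querySelector(`label[for="${id}"]`));
        if (!labelled) issues.push("Understandable: form field has no label");
      });
      // Robust: declare the page language so assistive tech handles it correctly.
      if (!root.documentElement.getAttribute("lang")) {
        issues.push("Robust: <html> element has no lang attribute");
      }
      return issues;
    }
    // Run in the browser console and review the findings.
    console.log(pourSpotChecks());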

The roadmap should integrate accessibility efforts into the design, development and quality assurance process, all the way through to product release and updates, where the cycle starts all over.

This means it’s vital to have everyone on your team informed and committed to accessibility. You could even go further and nominate one person from each team to lead the accessibility process and be responsible for each team’s compliance. It’s worth starting any new project with an accessibility audit so you understand exactly where your gaps are. And by syncing with sales and support teams, you can identify where users are experiencing friction.

This baking process helps you avoid legal problems in the future as a result of non-compliance. In 2019, a blind man successfully sued Domino’s after he was unable to order food on the Domino’s website and mobile app, despite using screen-reading software. Beyoncé’s company was sued by a blind woman that same year. Product owners are wide open to lawsuits if they don’t implement the Web Content Accessibility Guidelines.

To help you on your way, IBM’s Carbon Design System is an open-source design system for digital products that offers free guidelines to build an accessible product, including for people with physical or cognitive disabilities. In addition, software tools exist that can help you do accessibility checks ahead of time rather than when the product is finished.

2. You’re treating a11y like a set-it-and-forget-it

Design trends evolve fast in the tech world. Your team is probably staying on top of the latest software or mobile features, but are they paying attention to accessibility?

A11y needs maintenance; the requirements for the web and mobile platforms are changing all the time and it’s important (as well as necessary) to stay on top of those changes. If you’re not carrying out constant tweaks and upgrades, chances are that you’ve racked up a few accessibility issues over time.

Plan regular meetings where you review and discuss your products’ accessibility and a11y compliance. Look at what other products are doing to be more accessible and attend courses about inclusive design (e.g., TechCrunch Sessions). Platforms like the A11Y Project are also incredibly useful resources for teams to stay up to date, and they also offer books, tutorials, professional support and professional testers.

3. You and your team haven’t tried out a11y tools

The best accessibility tool you have is your team itself. Building a product with a diverse group of people will mean you encounter and rectify any barriers to use faster and can innovate with greater impact — after all, people with disabilities are some of the world’s greatest innovators.

Outside of your team makeup, ask yourself: Have you ever used a screen reader? Or tried to navigate your website using only your keyboard? Seen your designs simulated against various types of vision?

If the answer is no, chances are you’re letting key accessibility features slip through the cracks. These tools put you in the shoes of someone with impairments and force you toward a better appreciation of their needs.

Try and get your team using these tools as early as possible, especially if you’re struggling to convey to them the importance of a11y. Once you’ve broadened your perspective, it’ll genuinely be harder to not see how people with different abilities are experiencing your product. Which is why you should come back to your product afterward, as a user, and explore it through a new lens.

4. You aren’t talking to your users

Lastly, there’s little chance you’ve built a truly accessible product without actually talking to its users. The general population is the most diverse set of critics to warn you if your product’s falling short for people of different backgrounds and abilities. Every single user experiences a product uniquely, and regardless of all the effort you’ve put in until now, there will likely be issues.

Lend an ear to a wide range of users, throughout the product life cycle. You can do this by doing user testing with each update, asking users to complete surveys on their in-app experience, and holding focus groups that proactively enlist people with a spectrum of needs.

Accessible design is just good design. It’s a misconception that it only improves UX for people with disabilities — it provides a better experience for everyone. And all founders want their product to reach as many people as possible. Once you put in the initial effort and embrace it, it becomes easier, like another tool in your kit. You won’t get it 100% right on the first try. But this is about progress, not perfection.

#accessibility, #column, #design, #developer, #diversity, #opinion, #tc, #usability, #ux, #web-accessibility

Android’s latest update will let you schedule texts, secure your passwords, and more

Google today announced the next set of features coming to Android, including a new password checkup tool, a way to schedule your texts and other improvements to products like its screen reader TalkBack, Maps, Assistant, and Android Auto. This spring 2021 release is the latest in a series of smaller update bundles, similar to iOS “point releases,” that add new functionality and features to Android outside of the larger update cycle.

On the security front, this update will integrate a feature called Password Checkup into devices running Android 9 and above to alert you to passwords you’re using that have been previously exposed.

The feature works with Autofill with Google, which lets you quickly sign in to apps and other services on Android. Now, when you use Autofill, Password Checkup will check your credentials against a list of known compromised passwords, then notify you if your credentials appear on that list and what to do about it.

Image Credits: Google

The prompt can also direct you to your Password Manager page on Google, where you can review all your other saved Autofill passwords for similar issues.
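
Google hasn’t detailed the on-device protocol here, so the sketch below is not how Password Checkup works internally. It only illustrates the general idea of checking a password against a breach corpus without sending the plaintext anywhere, using prefix-based k-anonymity against the public Have I Been Pwned range API (a separate, third-party service). TypeScript, assuming Node 18 or newer for the built-in fetch:

    import { createHash } from "crypto";
    // Check a password against the public Pwned Passwords corpus using
    // k-anonymity: only the first 5 hex characters of the SHA-1 hash ever
    // leave the machine. Illustrative only; not Google's own protocol.
    async function timesPwned(password: string): Promise<number> {
      const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
      const prefix = sha1.slice(0, 5);
      const suffix = sha1.slice(5);
      const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
      const body = await res.text();
      // The response is lines of "SUFFIX:COUNT"; look for our suffix.
      for (const line of body.split("\n")) {
        const [candidate, count] = line.trim().split(":");
        if (candidate === suffix) return parseInt(count, 10);
      }
      return 0;
    }
    // Usage: warn the user if the password shows up in known breaches.
    timesPwned("correct horse battery staple").then((n) => {
      if (n > 0) console.log(`Seen ${n} times in breaches; prompt the user to change it.`);
    });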

To use this feature, you’ll need to have Autofill enabled. (Settings > System > Languages & Input > Advanced, then tap Autofill. Tap Google to ensure the setting is enabled.)

The new Messages feature rolling out in this update could see prolific texters considering a switch to Android, as it’s one of the most in-demand features since SMS was invented: the ability to schedule your texts.

Image Credits: Google

Android’s new scheduled send feature will allow you to compose a message ahead of time, whenever it’s convenient for you, then schedule it to be sent later at a more appropriate time. This can be particularly helpful if you have friends, family or colleagues in other time zones and are hesitant to bother them when they could be sleeping or enjoying family time after work. It can also help those who often remember something they meant to text late at night, when it’s too late to send the message.

To use this feature, you’ll just write the text as usual, then press and hold the send button to select a date and time to deliver the message. You’ll need the latest version of the Android Messages app for this feature to work.

Another flagship feature arriving in this Android release is aimed at making Android’s screen reader, known as TalkBack, easier to use for those users who are blind or have low vision. TalkBack today allows users to navigate their device with their voice and gestures in order to read, write, send emails, share social media, order delivery and more.

Image Credits: Google

The updated version (TalkBack 9.1) will now include a dozen new multi-finger gestures to interact with apps and perform common actions, like selecting and editing text, controlling media or getting help. This will work on Pixel and Samsung Galaxy devices from One UI 3 onwards, Google says.

Google is also responding to user feedback over TalkBack’s confusing multiple menu system, and has returned to the single menu system users wanted. This single menu will adapt to context while also providing consistent access to the most common functions.

Other TalkBack improvements include new gestures — like an up and right swipe to access over 25 voice commands — and new reading controls that let users either skim a page, read only headlines, listen word-by-word or even character-by-character.

 

Users can also now add or remove options from the TalkBack menu or the reading controls to further customize the interface to their needs. Plus, TalkBack’s braille keyboard added support for Arabic and Spanish.

The spring update also adds more minor improvements to Maps, Assistant and Android Auto.

Maps is getting a dark mode that you can enable as the default by going to Settings > Theme and selecting “Always in Dark Theme.”

 

Image Credits: Google

 

Google Assistant’s update will let you use the feature when the phone is locked or further away from you, by turning on Lock Screen Personal Results in Assistant’s Settings then saying “Hey Google,” as needed.

The new cards that appear when the phone is locked are meant to be easier to read with just a glance, Google says.

And finally, Android Auto will now include custom wallpapers and voice-activated games like trivia and “Jeopardy!” which you can ask for via the “Hey Google” command.

There are also now shortcuts on the launch screen for accessing your contacts, or using Assistant to complete tasks like checking the weather or adjusting the thermostat, for example. Cars with wider screens will gain access to a split screen view with Google Maps on one side and media controls on the other.

Android Auto’s features will roll out in the “coming days” on phones running Android 6.0 and higher and work with compatible cars, Google notes.

 

#accessibility, #android, #apps, #google, #maps, #messages, #mobile, #mobile-os

Microsoft offers new accessibility testing service for PC and Xbox games

As gaming has grown from niche to mainstream over the past decades, it has also become both much more, and much less accessible to people with disabilities or other considerations. Microsoft aims to make the PC and Xbox more inclusive with a new in-house testing service that compares games to the newly expanded Xbox Accessibility Guidelines.

The Microsoft Game Accessibility Testing Service, as it’s called, is live now and anyone releasing a game on Windows or an Xbox platform can take advantage of it.

“Games are tested against the Xbox Accessibility Guidelines by a team of subject matter experts and gamers with disabilities. Our goal is to provide accurate and timely feedback, turned around within 7 business days,” said Brannon Zahand, senior gaming accessibility program manager at the company.

It’s not free (though Microsoft did not specify costs, which probably differ depending on the project), so if you want to know what the reports look like without diving in cash in hand, talk to your account rep and they can probably hook you up with a sample. But you don’t need final code to send it in.

“As game accessibility is much easier to implement early in a game’s development, we encourage game developers to submit as soon as they have a representative build that incorporates core UI and game experiences,” said Zahand. “That said, developers who already have released their products and are keeping them fresh with new updates and content may also find this testing valuable, as often there are relatively small tweaks or feature additions that can be made as part of a content update that will provide benefits for gamers with disabilities and others who take advantage of accessibility features.”

The guidelines themselves were introduced in January of last year, and include hundreds of tips and checks to incorporate or consider when developing a game. Microsoft has done the right thing by continuing to support and revise the guidelines; the “2.0” version published today brings a number of improvements, summarized in this Xbox blog post.

Generally speaking, the changes are about clarity and ease of application, giving developers more direct and simple advice, but there are also now many examples from published games showing that yes, this stuff is not just theoretically possible.

Image of an options screen for a Forza racing game where many aspects of the game have their own difficulty setting.

Seems obvious to do this now. Image Credits: Microsoft

Everything from the UI to control methods and difficulty settings is in there, and they actually make for compelling reading for any interested gamer. Once you see how some games have created granular difficulty settings or included features or modes to improve access without affecting the core of the game, you start to wonder why they aren’t everywhere.

There are also more nuts and bolts tips, such as how best to structure a menu screen or in-game UI so that a screen reader can access the information.

Some argue that adding or subtracting some features can interfere with the way a game is “meant” to be played. And indeed one does struggle to imagine how famously difficult and obtuse games like the Dark Souls series could integrate such changes gracefully. But for one thing, that is a consideration for very smart developers to work out on their end, and for another, these options of which we speak are almost all able to be toggled or adjusted, as indeed many things can be even in the most hardcore titles. And that’s without speaking to the lack of consideration for others in different circumstances evinced in such a sentiment.

Microsoft has made several moves towards accessibility in gaming in recent years, the most prominent of which must be the Xbox Adaptive Controller, which lets people plug in all manner of assistive devices to work as joysticks, buttons, and triggers — making it much easier for a much wider spectrum of people to play games on the company’s platforms.

#accessibility, #gaming, #microsoft, #tc, #xbox, #xbox-one, #xbox-series-x

Accessibility overlay startup AccessiBe closes $28M series A

If you want to make your website accessible right now but lack the resources for the kind of serious revamp it would take, an “accessibility overlay” may take the pressure off while the work gets done. Though critics argue that these tools aren’t a permanent solution, AccessiBe has raised $28 million to show that its approach is an important part of making the entire web available to everyone.

It’s a problem often faced by small businesses: their website may not have been built to modern accessibility standards, and not only needs a deep dive by professionals to fix, but ongoing work to keep up to date and fix errors. This sort of work can be very expensive, and SMBs may not have the cash to lay out. This is not only a bummer for anyone with a disability who visits the site, but it exposes the business to legal action.

At the enterprise level accessibility is increasingly becoming part of the development process, and startups like Fable and Evinced are looking to push things forward there. For those whose development budgets compete with rent and food money, however, other approaches may be desired.

AccessiBe is one of a few new services called accessibility overlays that claim to provide total ADA compliance and other features just by installing a line of JavaScript. If it sounds too good to be true… well, it is and it isn’t.

What the overlay code does is scrub the whole website’s user-facing code for issues like unlabeled buttons, fields that aren’t addressable by keyboard navigation, images without alt text, and other common accessibility issues. AccessiBe’s system does so with the addition of machine learning to match features of the target site to those in its training database, so even if something is really poorly coded, it can still be recognized by its context or clear intention.
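
None of this is accessiBe’s actual code, and a real overlay does far more (including the machine learning matching described above), but a stripped-down TypeScript sketch of the kind of remediation pass an overlay might run on page load looks something like this; the heuristics are deliberately simplistic:

    // A toy "overlay" pass: find common accessibility gaps in the rendered
    // page and patch them in place. Real products are far more sophisticated.
    function applyOverlayFixes(): void {
      // Unlabeled buttons: a real tool would infer intent; here we just add a placeholder.
      document.querySelectorAll<HTMLButtonElement>("button").forEach((btn) => {
        if (!btn.getAttribute("aria-label") && !btn.textContent?.trim()) {
          btn.setAttribute("aria-label", "Button"); // placeholder label
        }
      });
      // Images without alt text: mark them decorative so screen readers
      // skip them instead of reading out the file name.
      document.querySelectorAll<HTMLImageElement>("img:not([alt])").forEach((img) => {
        img.setAttribute("alt", "");
      });
      // Click targets keyboard users can't reach: make them focusable and
      // let Enter trigger the same handler as a click.
      document.querySelectorAll<HTMLElement>("div[onclick], span[onclick]").forEach((el) => {
        el.tabIndex = 0;
        el.setAttribute("role", "button");
        el.addEventListener("keydown", (e) => {
          if (e.key === "Enter") el.click();
        });
      });
    }
    applyOverlayFixes();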

You can try it out yourself at a handful of websites by appending #showacsb: it’s live on Everlast, Tupperware, and Playmobil (among many others).

Screenshot of the Everlast website with accessiBe's overlay on the right with options to adjust text and visuals.

The result is a website that works in many ways as if it was designed with accessibility in mind, fixing a lot of the basic problems that prevent visitors with disabilities from using a site, and providing plenty of additional quality-of-life features like improving contrast, stopping animations, changing the font, etc. The overlay can be activated automatically, or manually by users, who are prompted by screen reader text that tells them how to do so.

AccessiBe’s agent scans the site regularly and updates what users will see, and the website owner pays monthly (from $40 to a couple hundred a month depending on the size of the site) to have the tool available. This demo video does a pretty good job of showing the problems the tool fixes.

You may wonder how this could be considered anything but good for accessibility, but there’s serious debate over the role overlays should play in making the web more accessible. The implication of such a tool is that all that’s needed to make any website accessible is a single line of code. This leads to a couple of problems.

First, it’s questionable whether automated processes like accessiBe’s (and others aimed at developers, like Evinced and AudioEye) can actually catch and fix every accessibility problem. There are many that slip past even the best analysis and others that resist automated fixes. (The company offers a free assessment tool in case you’re curious what it would and wouldn’t catch at your website.)

AccessiBe CEO Shir Ekerling said that this concern has been ameliorated by recent improvements to the technology.

“AccessiBe scans and analyzes every website every day. We know, automatically, exactly what are the interactive elements of the site, where people click, put their mouse, or stay the longest,” he explained. “Combining this with probability algorithms we run (matching every site element to every site element of over 100,000 websites including an insane amount of artificial data), we are able to know exactly what each and every element of the site is and adjust it so it is accessible adhering to WCAG 2.1 level AA.” (The WCAG guidelines can be perused here.)

There are likewise concerns that overlays can interfere with existing accessibility tools, for example if a user has a browser or add-on that automatically captions images or reads out text a certain way; Ekerling said accessiBe defers to user-side tools on these things. The overlay also works on mobile browsers, which many previous overlays didn’t.

There’s also the more philosophical question of whether, by having accessibility essentially something you can turn on and off, a site owner is maintaining a sort of “separate but equal access” to their content. That’s a big no-no in this field, like a restaurant having a separate dining room for people with wheelchairs rather than adding a ramp to the front door. Of course since accessiBe doesn’t make a separate site or permanently modify the source code, it clearly isn’t that. But it’s equally clear that the base site isn’t built to be accessible — it just has a layer of accessibility spread over the top.

The company’s position is that their overlay provides everything needed for ADA compliance and WCAG best practices, and as such constitutes a complete solution for making a website accessible. Others contend that this is not the case and at any rate that developers should not be incentivized to ignore accessibility while building because they think a third party service can provide it with one line of code.

accessiBe is also working on a user-side version that isn’t reliant on a website including the widget, something that could potentially be very helpful to a lot of people but would do little to make the web more accessible in its fundamentals. That takes dedication by many independent actors and businesses. The recently hired Michael Hingson, “chief visionary officer,” acknowledges this while also asserting the usefulness of well-done overlays.

The company has raised $28M in the last year, all from K1 Investment Management. K1 initially invested $12M last May, but more than doubled that commitment after accessiBe tripled its ARR in 2020. Much of the cash will be used for further R&D and to consult and hire more people with disabilities for testing, feedback, and development.

Every website should be accessible, that much everyone can agree on. But it’s a long, complicated, and expensive road to get there. Tools like accessiBe may not be a permanent solution, but they can make a website more accessible tomorrow — and potentially less vulnerable to lawsuits alleging noncompliance with ADA rules — where deeper changes may take months or years to achieve.

#accessibility, #artificial-intelligence, #funding, #fundings-exits, #recent-funding, #startups

Learn about the importance of accessible product design at TechCrunch Sessions: Justice

When you are able to navigate a world that is designed for you, it’s easy to avoid thinking about how the world is designed for you. But it can be different if you are disabled.

At TechCrunch Sessions: Justice on March 3, we will examine the importance of ensuring accessible product design from the beginning. We’ll ask how the social and medical models of disability influence technological evolution. Integrating the expertise of disabled technologists, makers, investors, scientists and software engineers into the DNA of your company from the very beginning is vital to the pursuit of a functioning and equitable society. It could also mean you don’t leave money on the table.

Join us at TechCrunch Sessions: Justice for a wide-ranging discussion as we attempt to answer these questions and further explore inclusive design with Cynthia Bennett, Mara Mills and Srin Madipalli.

Cynthia Bennett is a post-doc at Carnegie Mellon University’s Human-Computer Interaction Institute, as well as a researcher at Apple. Her research focuses on human-computer interaction, accessibility and Disability Studies, and, she says on her website, spans “the critique and development of HCI theory and methods to designing emergent accessible interactions with technology.” Her work includes Biographical Prototypes: Reimagining Recognition and Disability in Design and The Promise of Empathy: Design, Disability, and Knowing the “Other.”

Mara Mills is an associate professor of Media, Culture, and Communication at New York University and a co-founder and co-director of the NYU Center for Disability Studies. Mills’ research focuses on sound studies, disability studies and history. (You can hear her discuss the intersection of artificial intelligence and disability with Meredith Whittaker, co-founder of the AI Now Institute and Minderoo Research Professor at NYU, and Sara Hendren, professor at Olin College of Engineering and author of the recently published What Can a Body Do: How We Meet the Built World, on the TechCrunch Mixtape podcast here.)

Srin Madipalli is an investor and co-founder of Accomable, an online platform that helped users find accessible vacation properties, which he sold to Airbnb. His advocacy work focuses on disability inclusion in the workplace, as well as advising tech companies on accessibility. Be sure to snag your tickets for just $5 here.

Make sure you can join us for this conversation and more at TC Sessions: Justice on March 3. Secure your seat now!

 

#accessibility, #cynthia-bennett, #mara-mills, #srin-madipalli, #tc, #techcrunch-sessions-justice

Evinced raises $17M to speed up accessibility testing for the web

Making and keeping the web accessible is a full-time job, and like any other development role, accessibility tools need to evolve to keep up with the times. Evinced is a startup that promises both richer and faster checks of websites in production or in progress, and it just raised $17M to take its tools to the next level.

Because accessibility problems can happen in so many ways, it often takes a lot of manual code review to catch the errors. Even a team thinking about making their site fully accessible from the start — which should be everyone — can miss that this script doesn’t hook into that variable right if this menu is opened, and so on.

There’s automated code review, but it can be slow and bulky. Evinced is making a powerful, streamlined tool that checks a website in a fraction of a second while you’re using it, presenting the problems in a way that’s easy for devs to share and address. It also doesn’t trip up on the fancy, JavaScript-heavy web apps that millions use today.

Here’s an example of a modern website that looks fine but is obviously (for demo purposes) riddled with accessibility issues. The video gives a good breakdown of what this part of the Evinced product does:

Honestly, that’s how it feels like it ought to look, but existing enterprise-level tools probably aren’t quite so efficient. And as you can see, the tool responds instantly while the user (that is to say, the developer) proceeds through the various actions the site enables. It could be, after all, that auditing the site before anyone fills in a form or pulls down any menus could give a misleading green light.

The inspector also brings in a bit of AI in the form of smart rules and computer vision, so if an element looks like a menu or button but isn’t labeled correctly, it isn’t fooled. Those elements do have distinct styles and roles: if something can be clicked and turns into a list that the user chooses from, well, it’s a pulldown menu whether it’s called that or not.
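
Evinced hasn’t published its detection logic, so the following is only a hand-rolled TypeScript illustration of the general idea: infer that an unlabeled element is really acting as a button or pulldown from how it looks and behaves, then flag the missing semantics. The heuristics and thresholds here are hypothetical.

    // Guess the implicit role of an element from how it looks and behaves,
    // then flag it if the markup doesn't say so. Purely illustrative heuristics.
    interface RoleFinding {
      element: Element;
      inferredRole: "button" | "menu";
      problem: string;
    }
    function inferMissingRoles(): RoleFinding[] {
      const findings: RoleFinding[] = [];
      document.querySelectorAll<HTMLElement>("div, span").forEach((el) => {
        const style = getComputedStyle(el);
        const clickable = el.onclick !== null || style.cursor === "pointer";
        const declaredRole = el.getAttribute("role");
        // Looks and acts like a button, but isn't announced as one.
        if (clickable && !declaredRole && !el.getAttribute("aria-label")) {
          findings.push({
            element: el,
            inferredRole: "button",
            problem: "clickable element with no role or accessible name",
          });
        }
        // Clickable and contains a list of choices: behaves like a pulldown menu.
        const actsLikeMenu = clickable && el.querySelector("ul, ol, [role='listbox']") !== null;
        if (actsLikeMenu && declaredRole !== "menu" && declaredRole !== "listbox") {
          findings.push({
            element: el,
            inferredRole: "menu",
            problem: "expandable list of options not exposed as a menu",
          });
        }
      });
      return findings;
    }
    console.table(inferMissingRoles());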

Image of Evinced's tool pointing out accessibility problems on a webpage.

Image Credits: Evinced

Naturally there are also quick fixes suggested and the ability to easily export the issues for formal inspection by the boss, as well as other expected features for a web development tool. It’s available as a Chrome extension, or as an API or automated part of other analysis or commit actions, throwing its list of errors in with the rest.

The company formed back in 2018, when they started development. The next year they hooked up with a few large enterprises to see about integrating and testing within their ecosystems. Capital One became their biggest customer and is now an investor.

“We have since deployed our products in production at Capital One (means they are used every day – and power their end-to-end accessibility operations – see the Capital One blog) and others. These are paying customers that have an enterprise license,” said Founder and CEO, Navin Thandani.

Indeed, as Capital One explains:

Capital One partnered with Evinced early, to guide their development with a particular focus on:

  • helping developers release accessible code
  • integrating multiple automated testing steps through the build and deployment lifecycle
  • building products that can automatically scan for accessibility across a full web property (including through logins and internal repositories), and do this fast.

We’ve seen Evinced discover as much as 10x more critical accessibility issues than we were previously finding through automated testing alone. An even greater number of issues are discovered when a site is more interactive, including keyboard and screen reader usability issues.

Automated testing on a large enterprise scale can be an extremely complex and time consuming effort. Evinced is speedy and reliable, with 40x faster execution, enabling us to cut our processing time in some cases from 4-5 days down to less than 3 hours (and is being further optimized).

Glowing words, even if they are from an investor (technically Capital One Ventures, but still).

The company’s $17M series A was co-led by M12, BGV and Capital One Ventures, and included previous investor Engineering Capital.

As a sort of debut celebration present, Evinced is announcing its free tiers of service, including an iOS app accessibility debugger, which should be helpful to all the folks making apps who don’t know a thing about WCAG guidelines and ARIA roles. There’s also a free “community edition” site scanner that admins can sign up to be approved for, and a free trial for enterprises that want to give it a shot.

#accessibility, #artificial-intelligence, #developer, #evinced, #funding, #fundings-exits, #recent-funding, #startups, #tc

White House, dark mode: Biden admin refreshes Presidency’s website, vows accessibility

WhiteHouse.gov, the official website for all Presidential actions and efforts, is among the first things to be changed up under the freshly inaugurated President Biden. A fashionable dark mode has appeared, along with a large-text toggle for straining eyes, and the webmaster has committed to making the whole site conform to the latest accessibility guidelines.

The look isn’t so very different from the previous administration’s site — they’re both fairly modern and minimal experiences, with big photos up front and tidy lists of priorities and announcements once you drill down into a category.

Animation showing dark and light modes on whitehouse.gov

Image Credits: White House

But one big design change implemented by the new administration that many will appreciate is the inclusion of a dark mode, or high contrast mode, and a large type toggle.

Dark modes have been around forever, but became de rigueur when Apple implemented its own system-wide versions on iOS and macOS a while back. It’s just easier on the eyes in many ways, and at any rate it’s nice to give users options.

The WhiteHouse.gov dark mode changes the headline type from a patriotic blue to an eye-friendly off-white, with links a calming Dijon. Even the White House logo itself goes from a dark blue background to full black with a white border. It’s all very tasteful, and if anything seems like a low contrast mode, not high.

The large type mode does what it says, making everything considerably bigger and easier to tap or click. The toggles, it must be said, are a bit over-prominent, but they’ll probably tweak that soon.

More important is the pledge in the accessibility section:

This commitment to accessibility for all begins with this site and our efforts to ensure all functionality and all content is accessible to all Americans.

Our ongoing accessibility effort works towards conforming to the Web Content Accessibility Guidelines (WCAG) version 2.1, level AA criteria.

The WCAG guidelines are a set of best practices for designing a website so that its content can be easily accessed by people who use screen readers, need captions for audio, or can’t use a mouse or touchscreen easily. The guidelines aren’t particularly hard to meet, but as many have pointed out, it’s harder to retrofit a website to be accessible than to design it for accessibility from the start.

One thing I noticed was that many of the photos on the White House website have alt text or visible captions attached — these help visually impaired visitors understand what’s in an image. Here’s an example:

Screenshot showing the alt text of a photo of VP Kamala Harris and her family

Image Credits: White House

Normally that alt text would be read out by a screen reader when it got to the image, but it’s generally not made visible.

Unless the metadata was stripped from the previous administration’s site (it’s archived here), none of the photos I checked had text descriptions there, so this is a big improvement. Unfortunately some photos (like the big header photo on the front page) don’t have descriptions, something that should probably be remedied.
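
If you want to run the same informal check on any page, a throwaway snippet in the browser console will list the images that lack a text description. This is just the quick inspection described above, not an official audit tool (TypeScript, standard DOM APIs):

    // List every image with no alt attribute at all, plus the ones explicitly
    // marked decorative with an empty alt. Run in the browser dev console.
    const images = Array.from(document.querySelectorAll<HTMLImageElement>("img"));
    const missing = images.filter((img) => !img.hasAttribute("alt"));
    const decorative = images.filter((img) => img.getAttribute("alt") === "");
    console.log(`${missing.length} of ${images.length} images have no alt text at all:`);
    missing.forEach((img) => console.log("  " + img.src));
    console.log(`${decorative.length} are explicitly marked decorative (alt="").`);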

Accessibility in other places will mean prompt inclusion of plaintext versions of governance items and announcements (versus PDFs or other documents), captions on official videos and other media, and as the team notes, lots of little improvements that make the site better for everyone who visits.

It’s a small thing in a way, compared with the changes expected to accompany the new administration, but small things tend to pile up and become big things.

As Microsoft’s Isaac Hepworth noted, there’s still lots of work to do, and that’s why the U.S. Digital Service hid a little message in the source code:

Section of source code asking for help from the US Digital Services administration

Image Credits: White House

If you’re interested in helping out, sign up here.

#accessibility, #biden-administration, #design, #government, #white-house

Facebook and Instagram’s AI-generated image captions now offer far more details

Every picture posted to Facebook and Instagram gets a caption generated by an image analysis AI, and that AI just got a lot smarter. The improved system should be a treat for visually impaired users, and may help you find your photos faster in the future.

Alt text is a field in an image’s metadata that describes its contents: “A person standing in a field with a horse,” or “a dog on a boat.” This lets the image be understood by people who can’t see it.

These descriptions are often added manually by a photographer or publication, but people uploading photos to social media generally don’t bother, if they even have the option. So the relatively recent ability to automatically generate one — the technology has only just gotten good enough in the last couple years — has been extremely helpful in making social media more accessible in general.

Facebook created its Automatic Alt Text system in 2016, which is eons ago in the field of machine learning. The team has since cooked up many improvements to it, making it faster and more detailed, and the latest update adds an option to generate a more detailed description on demand.

The improved system recognizes 10 times more items and concepts than it did at the start, now around 1,200. And the descriptions include more detail. What was once “Two people by a building” may now be “A selfie of two people by the Eiffel Tower.” (The actual descriptions hedge with “may be…” and will avoid including wild guesses.)

But there’s more detail than that, even if it’s not always relevant. For instance, in this image the AI notes the relative positions of the people and objects:

The Facebook smartphone app showing detailed captions for an image.

Obviously the people are above the drums, and the hats are above the people, none of which really needs to be said for someone to get the gist. But consider an image described as “A house and some trees and a mountain.” Is the house on the mountain or in front of it? Are the trees in front of or behind the house, or maybe on the mountain in the distance?

In order to adequately describe the image, these details should be filled in, even if the general idea can be gotten across with fewer words. If a sighted person wants more detail they can look closer or click the image for a bigger version — someone who can’t do that now has a similar option with this “generate detailed image description” command. (Activate it with a long press in the Android app or a custom action in iOS.)

Perhaps the new description would be something like “A house and some trees in front of a mountain with snow on it.” That paints a better picture, right? (To be clear, these examples are made up but it’s the sort of improvement that’s expected.)
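
Facebook hasn’t published the generation pipeline, but the basic step described here (turning detected objects and their positions into a hedged sentence) can be sketched roughly as follows. The detection data, labels and phrasing rules are all made up for illustration (TypeScript):

    // Turn object detections into a hedged alt-text string with a rough
    // spatial relation. The detections and wording are illustrative only.
    interface Detection {
      label: string;
      // Normalized bounding box: top-left corner plus width and height (0..1).
      x: number;
      y: number;
      w: number;
      h: number;
    }
    function describe(detections: Detection[]): string {
      if (detections.length === 0) return "May be an image.";
      let sentence = `May be an image of ${detections.map((d) => d.label).join(", ")}`;
      // Add one simple spatial relation between the first two detections.
      if (detections.length >= 2) {
        const [a, b] = detections;
        const relation = a.y + a.h / 2 < b.y + b.h / 2 ? "above" : "below";
        sentence += `; the ${a.label} is ${relation} the ${b.label}`;
      }
      return sentence + ".";
    }
    // Example: a person seated above a drum in the frame.
    console.log(describe([
      { label: "person", x: 0.2, y: 0.1, w: 0.6, h: 0.4 },
      { label: "drum", x: 0.3, y: 0.6, w: 0.4, h: 0.3 },
    ]));
    // -> "May be an image of person, drum; the person is above the drum."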

The new detailed description feature will come to Facebook first for testing, though the improved vocabulary will appear on Instagram soon. The descriptions are also kept simple so they can be easily translated to other languages already supported by the apps, though the feature may not roll out in other countries simultaneously.

#accessibility, #alt-text, #artificial-intelligence, #captions, #facebook, #instagram, #social, #tc

Imagine being blind and trying to attend a virtual event. Try that next time you stage one.

How do you make a virtual event accessible for people who are blind or visually impaired?

When I started work on Sight Tech Global back in June this year, I was confident that we would find the answer to that question pretty quickly. With so many virtual event platforms and online ticketing options available to virtual event organizers, we were sure at least one would meet a reasonable standard of accessibility for people who use screen readers or other devices to navigate the Web.

Sadly, I was wrong about that. As I did my due diligence and spoke to CEOs at a variety of platforms, I heard a lot of “we’re studying WCAG [Web Content Accessibility Guidelines] requirements” or “our developers are going to re-write our front-end code when we have time.” In other words, these operations, like many others on the Web, had not taken the trouble to code their sites for accessibility at the start, which is the least costly and fairest approach, not to mention the one compliant with the ADA.

This realization was a major red flag. We had announced our event dates – Dec 2-3, 2020 – and there was no turning back. Dmitry Paperny, our designer, and I did not have much time to figure out a solution. No less important than the dates was the imperative that the event’s virtual experience work well for blind attendees, given that our event was really centered on that community.

We decided to take Occam’s razor to the conventions surrounding virtual event experiences and answer a key question: What was essential? Virtual event platforms tend to be feature heavy, which compounds accessibility problems. We ranked what really mattered, and the list came down to three things:

  • live-stream video for the “main stage” events
  • a highly navigable, interactive agenda
  • interactive video for the breakout sessions.

We also debated adding a social or networking element as well, and decided that was optional unless there was an easy, compelling solution.

The next question was: what third-party tools could we use? The very good news was that YouTube and Zoom get great marks for accessibility. People who are blind are familiar with both and many know the keyboard commands to navigate the players. We learned this largely by word of mouth at first and then found ample supporting documentation at YouTube and Zoom. So we chose YouTube for our main stage programming and Zoom for our breakouts. It’s helpful, of course, that it’s very easy to incorporate both YouTube and Zoom in a website, which became our plan.
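
For what it’s worth, the embed itself is also where one small accessibility detail lives: giving the player’s iframe a descriptive title so screen reader users know what they’ve landed on. A minimal TypeScript sketch, with a placeholder container ID and video ID rather than anything from the real event:

    // Build a YouTube embed with the detail a screen reader user needs most:
    // a descriptive title announced when focus lands on the player.
    // The container ID and video ID below are placeholders.
    function embedLivestream(containerId: string, videoId: string, title: string): void {
      const iframe = document.createElement("iframe");
      iframe.src = `https://www.youtube.com/embed/${videoId}`;
      iframe.title = title; // announced by screen readers
      iframe.width = "960";
      iframe.height = "540";
      iframe.allowFullscreen = true;
      iframe.setAttribute("allow", "autoplay; encrypted-media");
      const container = document.getElementById(containerId);
      if (container) container.appendChild(iframe);
    }
    embedLivestream("main-stage", "VIDEO_ID_PLACEHOLDER", "Sight Tech Global main stage (live)");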

Where to host the overall experience was the next question. We wanted to be able to direct attendees to a single URL in order to join the event. Luckily, we had already built an accessible website to market the event. Dmitry had learned a lot in the course of designing and coding that site, including the importance of thinking about both blind and low-vision users. So we decided to add the event experience to our site itself – instead of using a third-party event platform – by adding two elements to the site navigation: Event (no longer live on the site) and Agenda.

The first amounted to a “page” (in WordPress parlance) that contained the YouTube live player embed, and beneath that text descriptions of the current session and the upcoming session, along with prominent links to the full Agenda. Some folks might ask, why place the agenda on a separate page? Doesn’t that make it more complicated? Good question, and the answer was one of many revelations that came from our partner Fable, which specializes in usability testing for people with disabilities. The answer, as we found time and again, was to imagine navigating with a screen reader, not your eyes. If the agenda were beneath the YouTube Player, it would create a cacophonous experience – imagine trying to listen to the programming and at the same time “read” (as in “listen to”) the agenda below. A separate page for the agenda was the right idea.

The Agenda page was our biggest challenge because it contained a lot of information, required filters and also, during the show, had different “states” – as in which agenda items were “playing now” versus upcoming versus already concluded. Dmitry learned a lot about the best approach to drop downs for filters and other details to make the agenda page navigable, and we reviewed it several times with Fable’s experts. We decided nonetheless to take the fairly unprecedented step of inviting our registered, blind event attendees to join us for a “practice event” a few days before the show in order to get more feedback. Nearly 200 people showed up for two sessions. We also invited blind screen reader experts, including Fable’s Sam Proulx and Facebook’s Matt King, to join us to answer questions and sort out the feedback.

It’s worth noting that there are three major screen readers: JAWS, which is used mostly by Windows’ users; VoiceOver, which is on all Apple products; and NVDA, which is open source and works on PCs running Microsoft Windows 7 SP1 and later. They don’t all work in the same way, and the people who use them range from experts who know hundreds of keyboard commands to occasional users who have more basic skills. For that reason, it’s really important to have expert interlocutors who can help separate good suggestions from simple frustrations.

The format for our open house (session one and session two) was a Zoom meeting, where we provided a briefing about the event and how the experience worked. Then we provided links to a working Event page (with a YouTube player active) and the Agenda page and asked people to give it a try and return to the Zoom session with feedback. Like so much else in this effort, the result was humbling. We had the basics down well, but we had missed some nuances, such as the best way to order information in an agenda item for someone who can only “hear” it versus “see” it. Fortunately, we had time to tune the agenda page a bit more before the show.

The practice session also reinforced that we had made a good move to offer live customer support during the show as a buffer for attendees who were less sophisticated in the use of screen readers. We partnered with Be My Eyes, a mobile app that connects blind users to sighted helpers who use the blind person’s phone camera to help troubleshoot issues. It’s like having a friend look over your shoulder. We recruited 10 volunteers and trained them to be ready to answer questions about the event, and Be My Eyes put them at the top of the list for any calls related to Sight Tech Global, which was listed under the Be My Eyes “event” section. Our event host, the incomparable Will Butler, who happens to be a vice-president at Be My Eyes, regularly reminded attendees to use Be My Eyes if they needed help with the virtual experience.

A month out from the event, we were feeling confident enough that we decided to add a social interaction feature to the show. Word on the street was that Slido’s basic Q&A features worked well with screen readers, and in fact Fable used the service for its projects. So we added Slido to the program. We did not embed a Slido widget beneath the YouTube player, which might have been a good solution for sighted participants, but instead added a link to each agenda session to a standalone Slido page, where attendees could add comments and ask questions without getting tangled in the agenda or the livestream.  The solution ended up working well, and we had more than 750 comments and questions on Slido during the show.

When Dec. 2 finally arrived, we were ready. But the best-laid plans often go awry: we were only minutes into the event when our live closed captioning suddenly broke. We decided to halt the show until we could bring it back up live, for the benefit of deaf and hard-of-hearing attendees. After much scrambling, captioning came back. (See more on captioning below).

Otherwise, the production worked well from both a programming and an accessibility standpoint. How did we do? Of the 2,400+ registered attendees at the event, 45% said they planned to use screen readers. When we surveyed those attendees immediately after the show, 95 replied and gave the experience a 4.6/5 score. As for the programming, our attendees (this time we asked everyone; 157 replied) gave us a score of 4.7/5. Needless to say, we were delighted by those outcomes.

One other note concerned registration. At the outset, we also “heard” that one of the event registration platforms was “as good as it gets” for accessibility. We took that at face value, which was a mistake. We should have tested, because comments from people trying to register, as well as a low turnout of registrations from blind people, revealed after a few weeks that the registration site may have been better than the rest but was still really disappointing. It was painful, for example, to learn from one of our speakers that alt tags were missing from images (and there was no way to add them) and that screen reader users had to tab through mountains of information in order to get to actionable links, such as “register.”

As we did with our approach to the website, we decided that the best course was to simplify. We added a Google Form as an alternative registration option. These are highly accessible. We instantly saw our registrations increase strongly, particularly among blind people. We were chagrined to realize that our first choice for registration had been excluding the very people our event intended to include.

We were able to use the Google Forms option because the event was free. Had we been trying to collect payment of registration fees, Google Form would not have been an option. Why did we make the event free to all attendees? There are several reasons. First, given our ambitions to make the event global and easily available to anyone interested in blindness, it was difficult to arrive at a universally acceptable price point. Second, adding payment as well as a “log-in” feature to access the event itself would create another accessibility headache. With our approach, anyone with the link to the Agenda or Event page could attend without any log-in demand or registration. We knew this would create some leakage in terms of knowing who attended the event – quite a lot in fact because we had 30% more attendees than registrants – but given the nature of the event we thought that losing out on names and emails was an acceptable price to pay considering the accessibility benefit.

If there is an overarching lesson from this exercise, it’s simply this: Event organizers have to roll up their sleeves and really get to the bottom of whether the experience is accessible or not. It’s not enough to trust platform or technology vendors, unless they have standout reputations in the community, as YouTube and Zoom do. It’s as important to ensure that the site or platform is coded appropriately (to WCAG standards, and using a tool like Google’s Lighthouse) as it is to do real-world testing to ensure that the actual, observable experience of blind and low-vision users is a good one. At the end of the day, that’s what counts the most.

A final footnote. Although our event focused on accessibility issues for people who are blind or have low vision, we were committed from the start to include captions for people who would benefit. We opted for the best quality outcome, which is still human (versus AI) captioners, and we worked with VITAC to provide captions for the live Zoom and YouTube sessions and 3Play Media for the on-demand versions and the transcripts, which are now part of the permanent record. We also heard requests for “plain text” (no mark-up) versions of the transcripts in an easily downloadable version for people who use Braille-readers. We supplied those, as well. You can see how all those resources came together on pages  like this one, which contain all the information on a given session and are linked from the relevant section of the agenda.

 

#accessibility, #sight-tech-global, #tc, #techcrunch-include

Mixtape podcast: Artificial intelligence and disability

Welcome back to Mixtape, the TechCrunch podcast that looks at the human element that powers technology.

For this episode we spoke with Meredith Whittaker, co-founder of the AI Now Institute and Minderoo Research Professor at NYU; Mara Mills, associate professor of Media, Culture and Communication at NYU and co-director of the NYU Center for Disability Studies; and Sara Hendren, professor at Olin College of Engineering and author of the recently published What Can a Body Do: How We Meet the Built World.
It was a wide-ranging discussion about artificial intelligence and disability. Hendren kicked us off by exploring the distinction between the medical and social models of disability:

So in a medical model of disability, as articulated in disability studies, the idea is just that disability is a kind of condition or an impairment or something that’s going on with your body that takes it out of the normative average state of the body; say, something in your sensory makeup or mobility or whatever is impaired, and therefore the disability kind of lives on the body itself. But in a social model of disability, it’s just an invitation to widen the aperture a little bit and include not just the body itself and what it does or doesn’t do biologically, but also the interaction between that body and the normative shapes of the world.

When it comes to technology, Mills says, some companies work squarely in the realm of the medical model, where the goal is a total cure rather than accommodation, while other companies or technologies – and even inventors – work more in the social model, with the goal of transforming the world and creating an accommodation. Even so, she says, they still tend to have “fundamentally normative or mainstream ideas of function and participation rather than disability forward ideas.”

“The question with AI, and also just with old mechanical things like Braillers, I would say, would be: are we aiming to perceive the world in different ways, in blind ways, in minoritarian ways? Or is the goal of the technology, even if it’s about making a social, infrastructural change, still about something standard or normative or seemingly typical? And that’s — there are very few technologies, probably for financial reasons, that are really going for the disability forward design.”

As Whittaker notes, AI by its nature is fundamentally normative.

“It draws conclusions from large sets of data, and that’s the world it sees, right? And it looks at what’s most average in this data and what’s an outlier. So it’s something that is consistently replicating these norms, right? If it’s trained on the data, and then it gets an impression from the world that doesn’t match the data it’s already seen, that impression is going to be an outlier. It won’t recognize it; it won’t know how to treat it, right? And there are a lot of complexities here. But I think that’s something we have to keep in mind as sort of a nucleus of this technology, when we talk about its potential applications in and out of these sorts of capitalist incentives: what is it capable of doing? What does it do? What does it act like? And can we think about it, you know, ever possibly encompassing the multifarious, you know, huge amounts of ways that disability manifests or doesn’t manifest?”

We talked about this and much, much more on the latest episode of Mixtape, so click play above and dig right in. And then subscribe wherever you listen to podcasts.

 

 

 

#accessibility, #mara-mills, #meredith-whittaker, #mixtape-podcast, #sara-hendren, #tc, #techcrunch-include


Ava expands its AI captioning to desktop and web apps, and raises $4.5M to scale

The worldwide shift to virtual workplaces has been a blessing and a curse to people with hearing impairments. Having office chatter occur in text rather than speech is more accessible, but virtual meetings are no easier to follow than in-person ones — which is why real-time captioning startup Ava has seen a huge increase in users. Riding the wave, the company just announced two new products and a $4.5 million seed round.

Ava previously made its name in the deaf community as a useful live transcription tool for real-life conversations. Start the app up and it would instantly hear and transcribe speech around you, color-coded to each speaker (and named if they activate a QR code). Extremely useful, of course, but when meetings stopped being in rooms and started being in Zooms, things got a bit more difficult.

“Use cases have shifted dramatically, and people are discovering the fact that most of these tools are not accessible,” co-founder and CEO Thibault Duchemin told TechCrunch.

And while some tools have limited captioning built in (Skype and Google Meet, for example), those captions may not be saved, editable, accurate or convenient to review. Meet’s ephemeral captions, for instance, while useful, last only a moment before disappearing and are not attributed to specific speakers, which makes them of limited use to a deaf or hard-of-hearing person trying to follow a multi-person call. They are also available in only a limited set of languages.

As Duchemin explained, it began to seem much more practical to have a separate transcription layer that is not specific to any one service.

Illustration of a laptop and phone transcribing audio.

Image Credits: Ava

Thus Ava’s new product: a desktop and web app called Closed Captioning, which works with all major meeting services and online content, captioning them with the same on-screen display and making the content accessible via the same account. That includes things like YouTube videos without subtitles, live web broadcasts and even audio-only content like podcasts, in more than 15 languages.

Individual speakers are labeled, automatically if an app supports it, like Zoom, or by having people in the meeting click a link that attaches their identity to the sound of their voice. (There are questions of privacy and confidentiality here, but they will differ case by case and are secondary to the fundamental capability of a person to participate.)

The transcripts all go to the person’s Ava app, letting them check through at their leisure or share with the rest of the meeting. That in itself is a hard service to find, Duchemin pointed out.

“It’s actually really complicated,” he said. “Today if you have a meeting with four people, Ava is the only technology where you can have accurate labeling of who said what, and that’s extremely valuable when you think about enterprise.” Otherwise, he said, unless someone is taking detailed notes — unlikely, expensive, and time-consuming — meetings tend to end up black boxes.

For such high-quality transcription, speech-to-text AI isn’t good enough, he admitted. It’s enough to follow a conversation, but “we’re talking about professionals and students who are deaf or hard of hearing,” Duchemin said. “They need solutions for meetings and classes and in-person, and they aren’t ready to go full AI. They need someone to clean up the transcript, so we provide that service.”

Features of the Ava app.

Image Credits: Ava

Ava Scribe quickly brings in a human trained not in direct transcription but in correcting the output of speech-to-text algorithms. That way a deaf person attending a meeting or class can follow along live, but also be confident that when they check the transcript an hour later it will be exact, not approximate.
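
Ava hasn’t published its internals, but the “instant machine captions now, exact human-corrected transcript later” pattern Duchemin describes can be pictured with a simple data model. Everything below is hypothetical and purely illustrative; none of the names come from Ava:

```ts
// Hypothetical sketch of a live-then-corrected captioning data model.
interface CaptionSegment {
  speakerId: string;   // who was talking, via an app integration or an identity link
  startMs: number;     // position of the segment in the meeting
  machineText: string; // instant speech-to-text output, shown live
  humanText?: string;  // scribe's correction, filled in after the fact
}

// The live view always shows something immediately; the archived transcript
// prefers the human correction once it exists.
function displayText(segment: CaptionSegment, archived: boolean): string {
  return archived && segment.humanText ? segment.humanText : segment.machineText;
}

const segment: CaptionSegment = {
  speakerId: 'speaker-1',
  startMs: 12000,
  machineText: 'lets sink up on the road map',
};

console.log(displayText(segment, false)); // live: raw machine output
segment.humanText = "Let's sync up on the roadmap.";
console.log(displayText(segment, true));  // later: exact corrected transcript
```

The design point is that the machine output never has to be perfect; it only has to be instant, because a trained human closes the accuracy gap afterward.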

Right now transcription tools are being used as value-adds to existing products and suites, he said — ways to attract or retain customers. They aren’t beginning with the community of deaf and hard of hearing professionals and designing around their needs, which is what Ava has striven to do.

The platform’s explosion in popularity and obvious utility has also brought in a $4.5 million seed round, led by Initialized Capital and Khosla Ventures.

Duchemin said the company expects to use the money to double the size of its team and to start marketing in earnest and landing big customers. “We’re very specialized, so we need a strong business model to grow,” he said. A strong, unique product is a good place to start, though.

#accessibility, #artificial-intelligence, #ava, #captioning, #captions, #funding, #fundings-exits, #machine-learning, #recent-funding, #startups, #tc, #transcription


Cyberpunk 2077 draws criticism for seizure-inducing sequence with no warning or mitigation

One of the biggest games of the year, “Cyberpunk 2077,” is about to be released, but developer CD Projekt Red is already under fire for an early-game sequence with the potential to induce seizures. Players with epilepsy should be warned that there is currently no way to skip the sequence, and the visual effect at its center recurs throughout the game.

Strobing lights can induce seizures in some people prone to them, but that hasn’t stopped many high-profile games from including them for effect. Usually there is a boilerplate warning on boot saying this is a possibility, but in most games it covers incidental flashing, for example several flashbang grenades going off one after the other. Many games also offer an option to reduce the intensity of flashing lights or otherwise change their appearance, along with other accessibility options.

“Cyberpunk” treads especially dangerous territory by its very nature, as its game world is full of the kind of seedy, flickering-neon lighting one associates with a grimy, futuristic dystopia. But within the first few hours of the game there is a much more severe and thoughtlessly designed sequence that has already caused a reviewer at Game Informer to experience a seizure. It involves the (otherwise quite interesting) “braindances,” or BDs, which let your character relive experiences recorded by others by donning a special headset… that boots up with intense flashing lights:

When “suiting up” for a BD, especially with Judy, V will be given a headset that is meant to onset the instance. The headset fits over both eyes and features a rapid onslaught of white and red blinking LEDs, much like the actual device neurologists use in real life to trigger a seizure when they need to trigger one for diagnosis purposes. If not modeled off of the IRL design, it’s a very spot-on coincidence, and because of that this is one aspect that I would personally advise you to avoid altogether. When you notice the headset come into play, look away completely or close your eyes. This is a pattern of lights designed to trigger an epileptic episode and it very much did that in my own personal playthrough.

You can see the event referred to in the screenshot above (taken afterwards, but you can see the device). I recall this moment quite clearly from my own playthrough and remember thinking it was rather an intense light show indeed. Unfortunately for this reviewer, it caused a serious episode, and it could do the same for many other players when the game is released on the 10th.

Among the many options for changing the appearance of “Cyberpunk 2077,” there isn’t one for reducing flashing lights that I could find. I’ve asked CD Projekt Red about this and hopefully they can ship something to mitigate the issue at or near launch. The company did say on Twitter that it was looking into a solution.

#accessibility, #cd-projekt-red, #cyberpunk-2077, #gaming, #seizures


iPhones can now automatically recognize and label buttons and UI features for blind users

Apple has always gone out of its way to build features for users with disabilities, and VoiceOver on iOS is an invaluable tool for anyone with a vision impairment — assuming every element of the interface has been manually labeled. But the company just unveiled a brand-new feature that uses machine learning to identify and label every button, slider and tab automatically.

Screen Recognition, available now in iOS 14, is a computer vision system that has been trained on thousands of images of apps in use, learning what a button looks like, what icons mean, and so on. Such systems are very flexible — depending on the data you give them, they can become expert at spotting cats, facial expressions or, as in this case, the different parts of a user interface.

The result is that in any app now, users can invoke the feature and a fraction of a second later every item on screen will be labeled. And by “every,” they mean every — after all, screen readers need to be aware of everything that a sighted user would see and be able to interact with, from images (which iOS has been able to create one-sentence summaries of for some time) to common icons (home, back) and context-specific ones like the “…” menus that appear just about everywhere.

The idea is not to make manual labeling obsolete — developers know best how to label their own apps, but updates, changing standards, and challenging situations (in-game interfaces, for instance) can lead to things not being as accessible as they could be.
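
On iOS, manual labeling means setting accessibility properties on a control; the same principle is easiest to see on the web, where a screen reader can only announce the name a developer supplies. A minimal sketch, with an invented button and label purely for illustration:

```ts
// Minimal sketch of manual labeling: an icon-only button has no text for a
// screen reader to announce unless the developer names it explicitly.
const shareButton = document.createElement('button');

const icon = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
icon.setAttribute('aria-hidden', 'true'); // decorative icon, hidden from assistive tech
shareButton.appendChild(icon);

// Without this, a screen reader would announce the control as just "button".
shareButton.setAttribute('aria-label', 'Share this article');

document.body.appendChild(shareButton);
```

Screen Recognition is, in effect, a fallback that infers that kind of name from pixels when the developer never supplied one.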

I chatted with Chris Fleizach from Apple’s iOS accessibility engineering team, and Jeff Bigham from the AI/ML accessibility team, about the origin of this extremely helpful new feature. (It’s described in a paper due to be presented next year.)

“We looked for areas where we can make inroads on accessibility, like image descriptions,” said Fleizach. “In iOS 13 we labeled icons automatically – Screen Recognition takes it another step forward. We can look at the pixels on screen and identify the hierarchy of objects you can interact with, and all of this happens on device within tenths of a second.”

The idea is not a new one, exactly; Bigham mentioned a screen reader, Outspoken, which years ago attempted to use pixel-level data to identify UI elements. But while that system needed precise matches, the fuzzy logic of machine learning systems and the speed of iPhones’ built-in AI accelerators mean that Screen Recognition is much more flexible and powerful.

It wouldn’t have been possible just a couple of years ago — the state of machine learning and the lack of a dedicated unit for executing it meant that something like this would have been extremely taxing on the system, taking much longer and probably draining the battery all the while.

But once this kind of system seemed possible, the team got to work prototyping it with the help of their dedicated accessibility staff and testing community.

“VoiceOver has been the standard bearer for vision accessibility for so long. If you look at the steps in development for Screen Recognition, it was grounded in collaboration across teams — Accessibility throughout, our partners in data collection and annotation, AI/ML, and, of course, design. We did this to make sure that our machine learning development continued to push toward an excellent user experience,” said Bigham.

It was done by taking thousands of screenshots of popular apps and games, then manually labeling them as one of several standard UI elements. This labeled data was fed to the machine learning system, which soon became proficient at picking out those same elements on its own.

It’s not as simple as it sounds — as humans, we’ve gotten quite good at understanding the intention of a particular graphic or bit of text, and so often we can navigate even abstract or creatively designed interfaces. It’s not nearly as clear to a machine learning model, and the team had to work with it to create a complex set of rules and hierarchies that ensure the resulting screen reader interpretation makes sense.

The new capability should help make millions of apps more accessible, or just accessible at all, to users with vision impairments. You can turn it on by going to Accessibility settings, then VoiceOver, then VoiceOver Recognition, where you can turn on and off image, screen, and text recognition.

It would not be trivial to bring Screen Recognition over to other platforms, like the Mac, so don’t get your hopes up for that just yet. The principle is sound, but the model itself is not generalizable to desktop apps, which are very different from mobile ones. Perhaps others will take on that task; the prospect of AI-driven accessibility features is only just beginning to be realized.

#accessibility, #apple, #apps, #artificial-intelligence, #mobile, #screen-readers, #tc, #voiceover
