The proposal to spend $400 billion over eight years faces political challenges and a funding system not designed for the burden it has come to bear.
At first, being in water made me feel defeated. Now it’s transformed me.
President Biden understands that caregiving is infrastructure, and that all families need it.
“It was like rolling the dice, except for someone you’ve never met.”
Before 2020, theater often felt inaccessible to me, a luxury for those who were more able-bodied or lived in certain cities. Now I’m obsessed.
After losing part of a leg to cancer, she dominated the sport for nearly two decades, winning three Paralympic gold medals.
I will still have to face it with dignity.
She was passionate — and relentless — about making the city she loved navigable for everyone.
Shunned in school because of her disability, she devoted her life to the cause, organizing a historic sit-in that led to landmark federal legislation.
The pandemic has led to new contemplations of fragility, and sick or disabled artists are using new attention to imagine a more accessible art world.
An artificial retina would be an enormous boon to the many people with visual impairments, and the possibility is creeping closer to reality year by year. One of the latest advancements takes a different and very promising approach, using tiny dots that convert light to electricity, and virtual reality has helped show that it could be a viable path forward.
These photovoltaic retinal prostheses come from the École polytechnique fédérale de Lausanne, where Diego Ghezzi has been working on the idea for several years now.
Early retinal prosthetics were created decades ago, and the basic idea is as follows: a camera outside the body (on a pair of glasses, for instance) sends a signal over a wire to a microelectrode array, made up of many tiny electrodes that pierce the non-functioning retinal surface and stimulate the working cells directly.
This approach has two main problems. Powering and sending data to the array requires a wire running from outside the eye in — generally speaking a "don't" when it comes to prosthetics, and the body in general. And the array itself is limited in the number of electrodes it can hold by the size of each, meaning that for many years the effective resolution in the best-case scenario was on the order of a few dozen or a few hundred "pixels." (The concept doesn't translate directly because of the way the visual system works.)
Ghezzi's approach obviates both these problems with the use of photovoltaic materials, which turn light into an electric current. It's not so different from what happens in a digital camera, except instead of recording the charge as an image, it sends the current into the retina the way the wired electrodes did. There's no need for a wire to relay power or data to the implant, because both are provided by the light shining on it.
In the case of the EPFL prosthesis, there are thousands of tiny photovoltaic dots, which would in theory be illuminated by a device outside the eye sending light in according to what it detects from a camera. The other part of the setup would be a pair of glasses or goggles that both capture an image and project it through the eye onto the implant. It's still an incredibly difficult thing to engineer, of course.
We first heard of this approach back in 2018, and things have changed somewhat since then, as a new paper documents.
“We increased the number of pixels from about 2,300 to 10,500,” explained Ghezzi in an email to TechCrunch. “So now it is difficult to see them individually and they look like a continuous film.”
Of course, when those dots are pressed right up against the retina, it's a different story. After all, that's only about 100×100 pixels if it were a square — not exactly high definition. But the idea isn't to replicate human vision, which may be an impossible task to begin with, let alone a realistic one for anyone's first attempt.
"Technically it is possible to make pixels smaller and denser," Ghezzi explained. "The problem is that the current generated decreases with the pixel area."
So the more you add, the tougher it is to make each one work, and there's also the risk (which they tested) that two adjacent dots will stimulate the same network in the retina. But with too few, the image created may not be intelligible to the user. 10,500 sounds like a lot, and it may be enough — but the simple fact is that there's no data to support that. To start gathering that data, the team turned to what may seem like an unlikely medium: VR.
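The tradeoff Ghezzi describes follows from simple geometry: the implant's area is fixed, so the light-collecting area per pixel, and hence the photocurrent each one can generate, falls roughly in proportion to the pixel count. A minimal sketch of that scaling, with entirely hypothetical numbers (the implant area, fill factor, and current density here are illustrative assumptions, not EPFL's actual specifications):

```python
def per_pixel_current(n_pixels, implant_area_mm2=13.0, fill_factor=0.85,
                      current_density_ua_per_mm2=50.0):
    """Rough model of the pixel-count vs. photocurrent tradeoff.

    With a fixed implant area, each pixel's share of the light-collecting
    surface shrinks as pixels are added, so its generated current (in uA
    here) shrinks in proportion. All parameter values are hypothetical.
    """
    active_area = implant_area_mm2 * fill_factor   # usable photovoltaic area
    return current_density_ua_per_mm2 * active_area / n_pixels

# Going from ~2,300 to ~10,500 pixels cuts per-pixel current ~4.6x.
lo_res = per_pixel_current(2_300)
hi_res = per_pixel_current(10_500)
```

Under this (assumed) linear model, the jump from 2,300 to 10,500 dots costs each dot about 78% of its current, which is why denser isn't automatically better.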
Because the team can’t exactly do a “test” installation of an experimental retinal implant on people to see if it works, they needed another way to tell whether the dimensions and resolution of the device would be sufficient for certain everyday tasks like recognizing objects and letters.
To do this, they put people in VR environments that were dark except for little simulated "phosphenes," the pinpricks of light they expect to create by stimulating the retina via the implant; Ghezzi likened what people would see to a constellation of bright, shifting stars. They varied the number of phosphenes, the area over which they appear, and the length of their illumination or "tail" when the image shifted, asking participants how well they could perceive things like a word or scene.
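The core of such a simulation is turning an ordinary image into a sparse constellation of dots. A minimal sketch of that step, assuming a grayscale image as a NumPy array; the function name, grid layout, and threshold are illustrative assumptions, not the EPFL team's actual code:

```python
import numpy as np

def render_phosphenes(image, n_dots=10_500, field_deg=45.0, threshold=0.5):
    """Down-sample a grayscale image (values in [0, 1]) into simulated
    phosphenes on a square grid, a sketch of the kind of rendering the
    VR experiments would need. Returns (x_deg, y_deg, brightness) for
    each lit dot, in degrees of visual angle centered on fixation."""
    side = int(round(np.sqrt(n_dots)))        # ~102x102 grid for 10,500 dots
    h, w = image.shape
    ys = np.linspace(0, h - 1, side).astype(int)
    xs = np.linspace(0, w - 1, side).astype(int)
    sampled = image[np.ix_(ys, xs)]           # brightness at each dot site
    on = sampled > threshold                  # a dot lights up if bright enough
    coords = np.linspace(-field_deg / 2, field_deg / 2, side)
    gx, gy = np.meshgrid(coords, coords)      # grid positions in visual angle
    return gx[on], gy[on], sampled[on]
```

Varying `n_dots`, `field_deg`, and how long each dot persists across frames would correspond to the three parameters the study manipulated.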
Their primary finding was that the most important factor was visual angle — the overall size of the area where the image appears. Even a clear image is difficult to understand if it only takes up the very center of your vision, so even if overall clarity suffers, it's better to have a wide field of view. The brain's visual system is robust enough to intuit things like edges and motion even from sparse inputs.
This demonstration showed that the implant's parameters are theoretically sound, and the team can start working toward human trials. That's not something that can happen in a hurry, and while this approach is very promising compared with earlier, wired ones, it will still be several years, even in the best-case scenario, before it could be made widely available. Still, the very prospect of a working retinal implant of this type is an exciting one, and we'll be following it closely.
He finished more than a thousand road races with his son Rick, who was in a wheelchair. They were best known for competing in the Boston Marathon.
Lingering symptoms from the coronavirus may turn out to be one of the largest mass disabling events in modern history.
People from across the continent told us about the ups and downs — mostly downs — of loving and streaming theater during a pandemic.
At 14, Sidiki Conde was paralyzed from the disease in Guinea. Now he’s an artist living in Manhattan.
People with disabilities are disproportionately employed in industries that have suffered in the pandemic.
Years after the Grenfell Tower fire in London killed 72 people, disabled residents of high-rises worry whether they would be able to evacuate if needed. “We’ve been an afterthought,” one activist says.
In its first hiring drive in over a decade, the continent’s space agency is looking to recruit disabled people and more women.
A Supreme Court case focused on a comedy routine mocking a disabled teenager could help shape the limits of free speech — and humor — in Canada.
The automated intelligence systems of Instagram and Facebook have repeatedly denied ads placed by small businesses that make stylish clothing for people with disabilities.
The state-sanctioned callous treatment of some of us as disposable has put everyone at risk.
Kiana Clay, who lost use of her dominant arm in an accident, is a fast-rising star who is pushing for inclusion in the 2022 Paralympics.
Lai Chi-wai fell short of his goal of ascending a skyscraper by rope. It hardly made his feat any less impressive.
The airline says it will permit service dogs only, following a move by the U.S. Department of Transportation to reclassify the types of service animals allowed on flights.
Gabriela Lena Frank, a composer born with high-moderate/near-profound hearing loss, describes her creative experience.
Like the fictional Beth Harmon in “The Queen’s Gambit,” she’s trying to find a way to get to Russia to compete. Unlike Beth, she’s blind.
She refused to be limited by her cerebral palsy. Her story was the subject of two widely read books and became an inspiration to many.
As a portrait artist, all I can do now is reconstruct the mysteries of who you are under the mask.
Charisma Jamison and Cole Sydnor, who have grown accustomed to sharing the joys and trials of their relationship with thousands of viewers on their show, “Roll With Cole & Charisma,” marry in Virginia.
A sexual health educator and counselor in Los Angeles, she challenged a dominant culture that viewed people with disabilities as asexual beings.
Emotional support animals are considered pets instead of service animals under the new rules, which go into effect next month.
Heidi Latsky’s “On Display” and Kinetic Light’s “Descent” show the broadness and diversity of the field of disability dance.
Audible crossing signals help visually impaired pedestrians. A court ordered New York City to come up with a plan to install more of the devices.
“This is one of the best representations of a disabled character I’ve ever seen,” Allen said.
“The Witches,” a film in which Anne Hathaway’s character has disfigured hands, has resurfaced the debate over depicting evil as disabled.
He learned to dance expressively long after his legs were amputated. A premier disabled performer on stages around the world, he opened the 2012 London Paralympics.
After an awkward fall 11 seconds into his first Boston University game left him a quadriplegic, he dedicated his life to advocacy for similarly disabled people.
The Supreme Court’s ruling to restrict access to voting last week is a reminder of the importance of disability rights laws for protecting the civil rights of all Americans.
The new museum in Colorado Springs is based on the idea that a wheelchair basketball player trains just as hard as any other basketball player.
Yes, you can help a cognitively impaired person participate in the election. But heed these two guidelines.
AI-based tools like computer vision and voice interfaces have the potential to be life-changing for people with disabilities, but the truth is those AI models are usually built with very little data sourced from those people. Microsoft is working with several nonprofit partners to help make these tools reflect the needs and everyday realities of people living with conditions like blindness and limited mobility.
Consider, for example, a computer vision system that recognizes objects and can describe what is on a table. Chances are that algorithm was trained with data collected by able-bodied people, from their point of view — likely standing.
A person in a wheelchair looking to do the same thing might find the system isn't nearly as effective from that lower angle. Similarly, a blind person will not know to hold the camera in the right position long enough for the algorithm to do its work, so they must find it by trial and error.
Or consider a face recognition algorithm that’s meant to tell when you’re paying attention to the screen for some metric or another. What’s the likelihood that among the faces used to train that system, any significant amount have things like a ventilator, or a puff-and-blow controller, or a headstrap obscuring part of it? These “confounders” can significantly affect accuracy if the system has never seen anything like them.
Facial recognition software that fails on people with dark skin, or has lower accuracy on women, is a common example of this sort of “garbage in, garbage out.” Less commonly discussed but no less important is the visual representation of people with disabilities, or of their point of view.
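This failure mode is easy to surface by slicing evaluation metrics per subgroup instead of reporting one aggregate number. A minimal sketch (the function and group labels are illustrative, not any particular vendor's audit tooling):

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-subgroup accuracy: aggregate accuracy can look fine while
    hiding poor performance on under-represented groups, e.g. faces
    partly obscured by a ventilator or headstrap."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}
```

A large gap between groups is exactly the signature of the "data desert" described below: the model has simply never seen enough examples from one of them.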
Microsoft today announced a handful of efforts co-led by advocacy organizations that hope to do something about this “data desert” limiting the inclusivity of AI.
The first is a collaboration with Team Gleason, an organization formed to improve awareness around the neuromotor degenerative disease amyotrophic lateral sclerosis, or ALS (it’s named after former NFL star Steve Gleason, who was diagnosed with the disease some years back).
Their concern is the one above regarding facial recognition. People living with ALS have a huge variety of symptoms and assistive technologies, and those can interfere with algorithms that have never seen them before. That becomes an issue if, for example, a company wanted to ship gaze tracking software that relied on face recognition, as Microsoft would surely like to do.
“Computer vision and machine learning don’t represent the use cases and looks of people with ALS and other conditions,” said Team Gleason’s Blair Casey. “Everybody’s situation is different and the way they use technology is different. People find the most creative ways to be efficient and comfortable.”
Project Insight is the name of a new joint effort with Microsoft that will collect face imagery of volunteer users with ALS as they go about their business. In time that face data will be integrated with Microsoft’s existing cognitive services, but also released freely so others can improve their own algorithms with it.
They aim to have a release in late 2021. If the timeframe seems a little long, Microsoft’s Mary Bellard, from the company’s AI for Accessibility effort, pointed out that they’re basically starting from scratch and getting it right is important.
“Research leads to insights, insights lead to models that engineers bring into products. But we have to have data to make it accurate enough to be in a product in the first place,” she said. “The data will be shared — for sure this is not about making any one product better, it’s about accelerating research around these complex opportunities. And that’s work we don’t want to do alone.”
Another opportunity for improvement is in sourcing images from users who don't use an app the same way as most. As with the person with impaired vision or in a wheelchair mentioned above, there's a shortage of data captured from their perspective. Two efforts aim to address this.
One, with City University of London, is the expansion and eventual public release of the Object Recognition for Blind Image Training project, which is assembling a dataset for identifying everyday objects — a can of pop, a keyring — using a smartphone camera. Unlike other datasets, though, this one will be sourced entirely from blind users, meaning the algorithm will learn from the start to work with the kind of data it will be given later anyway.
The other is an expansion of VizWiz to better encompass this kind of data. The tool is used by people who need help right away in telling, say, whether a cup of yogurt is expired or if there’s a car in the driveway. Microsoft worked with the app’s creator, Danna Gurari, to improve the app’s existing database of tens of thousands of images with associated questions and captions. They’re also working to alert a user when their image is too dark or blurry to analyze or submit.
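The article doesn't say how that too-dark-or-blurry check works, but one common heuristic combines mean brightness with the variance of a Laplacian edge response: few sharp edges usually means a blurry frame. A sketch under those assumptions (thresholds are hypothetical, and this is not VizWiz's actual implementation):

```python
import numpy as np

def image_quality_flags(gray, dark_thresh=0.15, blur_thresh=0.002):
    """Flag images likely too dark or too blurry to analyze.

    `gray` is a 2-D float array in [0, 1]. Thresholds are illustrative
    guesses; a real app would tune them on labeled examples.
    """
    too_dark = gray.mean() < dark_thresh
    # Discrete Laplacian at each interior pixel: low variance of this
    # response means few sharp edges, a standard (if crude) blur cue.
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    too_blurry = lap.var() < blur_thresh
    return too_dark, too_blurry
```

Warning the user before they submit saves a round trip: a blind photographer can reshoot immediately instead of waiting for a failed answer.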
Inclusivity is complex because it’s about people and systems that, perhaps without even realizing it, define “normal” and then don’t work outside of those norms. If AI is going to be inclusive, “normal” needs to be redefined and that’s going to take a lot of hard work. Until recently, people weren’t even talking about it. But that’s changing.
“This is stuff the ALS community wanted years ago,” said Casey. “This is technology that exists — it’s sitting on a shelf. Let’s put it to use. When we talk about it, people will do more, and that’s something the community needs as a whole.”
An estimated 38 million eligible voters have disabilities. It has always been hard for them to vote, and this year has brought even more obstacles.
A biking accident left Kirk Williams paralyzed, but he has traveled widely and inspired others to follow in his tire tracks.
For special-needs students, trying to return to the classroom, or just staying at home, presents a new set of challenges.
This Netflix documentary about the Paralympic Games, which spotlights a series of its most inspiring athletes, could double as an ad for the event.
These performers are creating a new template for the artist-as-activist, challenging their industry — and their audiences — to reconsider what inclusion really means.
The players say doctors use two scales — one for Black athletes, one for white — to determine eligibility for dementia claims.
The pandemic has made work and social life more accessible for many. People with disabilities are wondering whether virtual accommodations will last.
Our columnists and contributors give their rankings.
Google has updated its Lookout app, an AI toolkit for people with impaired vision, with two helpful new capabilities: scanning long documents and reading out food labels. Paper forms and similarly shaped products at the store present a challenge for blind folks, and this ought to make things easier.
Food labels, if you think about it, are actually a pretty difficult problem for a computer vision system to solve. They’re designed to be attention-grabbing and distinctive, but not necessarily highly readable or informative. If a sighted person can accidentally buy the wrong kind of peanut butter, what chance does someone who can’t read the label themselves have?
The new food label mode, then, is less about reading text and more about recognizing exactly what product it's looking at. If the user needs to turn the can or bottle to give the camera a good look, the app will tell them so. It compares what it sees to a database of product images, and when it gets a match it reads off the relevant details: brand, product, flavor, and so on. If there's a problem, the app can always scan the barcode as well.
Document scanning isn't exactly exciting, but it's good to have the option built into a general-purpose artificial vision app in a straightforward way. It works as you'd expect: point your phone at the document (the app will help you get the whole thing in view) and it scans it for your screen reader to read out.
The “quick read” mode that the app debuted with last year, which watches for text in the camera view and reads it out loud, has gotten some speed improvements.
The update brings a few other conveniences to the app, which should run on any Android phone with at least 2GB of RAM running Android 6.0 or higher. It's also now available in Spanish, German, French, and Italian.