Athletes reject the idea that they should be admired just for coping with disabilities, and not also for what they’ve accomplished.
Asya Miller and Lisa Czechowski, who have won three Paralympic medals as teammates, first competed together at the Sydney Games in 2000.
Kevin Brousard, a Paralympic athlete, said that until he met Megann Karch, “I didn’t consider myself lovable.”
Our nerves’ electrical impulses are created by a class of proteins called ion channels, which let ions flow into and out of cells. But controlling the flow of ions has uses that go well beyond creating nerve impulses, and there are many other channels made by cells—and even some made by bacteria and other organisms that don’t have nerves.
Scientists have discovered channels that only allow ions to flow after being triggered by light of specific wavelengths. When placed into nerve cells, these channels turned out to be useful research tools, as they allowed researchers to activate nerves using nothing but light. This discovery created an entire field of research—optogenetics—which has demonstrated that even complicated behaviors like socializing can be controlled with light.
But light-activated nerve activity is also part of normal biology, in the form of our eyes. The development of channels as a research tool has raised the prospect of using them to treat failing vision. In an important proof of concept, researchers have now used a light-sensitive channel and some specialized goggles to allow someone who is otherwise blind to locate objects.
Using a technique called optogenetics, researchers added light-sensitive proteins to the man’s retina, giving him a blurry view of objects.
An artificial retina would be an enormous boon to the many people with visual impairments, and the possibility is creeping closer to reality year by year. One of the latest advancements takes a different and very promising approach, using tiny dots that convert light to electricity, and virtual reality has helped show that it could be a viable path forward.
These photovoltaic retinal prostheses come from the École polytechnique fédérale de Lausanne, where Diego Ghezzi has been working on the idea for several years now.
Early retinal prosthetics were created decades ago, and the basic idea is as follows. A camera outside the body (on a pair of glasses, for instance) sends a signal over a wire to a tiny microelectrode array, which consists of many tiny electrodes that pierce the non-functioning retinal surface and stimulate the working cells directly.
The problems with this are mainly that powering and sending data to the array requires a wire running from outside the eye to the inside, generally a “don’t” when it comes to prosthetics, and the body in general. The array is also limited in the number of electrodes it can have by the size of each, meaning that for many years the effective resolution in the best case was on the order of a few dozen or a few hundred “pixels.” (The concept doesn’t translate directly because of the way the visual system works.)
Ghezzi’s approach obviates both these problems with the use of photovoltaic materials, which turn light into an electric current. It’s not so different from what happens in a digital camera, except instead of recording the charge as an image, it sends the current into the retina like the powered electrodes did. There’s no need for a wire to relay power or data to the implant, because both are provided by the light shining on it.
In the case of the EPFL prosthesis, there are thousands of tiny photovoltaic dots, which would in theory be illuminated by a device outside the eye sending light in according to what it detects from a camera. Of course, it’s still an incredibly difficult thing to engineer. The other part of the setup would be a pair of glasses or goggles that both capture an image and project it through the eye onto the implant.
We first heard of this approach back in 2018, and things have changed somewhat since then, as a new paper documents.
“We increased the number of pixels from about 2,300 to 10,500,” explained Ghezzi in an email to TechCrunch. “So now it is difficult to see them individually and they look like a continuous film.”
Of course, when those dots are pressed right up against the retina, it’s a different story. After all, that’s only about 100×100 pixels if arranged in a square, which is not exactly high definition. But the idea isn’t to replicate human vision, which may be an impossible task to begin with, let alone a realistic goal for a first attempt.
“Technically it is possible to make pixels smaller and denser,” Ghezzi explained. “The problem is that the current generated decreases with the pixel area.”
So the more you add, the tougher it is to make it work, and there’s also the risk (which they tested) that two adjacent dots will stimulate the same network in the retina. But too few and the image created may not be intelligible to the user. 10,500 sounds like a lot, and it may be enough — but the simple fact is that there’s no data to support that. To start on that the team turned to what may seem like an unlikely medium: VR.
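The trade-off Ghezzi describes can be sketched in a few lines. The numbers below are illustrative assumptions, not EPFL’s actual specifications; the only relationship taken from the article is that per-pixel photocurrent falls as pixel area shrinks.

```python
# Sketch of the resolution-vs-current trade-off described above.
# Implant size and current density are assumed values for illustration.
import math

IMPLANT_DIAMETER_MM = 13.0         # assumed diameter of the active area
CURRENT_DENSITY_UA_PER_MM2 = 5.0   # assumed photocurrent per unit area

def per_pixel_current(n_pixels, fill_factor=0.85):
    """Photocurrent per pixel if n_pixels share the implant's area.

    Assumes current scales linearly with pixel area, per Ghezzi's
    remark that "the current generated decreases with the pixel area".
    """
    implant_area = math.pi * (IMPLANT_DIAMETER_MM / 2) ** 2
    pixel_area = implant_area * fill_factor / n_pixels
    return CURRENT_DENSITY_UA_PER_MM2 * pixel_area

# Packing more pixels into the same area leaves each with less current.
for n in (2_300, 10_500, 50_000):
    print(f"{n:>6} pixels: {per_pixel_current(n):.4f} uA per pixel")
```

Under these assumptions, quadrupling the pixel count quarters the current each one can deliver, which is why the team needed evidence that 10,500 is actually enough before pushing density further.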
Because the team can’t exactly do a “test” installation of an experimental retinal implant on people to see if it works, they needed another way to tell whether the dimensions and resolution of the device would be sufficient for certain everyday tasks like recognizing objects and letters.
To do this, they put people in VR environments that were dark except for little simulated “phosphors,” the pinpricks of light they expect to create by stimulating the retina via the implant; Ghezzi likened what people would see to a constellation of bright, shifting stars. They varied the number of phosphors, the area they appear over, and the length of their illumination or “tail” when the image shifted, asking participants how well they could perceive things like a word or scene.
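A simulation like the one described varies three knobs: how many phosphors there are, how wide a visual angle they cover, and how long each persists. The sketch below parameterizes those conditions; the structure and values are hypothetical, not the team’s actual protocol.

```python
# Rough sketch of the phosphor-simulation conditions described above.
# Field names and example values are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class PhosphorField:
    n_phosphors: int         # how many simulated pinpricks of light
    visual_angle_deg: float  # diameter of the field they span
    tail_frames: int         # frames each dot persists after the image shifts

    def sample_positions(self, seed=0):
        """Scatter phosphors uniformly over the field as (eccentricity, angle)."""
        rng = random.Random(seed)
        radius = self.visual_angle_deg / 2
        return [(rng.uniform(0.0, radius), rng.uniform(0.0, 360.0))
                for _ in range(self.n_phosphors)]

# Two example conditions: same dot count, narrow vs. wide field.
narrow = PhosphorField(n_phosphors=10_500, visual_angle_deg=10.0, tail_frames=2)
wide = PhosphorField(n_phosphors=10_500, visual_angle_deg=45.0, tail_frames=2)
print(len(wide.sample_positions()))  # 10500 dots spread over 45 degrees
```

Sweeping these parameters while asking participants to read words or describe scenes is what lets the team compare conditions without implanting anything.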
Their primary finding was that the most important factor was visual angle—the overall size of the area where the image appears. Even a clear image is difficult to understand if it only takes up the very center of your vision, so even if overall clarity suffers, it’s better to have a wide field of view. The brain’s robust visual processing intuits things like edges and motion even from sparse inputs.
This demonstration showed that the implant’s parameters are theoretically sound, and the team can start working toward human trials. That can’t happen in a hurry, and while this approach is very promising compared with earlier, wired ones, it will still be several years, even in the best case, before it could be made widely available. Still, the very prospect of a working retinal implant of this type is an exciting one, and we’ll be following it closely.
The life and work of Thomas Wiggins, who toured as “Blind Tom,” has been given more attention in recent years.
The writer Ved Mehta was best known for his 12-volume memoir, which focused on the troubled modern history of his native India and his early struggles with blindness.
The home pregnancy test has long been lauded for giving women privacy and autonomy. But that’s not the case for everyone who takes it.
A blind runner issued a challenge to technologists last year to find a way for him to run safely without a guide. They did.
Music students who are blind and visually impaired are offered a five-week course. “For me, this class is about dance,” its teacher says.
Apple has packed an interesting new accessibility feature into the latest beta of iOS: a system that detects the presence of and distance to people in the view of the iPhone’s camera, so blind users can social distance effectively, among many other things.
The feature emerged from Apple’s ARKit, for which the company developed “people occlusion,” which detects people’s shapes and lets virtual items pass in front of and behind them. The accessibility team realized that this, combined with the accurate distance measurements provided by the lidar units on the iPhone 12 Pro and Pro Max, could be an extremely useful tool for anyone with a visual impairment.
Of course during the pandemic one immediately thinks of the idea of keeping six feet away from other people. But knowing where others are and how far away is a basic visual task that we use all the time to plan where we walk, which line we get in at the store, whether to cross the street, and so on.
The new feature, which will be part of the Magnifier app, uses the lidar and wide-angle camera of the Pro and Pro Max, giving feedback to the user in a variety of ways.
First, it tells the user whether there are people in view at all. If someone is there, it then says how far away the closest person is, in feet or meters, updating regularly as they approach or move away. The sound is played in stereo, corresponding to the person’s direction in the camera’s view.
Second, it allows the user to set tones corresponding to certain distances. For example, if they set the distance at six feet, they’ll hear one tone if a person is more than six feet away, another if they’re inside that range. After all, not everyone wants a constant feed of exact distances if all they care about is staying two paces away.
The third feature, perhaps extra useful for folks who have both visual and hearing impairments, is a haptic pulse that goes faster as a person gets closer.
Last is a visual feature for people who need a little help discerning the world around them, an arrow that points to the detected person on the screen. Blindness is a spectrum, after all, and any number of vision problems could make a person want a bit of help in that regard.
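The distance-based feedback described above reduces to simple threshold and rate logic. The sketch below is a hypothetical illustration of that behavior, not Apple’s API; the function names and the 1.8 m (about six feet) default are assumptions.

```python
# Hypothetical sketch of the feedback logic described above: one tone
# beyond a user-set distance, another inside it, and a haptic pulse
# that speeds up as the person gets closer. Not Apple's actual API.

def tone_for_distance(distance_m: float, threshold_m: float = 1.8) -> str:
    """Pick a tone: 'far' beyond the user's set threshold, 'near' inside it."""
    return "far" if distance_m > threshold_m else "near"

def haptic_pulse_hz(distance_m: float, max_hz: float = 10.0) -> float:
    """Pulse faster as the person gets closer (simple inverse relation,
    capped so the rate stays finite at very short range)."""
    return min(max_hz, 1.0 / max(distance_m, 0.1))

print(tone_for_distance(3.0))  # -> 'far' (beyond the ~6 ft threshold)
print(tone_for_distance(1.0))  # -> 'near' (inside the set range)
```

The point of the threshold mode is exactly what the article notes: a user who only cares about staying two paces away gets a binary cue instead of a constant stream of exact distances.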
The system requires a decent image on the wide-angle camera, so it won’t work in pitch darkness. And while the restriction of the feature to the high end of the iPhone line reduces the reach somewhat, the constantly increasing utility of such a device as a sort of vision prosthetic likely makes the investment in the hardware more palatable to people who need it.
This is far from the first tool of its kind; many phones and dedicated devices have features for finding objects and people, but it’s not often that such a capability comes baked in as a standard feature.
People detection should be available on the iPhone 12 Pro and Pro Max running the iOS 14.2 release candidate that was just made available today. Details will presumably appear soon on Apple’s dedicated iPhone accessibility site.
Enrique Oliu, a blind radio broadcaster for the Tampa Bay Rays, relies on crowd noise and on-field sounds to do his job. This season, he’s had to adjust more than anyone.
So long to overhyped innovations. Hello to tech that embeds accessibility into everyday devices.
More than 6 million Americans have vision problems that cannot be corrected by glasses or contact lenses. Companies like IrisVision are creating headsets to help them see better.
Adults and children in the United States have been blinded, hospitalized, and, in some cases, even died after drinking hand sanitizers contaminated with the extremely toxic alcohol methanol, the Food and Drug Administration reports.
In an updated safety warning, the agency identified five more brands of hand sanitizer that contain methanol, a simple alcohol often linked to incorrectly distilled liquor that is poisonous if ingested, inhaled, or absorbed through the skin.
The newly identified products are in addition to nine methanol-containing sanitizers the FDA identified last month, which are all made by the Mexico-based manufacturer Eskbiochem SA de CV. According to FDA testing, one of the products contained 81 percent methanol and no ethanol, a safe alcohol typically used in hand sanitizers. At the time, the agency reported that it was “not aware of any reports of adverse events associated with these hand sanitizer products.”
Masks, enforced social distancing and other public health measures intended to slow the spread of the coronavirus pose unique challenges to the 37 million American adults with impaired hearing.
He waged a successful campaign to overturn a State Department rule prohibiting the blind from becoming Foreign Service officers.
Just in time for Easter, the story of a blind state leader who is giving up his office to join the Jesuits.