The wildly popular map app will now tell you whether locations are suitable for people with access needs — and it’s thanks to a group of Googlers who worked on the feature in their “20% time.”
It’s a famous policy of the Californian search giant: Employees can spend 20% of their time working on other projects unrelated to their main jobs. Gmail, AdSense, and Google News all started as 20% projects.
These days, Google employees need to get permission from managers to use this time, and most don’t. Google HR boss Laszlo Bock says the practice has “waxed and waned” over time. But some still use it — and Rio Akasaka is one of them.
Steve Mahan’s solo ride showed it’s time to take the car to market.
Now 63 and having lost his sight, Mahan has become one of those capsule-bound explorers. In October 2015, he became the first member of the public to ride in Google’s self-driving pod-like prototype, alone and on public roads. No steering wheel, no pedals, no human on board to step in should something go wrong.
Google’s Latest Accessibility Feature Is So Good, Everyone Will Use It
Though it was developed for users with severe motor impairment, Voice Access could revolutionize how anyone uses their phone.
Announced this week at I/O 2016 as something that will ship with Android N, Voice Access is a way for people with severe motor impairment to control every aspect of their phones using their voices. But once you see it in action, the broader impact of Voice Access is immediately obvious.
Here’s how it works. When Voice Access is installed, you can enable it with Android’s “Okay Google” command by just saying: “Okay Google, turn on Voice Access.” Once it’s on, it’s always listening—and you don’t have to use the Okay Google command anymore. With Voice Access, all of the UI elements that are normally tap targets are overlaid by a series of numbers. You can tell Voice Access to “tap” these targets by saying the corresponding number aloud.
But these numbers are actually meant to serve as a backup method of control: You can also just tell Voice Access what you want to do. For example, you could ask it to “open camera,” and then tell it to “tap shutter.” Best of all? Any app should work with Voice Access, as long as it’s already following Google’s accessibility guidelines.
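The two-tier control scheme described above — label commands first, numeric overlay as a fallback — can be modeled in a few lines. This is a simplified, hypothetical sketch, not Voice Access’s implementation (which runs as an Android accessibility service); the target names and command format are made up for illustration.

```python
# Simplified model of Voice Access-style command resolution (illustrative only).

def build_overlay(targets):
    """Assign a spoken number to every tappable UI element,
    mimicking the numeric overlay Voice Access draws on tap targets."""
    return {str(i): label for i, label in enumerate(targets, start=1)}

def resolve_command(command, targets):
    """Resolve a spoken 'tap ...' command to a tap target.

    Prefers a direct label match ("tap shutter"); falls back to the
    numeric overlay ("tap 2") when no label matches. Returns None if
    the command matches nothing.
    """
    phrase = command.lower().strip()
    if phrase.startswith("tap "):
        phrase = phrase[4:].strip()
    for label in targets:
        if phrase == label.lower():   # direct match: "tap shutter"
            return label
    return build_overlay(targets).get(phrase)  # fallback: "tap 2"

targets = ["Shutter", "Flash", "Switch camera"]
print(resolve_command("tap shutter", targets))  # → Shutter
print(resolve_command("tap 2", targets))        # → Flash
```

The point of the fallback is robustness: a label may be unpronounceable or missing (an unlabeled icon), but every visible target always has a number.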
Technically, Voice Access builds on two technologies Google has spent years laying the groundwork for. The first is natural language processing, which lets Google Assistant understand your voice.
Google’s Eve Andersson tells Co.Design how today’s accessibility problems could lead to improvements in robots, Google Maps, and even YouTube.
TEACHING AIS HOW TO NOTICE, NOT JUST SEE
Like Microsoft, which recently announced a computer vision-based accessibility project called Seeing AI, Google is interested in how to convey visual information to blind users through computer vision and natural language processing. And like Microsoft, Google is grappling with the same problem: How do you communicate that information without reading aloud an endless stream-of-consciousness list of everything a computer sees around itself, no matter how trivial each item may be?
Thanks to Knowledge Graph and machine learning—the same techniques Google uses to let you search photos by content (like photos of dogs, or photos of people hugging)—Andersson tells me that Google is already good enough at identifying objects to pick them out of a video stream in real time. So a blind user wearing a Google Glass-like wearable, or a body cam hooked up to a smartphone, could get real-time updates on what can be seen around them.
But again, the big accessibility problem that needs to be solved here is one of priority.
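One way to frame that priority problem is as a ranking step between detection and speech: score each detection by how much it matters, and announce only what crosses a threshold. The sketch below is purely illustrative — the object classes, importance weights, threshold, and announcement limit are all invented for the example, not drawn from Google’s system.

```python
# Sketch of prioritizing scene detections before speaking them aloud.
# All weights and thresholds here are made up for illustration.

IMPORTANCE = {"person": 0.9, "car": 0.8, "door": 0.6, "trash can": 0.2}

def announce(detections, threshold=0.5, limit=2):
    """Rank (label, confidence) detections by importance * confidence,
    and return at most `limit` labels worth speaking aloud."""
    scored = [(IMPORTANCE.get(label, 0.1) * conf, label)
              for label, conf in detections]
    scored.sort(reverse=True)
    return [label for score, label in scored if score >= threshold][:limit]

# One frame's worth of detections: a very confident trash can should still
# lose out to a nearby person and car.
frame = [("trash can", 0.99), ("person", 0.85), ("car", 0.7), ("door", 0.9)]
print(announce(frame))  # → ['person', 'car']
```

The design choice this illustrates: confidence alone isn’t salience. A detector can be 99% sure about a trash can, but a person crossing your path matters more, so importance has to be modeled separately from recognition accuracy.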
Much has been made recently of Google’s advances in natural language processing, or Google’s ability to understand and transcribe human speech. Google’s accessibility efforts lean heavily upon natural language processing, particularly its latest innovation, Voice Access. But Andersson says computers need to understand more than just speech. Forget natural language processing: computers need non-language processing.
TAKING NAVIGATION BEYOND GOOGLE MAPS
Sighted users are so used to taking directions from computers that many people (like me) can barely find their way around without first plugging an address into Waze. But moving sighted individuals from point A to point B, across well-plotted roads and highways, is navigation on a macro scale. Things get much more complicated when you’re trying to direct a blind person down a busy city street, or from one store to another inside a shopping mall. Now, you’re directing people on a micro scale, in an environment that is not as well understood or documented as roads are.
Google Maps now has an accessibility feature that tells whether or not a given place is wheelchair friendly. The feature is currently available only in select locations.
Google Maps has been ushering in new features on a regular basis, and now the app is aiming to be wheelchair friendly. It has picked up a new feature that tells you whether a particular location is wheelchair accessible. You can find this information by tapping the location summary (via the right arrow) and scrolling down to “Amenities.” If the place is accessible to wheelchair users, a check mark will appear next to “Wheelchair accessible entrance.”
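In data terms, an amenity like this is just a boolean flag on a place record. The sketch below shows how such a record might be checked; the JSON shape and field name are hypothetical, modeled on the “Wheelchair accessible entrance” amenity described above, not on Google’s actual API.

```python
# Illustrative check of a place record's accessibility amenities.
# The record shape and field name are hypothetical.

place = {
    "name": "Example Cafe",
    "amenities": {"wheelchair_accessible_entrance": True},
}

def is_wheelchair_accessible(place):
    """Return True if the record marks the entrance as wheelchair accessible.

    Missing data defaults to False, mirroring a UI that only shows the
    check mark when the amenity is positively confirmed.
    """
    return bool(place.get("amenities", {}).get("wheelchair_accessible_entrance"))

print(is_wheelchair_accessible(place))  # → True
```

Note the default-to-False behavior: for accessibility data, “unknown” and “not accessible” often have to be treated the same way in the UI, since showing an unverified check mark could strand someone at an inaccessible entrance.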
All the big technology companies have dedicated teams working on accessibility — building software and hardware features that people with disabilities can use. Employing people who have disabilities is one way to make sure tools are built with accessibility in mind…
Someone with perfect vision might not think to design traffic maps that can be understood by a colorblind user. YouTube engineer Ken Harrenstien, who is profoundly deaf, made it his mission to work on closed captioning for videos.
“If you don’t have an immediate family member or a friend who has a disability, you simply don’t know. It’s not that you want to exclude someone who has a disability, you just don’t know it,” said Astrid Weber, a user experience researcher at Google whose work has been influenced by a close friend with MS.
Weber collaborates with Google’s thousands of engineers and designers to make them think of accessibility while building products. She encourages employees to design with empathy, and to drop certain assumptions, like that everyone can touch an Android device or hear the sound an app makes.
Google recently added Voice Typing to Google Docs, which has really taken speech recognition to the next level. By simply plugging a microphone into a desktop computer, students can start using speech recognition immediately. I have found the speech recognition built into Google Docs to be very accurate, opening up all kinds of opportunities for students to quickly get their ideas down on the page. Other companies have since integrated this speech API into their own apps: you will now find voice dictation in Co:Writer Universal, WordQ, and Read&Write for Google Chrome.