Voice- and gaze-controlled hand interactions in XR to enable people with hand disabilities.
Hand or finger tracking is a technique that calculates the position of the user's fingers in real time and displays the user's hand as a 3D object in the virtual environment.
In XR products, the use of hands (via controllers or hand tracking) is vital for accessing the full range of interactivity. How might we reproduce this for people with arm and hand disabilities?
In the U.S. alone, 1 in 50 people is affected by limb paralysis (that's 5.4 million people), and another 2 million people are amputees.
The primary user would fall under one or more of these 4 cases.
People with limb paralysis, cerebral palsy, spinal cord injuries, amputations, and similar conditions.
People with temporary conditions such as fractures, sprains, pain, or other medical issues.
People whose hands (one or both) are occupied with other tasks.
Example: doctors during surgical procedures.
Employees, managers, and clients who will have to use hand tracking devices as these new technologies are adopted.
I interviewed 8 people, including members of the disabled community as well as developers and designers in the XR industry.
I chatted with people at the XR Access 2020 online conference and gathered insights from a panel discussion on mobility and motor needs to better understand their pain points with VR/AR technology.
“I can only use one of my hands, so it's difficult if the controller needs both hands.”
"I should be able to specify mobility needs, and carry it with me across devices and platforms.”
“Plug and play nature is what we are looking for in VR.”
I also gathered feedback from industry professionals who have experience with hand tracking applications and learned valuable lessons from their projects.
“It is frustrating when interactions feel like they need to be as surgical as a mouse, not organic like a hand.”
"Grabbing objects in VR needs to have clear feedback to compensate for the lack of haptics.”
“Simulated hand physics must be improved to get the right feeling of grip and weight for different objects.”
Hand tracking Hackathon organized by The Glimpse Group in NYC.
In a team of 3, we created a slider interaction. This gave me "hands-on" experience with the design process and functionality of such interactions.
Hand tracking Hackathon hosted by Futures NI and SideQuest.
In a team of 4, we created a gesture-based touchless UI for navigating a multiple-choice quiz in a museum.
To test the usability of Freehand Assistant, I chose to adapt a training scenario from the HoloLens 2 partner spotlight featuring PTC and Howden. This scenario was also demoed by Microsoft during the keynote at Mobile World Congress 2019.
I used card sorting to prioritize the most recurring interactions.
These interactions are categorized into 3 key features.
Freehand Assistant is designed to be as device-agnostic as possible. After comparing UI frameworks and VR/AR devices, I found that this was achievable by relying on input modalities, like voice and gaze, that are shared across HMDs.
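To illustrate what device-agnostic could look like in practice, here is a minimal TypeScript sketch (the interfaces and names are hypothetical, not taken from any specific SDK): each headset integration only needs to expose a gaze ray and a stream of recognized voice phrases, and everything built on top of that layer stays the same across devices.

```typescript
// Minimal shared math types: a ray from the head/eye origin along the gaze direction.
interface Vec3 { x: number; y: number; z: number; }
interface Ray { origin: Vec3; direction: Vec3; }

// Device-agnostic contract: any HMD integration only has to provide these two modalities.
interface GazeProvider {
  // Latest gaze ray in world space (head gaze or eye gaze, whatever the device supports).
  currentGaze(): Ray;
}

interface VoiceProvider {
  // Calls the handler whenever a phrase is recognized by the device's speech engine.
  onPhrase(handler: (phrase: string) => void): void;
}

// The assistant core depends only on the abstractions, never on a specific headset SDK.
class FreehandAssistantCore {
  constructor(private gaze: GazeProvider, private voice: VoiceProvider) {}

  start(): void {
    this.voice.onPhrase((phrase) => {
      const ray = this.gaze.currentGaze();
      console.log(`Heard "${phrase}" while gazing along`, ray.direction);
      // ...dispatch to interaction handlers (grab, teleport, etc.) here.
    });
  }
}
```

Porting to a new headset would then only mean writing a thin GazeProvider/VoiceProvider adapter for that device.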
I asked people doing kitchen tasks to say their actions aloud, as if giving commands to their hands. This experiment probed how we verbalize motor decisions.
Often, a person would look at an object before verbalizing the action, and this act of looking assumed that the hand already knew what was being "looked at" when it received the motor command. This was a crucial insight: the user always expects the hand to share the same mind, so whatever is under their gaze is the implicit target of the command.
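To make this "shared mind" concrete, the sketch below (hypothetical names and simplified geometry, not the actual prototype code) resolves a spoken verb against whatever object the gaze ray currently intersects, so the command always acts on the thing being looked at.

```typescript
interface Vec3 { x: number; y: number; z: number; }
interface GazeRay { origin: Vec3; direction: Vec3; } // direction assumed normalized

// Simplified scene objects represented as bounding spheres for the gaze test.
interface SceneObject { name: string; center: Vec3; radius: number; }

// Returns the object the gaze ray hits first, or null if the user is looking at empty space.
function resolveGazeTarget(gaze: GazeRay, objects: SceneObject[]): SceneObject | null {
  let best: { obj: SceneObject; dist: number } | null = null;
  for (const obj of objects) {
    // Ray-sphere test: project the vector to the sphere center onto the gaze direction.
    const toCenter = {
      x: obj.center.x - gaze.origin.x,
      y: obj.center.y - gaze.origin.y,
      z: obj.center.z - gaze.origin.z,
    };
    const t =
      toCenter.x * gaze.direction.x +
      toCenter.y * gaze.direction.y +
      toCenter.z * gaze.direction.z;
    if (t < 0) continue; // object is behind the user
    const closest = {
      x: gaze.origin.x + gaze.direction.x * t,
      y: gaze.origin.y + gaze.direction.y * t,
      z: gaze.origin.z + gaze.direction.z * t,
    };
    const dx = closest.x - obj.center.x;
    const dy = closest.y - obj.center.y;
    const dz = closest.z - obj.center.z;
    const missDist = Math.sqrt(dx * dx + dy * dy + dz * dz);
    if (missDist <= obj.radius && (!best || t < best.dist)) best = { obj, dist: t };
  }
  return best ? best.obj : null;
}

// The voice command only carries the verb; the noun is implied by gaze.
function onVoiceCommand(verb: string, gaze: GazeRay, objects: SceneObject[]): void {
  const target = resolveGazeTarget(gaze, objects);
  if (!target) return; // nothing under gaze: ignore, or ask the user to clarify
  console.log(`Applying "${verb}" to ${target.name}`);
}
```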
Freehand with a diegetic indicator (like a wrist band, shapes, or wings)
In my prototype, the user will undergo a VR training to repair a gear failure in one of the engines in a manufacturing plant.
I attached a laser pointer to a cap to simulate the gaze pointer and stuck a camera on my face to capture a first-person perspective. This helped me discard assumptions early.
For the lo-fi prototype, I created wireframes in Figma to mimic a VR scenario. It starts with an onboarding flow that introduces the combined voice and gaze interaction style.
Voice command "teleport" activates a teleportation trajectory from the hand to the floor. This trajectory follows the user's gaze anywhere on the ground plane.
The user goes through a step-by-step training procedure where the task is to fix a gear failure in the engine of a manufacturing plant.
By saying "pick up" or "grab", objects will fly to your hand.
Move around the virtual space by aiming a path with your gaze.
Receive visual feedback for every action to confirm that the user's intention was understood (a rough sketch of how these pieces could fit together follows below).
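Here is one way these three features could fit together in code, assuming simplified object and feedback states that are not taken from the actual prototype: the gazed object is highlighted, a "grab" or "pick up" phrase sends it flying toward the virtual hand, and the state changes double as visual feedback.

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Feedback states communicate intent back to the user at every step.
type FeedbackState = "idle" | "gazed" | "flying" | "held";

interface GrabbableObject {
  name: string;
  position: Vec3;
  state: FeedbackState;
}

const lerp = (a: Vec3, b: Vec3, t: number): Vec3 => ({
  x: a.x + (b.x - a.x) * t,
  y: a.y + (b.y - a.y) * t,
  z: a.z + (b.z - a.z) * t,
});

// Called every frame: highlight whatever is gazed at, and animate any grabbed object
// toward the virtual hand so the "fly to your hand" effect compensates for missing haptics.
function updateGrab(
  objects: GrabbableObject[],
  gazedObject: GrabbableObject | null,
  handPosition: Vec3,
  deltaTime: number
): void {
  for (const obj of objects) {
    if (obj.state === "flying") {
      obj.position = lerp(obj.position, handPosition, Math.min(1, deltaTime * 8));
      // Snap into the hand once close enough and mark it as held.
      const d = Math.hypot(
        obj.position.x - handPosition.x,
        obj.position.y - handPosition.y,
        obj.position.z - handPosition.z
      );
      if (d < 0.02) obj.state = "held";
    } else if (obj.state !== "held") {
      // Visual feedback: the gazed object is highlighted so the user knows which one will respond.
      obj.state = obj === gazedObject ? "gazed" : "idle";
    }
  }
}

// Voice hook: "grab" or "pick up" sends the currently gazed object flying toward the hand.
function onGrabPhrase(phrase: string, gazedObject: GrabbableObject | null): void {
  if ((phrase === "grab" || phrase === "pick up") && gazedObject) {
    gazedObject.state = "flying";
  }
}
```

Keeping the feedback states explicit makes it easy to swap in different visual treatments (glow, outline, sound) per device without touching the interaction logic.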
The next phase involves developing a mid-fidelity prototype and doing lots of user testing. Adding features and expanding functionality for more than just training scenarios is also on the radar.
To make it as robust as possible, more testing is definitely needed.
Accounting for other scenarios and adding features accordingly will be the next phase.
Machine learning and natural language processing could increase the accuracy of voice recognition.
1. Categorize and prioritize.
Designing for XR is multi-dimensional. To reduce complexity and avoid getting overwhelmed by all the possibilities, it is useful to categorize the user's needs and prioritize functionality.
2. Learn from reality; simulate in virtuality.
Experiences are founded on our perception. Therefore, by learning how we sense things in reality, we can better understand how to simulate them in XR mediums.
3. Test in VR as often as possible.
Behavior in VR emerges out of playtesting in VR.
What if we could use more than two hands?
Designing for accessibility does not have to be limiting; instead, it can become a gateway to unlocking capacities beyond cognitive and physical limitations.