BlindWays is the first multimodal 3D human motion benchmark for pedestrians who are blind, featuring data from 11 participants (varying in gender, age, visual acuity, onset of disability, mobility aid use, and navigation habits) in an outdoor navigation study. We provide rich two-level textual descriptions informed by third-person and egocentric videos. The example pairs below illustrate the two levels: each motion clip is described first with a high-level summary and then with a detailed, step-by-step account (a schematic sketch of such a record follows the examples).
A blind man with a guide dog is walking up a set of stairs, holding the handle in his left hand. He walks confidently at a relatively fast pace and continues walking after reaching the top of the stairs.
A blind man with a guide dog is walking up a set of stairs, holding the handle in his left hand. He walks confidently up 11 stairs without hesitation. He reaches the top and takes seven more steps forward.
A blind man with a cane in his right hand shuffles in place at one side of an intersection, turning in multiple directions in an attempt to orient himself. He maintains his cane in front of him tapping the ground.
A blind man with a cane in his right hand takes two steps forward, and then two side steps left while turning about 90 degrees to the right and tapping the ground in front of him with his cane. He then takes three small side steps to the right while tapping the ground in front of him with his cane. He then takes a small step to turn his body another 90 degrees to the right while tapping the ground in front of him with his cane.
A blind woman with a cane is walking while avoiding obstacles in her path. She appears to be trying to enter the chapel, using the cane in her right hand to change direction whenever there is an obstruction in front of her.
A blind woman with a cane in her right hand moves ahead, using her cane to find obstacles in her path. Sweeping the cane in front of her from right to left, she finds a green barrier in her path. Once she finds the green barrier, she turns to her left and moves ahead, continuing to sweep her cane from right to left.
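As a rough sketch of how one of these two-level annotations could be paired with its motion clip in code, the record below reuses the first example above. The `AnnotatedClip` class and all field names are hypothetical illustrations, not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class AnnotatedClip:
    """Hypothetical container for one motion clip and its two-level description."""

    participant_id: str            # anonymized participant ID, e.g., "P1"
    route_id: int                  # which of the eight urban routes was walked
    motion: Optional[np.ndarray]   # (num_frames, num_joints, 3) 3D joint positions
    high_level_description: str    # coarse, one-sentence summary of the movement
    detailed_description: str      # fine-grained, step-by-step account


clip = AnnotatedClip(
    participant_id="P1",
    route_id=1,
    motion=None,  # placeholder; real data would hold the sensor-derived poses
    high_level_description=(
        "A blind man with a guide dog is walking up a set of stairs, "
        "holding the handle in his left hand."
    ),
    detailed_description=(
        "He walks confidently up 11 stairs without hesitation. He reaches "
        "the top and takes seven more steps forward."
    ),
)
```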
People who are blind perceive the world differently than those who are sighted. This often translates to different motion characteristics; for instance, when crossing at an intersection, blind individuals may move in ways that could be more dangerous, e.g., veering further from the path and employing touch-based exploration around curbs and obstacles that may seem unpredictable. Yet, the ability of 3D motion models to capture such behavior has not been previously studied, as existing datasets for 3D human motion lack diversity and are biased toward people who are sighted. In this work, we introduce BlindWays, the first multimodal motion benchmark for pedestrians who are blind. We collect 3D motion data using wearable sensors with 11 blind participants navigating eight different routes in a real-world urban setting. Additionally, we provide rich textual descriptions that capture the distinctive movement characteristics of blind pedestrians and their interactions with both the navigation aid (e.g., a white cane or a guide dog) and the environment. We benchmark state-of-the-art 3D human prediction models, finding poor performance with both off-the-shelf and pre-training-based methods on our novel task. To contribute toward safer and more reliable autonomous systems that reason over diverse human movements in their environments, we publicly release our novel text-and-motion benchmark.
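For context on the benchmarking setup, 3D human motion prediction is commonly scored with mean per-joint position error (MPJPE), the average Euclidean distance between predicted and ground-truth joints. The snippet below is a minimal sketch of that standard metric under assumed array shapes, not the paper's exact evaluation protocol:

```python
import numpy as np


def mpjpe(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean per-joint position error between predicted and ground-truth motion.

    Both arrays have shape (num_frames, num_joints, 3); the result is the
    per-joint Euclidean distance, averaged over all joints and frames.
    """
    return float(np.linalg.norm(pred - target, axis=-1).mean())


# Toy usage: a prediction offset by 5 cm on every joint yields an MPJPE of 0.05 m.
gt = np.zeros((60, 22, 3))              # 60 frames, 22 joints (an assumed skeleton)
pred = gt + np.array([0.05, 0.0, 0.0])  # shift every joint 5 cm along x
print(mpjpe(pred, gt))                  # -> 0.05
```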
This dataset contains contributions from 11 blind participants, labelled with participant IDs (e.g., P1, P2). Below is a brief overview of its structure: