Floor Plan Based Active Global Localization and Navigation Aid for Persons with Blindness and Low Visions

This work was supported in part by ARO under Grant W911NF-22-1-0028, and in part by the New York University Abu Dhabi (NYUAD) Center for Artificial Intelligence and Robotics (CAIR), funded by Tamkeen under the NYUAD Research Institute Award CG010.

1 Control/Robotics Research Laboratory (CRRL), Department of Electrical and Computer Engineering, NYU Tandon School of Engineering
2 Rusk Institute of Rehabilitation, New York University Grossman School of Medicine

We thank Tanishq Bhansali for his help in conducting the experiments.

Navigation assistance for visually impaired individuals.

Demonstrating active vs. passive localization systems.

Overview of the proposed algorithm

Flow chart illustrating the process
The 2D semantic point cloud generated during the agent's motion is used to create localization hypotheses (via a particle filter) for the agent's start location. The agent's current pose, estimated using drift-corrected odometry, is used to issue real-time local targets. Once localized, the agent is navigated to the desired final destination.
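As an illustrative sketch (not the authors' released code), the snippet below shows one way such start-pose hypotheses can be weighted: each particle is a candidate start pose, scored by how well the observed 2D point cloud, transformed by that pose, lands on points sampled from the floor plan walls. All function and variable names here are assumptions.

# Minimal particle-weighting sketch; names (score_particles, floorplan_pts)
# are illustrative, not from the paper.
import numpy as np
from scipy.spatial import cKDTree

def score_particles(particles, cloud, floorplan_pts, sigma=0.1):
    """particles: (N, 3) candidate start poses (x, y, theta).
    cloud: (M, 2) points observed in the agent's local frame.
    floorplan_pts: (K, 2) points sampled from floor plan walls."""
    tree = cKDTree(floorplan_pts)
    weights = np.empty(len(particles))
    for i, (x, y, th) in enumerate(particles):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        world = cloud @ R.T + np.array([x, y])  # local frame -> floor plan frame
        dists, _ = tree.query(world)            # distance to nearest wall point
        weights[i] = np.exp(-0.5 * np.mean(dists**2) / sigma**2)
    return weights / weights.sum()              # normalized particle weights

Resampling particles in proportion to these weights as the agent explores concentrates the hypotheses on start poses consistent with the floor plan.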

Abstract

Navigating an unfamiliar environment poses substantial difficulties for an agent, such as a person with blindness or low vision, even when prior maps, like floor plans, are available. It is essential first to determine the agent's pose in the environment. The task's complexity increases when the agent also needs directions for exploring the environment to reduce uncertainty about its position. This problem of active global localization typically involves finding a transformation to match the agent's sensor-generated map to the floor plan while providing a series of point-to-point directions for effective exploration. Current methods fall into two categories: learning-based methods, which require extensive training for each environment, and non-learning-based methods, which generally depend on prior knowledge of the agent's initial position or on floor plan maps created with the same sensor modality as the agent. Addressing these limitations, we introduce a novel system for real-time, active global localization and navigation for persons with blindness and low vision. By generating semantically informed real-time goals, our approach enables local exploration and the creation of a 2D semantic point cloud for effective global localization. Moreover, it dynamically corrects odometry drift using the architectural floor plan, independent of the agent's global position, and introduces a new method for real-time loop closure on reversal. The effectiveness of our approach is validated through multiple real-world indoor experiments, which also highlight its adaptability and ease of extension to any mobile robot.

Primary Contributions

  • Semantics-driven active global localization leveraging architectural floor plans and stereo-inertial sensors.
  • A dynamic approach for correcting the agent's time-varying odometry drift utilizing the floor plan, independent of prior knowledge of the agent's initial pose.
  • Implementation of loop closure for reversal through the application of ICP and Bundle Adjustment techniques.
  • Development of an efficient, real-time semantic end-to-end system designed to facilitate navigation assistance for persons with blindness and low vision.

Video

Drift Correction

Given a hypothesis for the agent's origin, its current pose is derived from noisy visual-inertial odometry. To mitigate the resulting drift, our strategy involves aligning the agent's locally observed 2D point cloud with the corresponding section of the floor plan.
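A minimal sketch of this idea, assuming a plain point-to-point 2D ICP (the paper does not publish this exact routine; names and tolerances are illustrative): the local cloud is aligned to nearby floor plan points, and the recovered rigid transform is applied to the odometry estimate to cancel accumulated drift.

# Hedged 2D ICP sketch for floor-plan-based drift correction.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=20, tol=1e-5):
    """src: (M, 2) local cloud (odometry frame); dst: (K, 2) floor plan points.
    Returns R (2x2), t (2,) such that src @ R.T + t approximates dst."""
    tree = cKDTree(dst)
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dists, idx = tree.query(moved)          # nearest floor plan point
        # Kabsch step: best rigid transform between matched, centered sets
        mu_s, mu_d = moved.mean(0), dst[idx].mean(0)
        H = (moved - mu_s).T @ (dst[idx] - mu_d)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:               # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_d - dR @ mu_s
        R, t = dR @ R, dR @ t + dt              # compose the incremental update
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t  # apply to the odometry pose to cancel accumulated drift

Because the floor plan provides a drift-free reference for local geometry, this correction can run regardless of whether the agent's global position is known, i.e., before global localization has converged.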

Loop Closure on Reversal

In addition to the floor plan based drift correction, loop closures, as used in SLAM, further reduce accumulated drift. When the agent reverses direction along its path, the ICP algorithm aligns the instantaneous point cloud from the depth image with previously observed geometry, and the resulting constraint is refined via bundle adjustment.
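As a hedged example, assuming Open3D's point-to-point ICP (an assumption; the paper does not name the library), a reversal loop closure might align the current depth-derived cloud to a keyframe cloud stored when the agent first traversed the area, and pass the recovered relative pose to bundle adjustment as a constraint:

# Illustrative ICP loop-closure sketch using Open3D; thresholds are assumptions.
import numpy as np
import open3d as o3d

def reversal_loop_closure(curr_pts, keyframe_pts, init=np.eye(4),
                          max_dist=0.5, fitness_thresh=0.6):
    """curr_pts, keyframe_pts: (N, 3) float64 arrays from the depth image.
    Returns a 4x4 relative transform if the match is confident, else None."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(keyframe_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    if result.fitness < fitness_thresh:   # reject weak alignments
        return None
    return result.transformation          # loop-closure constraint for bundle adjustment

Rejecting low-fitness alignments keeps spurious matches from corrupting the trajectory estimate.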

Comparison of our method with AMCL and FSD

Quantitative Results

[Four figures: quantitative evaluation metrics]

Demonstrating Our Method On Different Environments

Environment 1
Environment 2
Environment 3

BibTeX

@article{global2024localization,
  author    = {Goswami, R.G. and Sinha, H. and Amith, P.V. and Hari, J. and Krishnamurthy, P. and Rizzo, J. and Tzes, A. and Khorrami, F.},
  title     = {Floor Plan Based Active Global Localization and Navigation Aid for Persons with Blindness and Low Visions},
  journal   = {IEEE Robotics and Automation Letters (RA-L)},
  year      = {2024},
}