MRS-VPR: a multi-resolution sampling based global visual place recognition method

Published in IEEE ICRA, 2019

Recommended citation: Peng Yin, Rangaprasad Arun Srivatsan, Yin Chen, Xueqian Li, et al. (2019). "MRS-VPR: a multi-resolution sampling based global visual place recognition method." IEEE ICRA. http://maxtomCMU.github.io/files/08793853.pdf


Abstract: Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered one of the most successful approaches for achieving long-term localization under varying environmental conditions and changing viewpoints. However, SeqSLAM relies on a brute-force sequential matching method, which is computationally intensive. In this work, we introduce a multi-resolution sampling-based global visual place recognition method (MRS-VPR), which can significantly improve the efficiency and accuracy of sequential matching. The novelty of this method lies in its coarse-to-fine searching pipeline and a particle filter-based global sampling scheme, which together balance matching efficiency and accuracy in long-term navigation tasks. Moreover, our model performs much better than SeqSLAM when the testing sequence spans a much smaller time scale than the reference sequence. Our experiments demonstrate that MRS-VPR efficiently locates short temporary trajectories within long-term reference ones without compromising accuracy relative to SeqSLAM.
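To make the coarse-to-fine idea concrete, below is a minimal sketch (not the paper's implementation) of sequential matching with particle filter-style resampling of candidate start offsets across resolution levels. It assumes per-frame image descriptors stored as rows of a NumPy array; all function names, parameters (`levels`, `n_particles`), and the sum-of-absolute-differences cost are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np


def seq_cost(query, reference, start, step):
    """Sum-of-absolute-differences between a (sub-sampled) query sequence and
    the reference subsequence starting at `start`, taking every `step`-th frame."""
    idx = np.clip(start + step * np.arange(len(query)), 0, len(reference) - 1)
    return np.abs(query - reference[idx]).sum()


def mrs_match(query, reference, levels=(8, 4, 2, 1), n_particles=200, seed=0):
    """Coarse-to-fine alignment of a short query sequence against a long
    reference sequence, resampling candidate start offsets particle-filter
    style at each resolution level (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    max_start = len(reference) - len(query)
    # Initial particles: start offsets drawn uniformly over the reference.
    particles = rng.integers(0, max_start + 1, size=n_particles)

    for step in levels:
        # Score each candidate offset on a coarsely sub-sampled version of the
        # sequences (large `step` = coarse level, step 1 = full resolution).
        costs = np.array([seq_cost(query[::step], reference, p, step)
                          for p in particles])
        weights = np.exp(-(costs - costs.min()) / (costs.std() + 1e-9))
        weights /= weights.sum()

        # Resample offsets in proportion to their weights, then jitter them
        # so the next, finer level can refine the alignment locally.
        particles = rng.choice(particles, size=n_particles, p=weights)
        jitter = rng.integers(-step, step + 1, size=n_particles)
        particles = np.clip(particles + jitter, 0, max_start)

    # Final estimate: the surviving offset with the lowest full-resolution cost.
    final = np.array([seq_cost(query, reference, p, 1) for p in particles])
    return int(particles[final.argmin()])


# Hypothetical usage: a short, noisy query taken from a long reference sequence.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.random((5000, 64))                           # long reference
    query = reference[1200:1240] + 0.01 * rng.random((40, 64))   # short query
    print(mrs_match(query, reference))                           # near 1200
```

The point of the sketch is only the structure: cheap scoring at coarse resolution prunes most of the long reference, and resampling concentrates the surviving candidates before the full-resolution check, which is how the coarse-to-fine pipeline avoids brute-force sequential matching.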

Download paper here