Synchronous Adversarial Feature Learning for LiDAR based Loop Closure Detection

Published in IEEE ACC, 2018

Recommended citation: Peng Yin, Yuqing He, Lingyun Xu, Yan Peng, Jianda Han and Weiliang Xu. (2018). "Synchronous Adversarial Feature Learning for LiDAR based Loop Closure Detection." IEEE ACC. http://maxtomCMU.github.io/files/08431776.pdf

Abstract: Loop Closure Detection (LCD) is an essential module in the simultaneous localization and mapping (SLAM) task. In current appearance-based SLAM methods, the visual inputs are easily affected by changes in illumination, appearance, and viewpoint. Compared to visual inputs, light detection and ranging (LiDAR) point-cloud inputs, being actively sensed, are invariant to illumination and appearance changes. In this paper, we extract 3D voxel maps and 2D top-view maps from LiDAR inputs: the former captures the local geometry in a simplified 3D voxel format, while the latter captures the local road structure in a 2D image format. The most challenging problem, however, is obtaining features from the 3D and 2D maps that are robust against viewpoint differences. We propose a synchronous adversarial feature learning method for the LCD task, which learns higher-level abstract features from the different domains without any labeled data. To the best of our knowledge, this work is the first to extract multi-domain adversarial features for the LCD task in real time. To evaluate the performance, we test the proposed method on the KITTI odometry dataset. Extensive experimental results show that the proposed method substantially improves LCD accuracy, even under large viewpoint differences.

Download paper here