FusionMapping: Learning Depth Prediction with Monocular Images and 2D Laser Scans

Published on arXiv, 2019

Recommended citation: Peng Yin, Jianing Qian, Yibo Cao, David Held and Howie Choset. (2019). "FusionMapping: Learning Depth Prediction with Monocular Images and 2D Laser Scans." arXiv preprint. http://maxtomCMU.github.io/files/1912.00096.pdf

Abstract: Acquiring accurate three-dimensional depth information conventionally requires expensive multibeam LiDAR devices. Recently, researchers have developed a less expensive option by predicting depth information from two-dimensional color imagery. However, there still exists a substantial gap in accuracy between depth information estimated from two-dimensional images and real LiDAR point clouds. In this paper, we introduce a fusion-based depth prediction method, called FusionMapping. This is the first method that fuses color imagery and two-dimensional laser scans to estimate depth information. More specifically, we propose an autoencoder-based depth prediction network and a novel point-cloud refinement network for depth estimation. We analyze the performance of our FusionMapping approach on the KITTI LiDAR odometry dataset and an indoor mobile robot system. The results show that our introduced approach estimates depth with better accuracy when compared to existing methods.
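To give a sense of what "fusing color imagery and a 2D laser scan" involves before any learning takes place, the sketch below projects a planar laser scan into the camera's image plane to form a sparse depth channel that could be stacked with the RGB input of a depth-prediction network. This is a minimal illustration, not the paper's implementation: the function name, parameters, and the assumption that the scanner and camera share an origin (with the scan lying in the camera's horizontal plane) are all simplifications made up for this example.

```python
import math

def scan_to_sparse_depth(ranges, angles, fx, cx, cy, H, W):
    """Project 2D laser returns (range, bearing) into a sparse H x W depth map.

    Illustrative sketch only: assumes the scanner and camera share an
    origin and the scan lies in the camera's horizontal plane, so every
    return projects onto the image row at the principal point (v = cy).
    """
    depth = [[0.0] * W for _ in range(H)]
    for r, a in zip(ranges, angles):
        # Polar -> Cartesian in the camera frame (z forward, x right).
        x, z = r * math.sin(a), r * math.cos(a)
        if z <= 0:
            continue  # return is behind the camera
        # Pinhole projection onto the image plane.
        u = round(fx * x / z + cx)
        v = round(cy)  # scan plane is at camera height (y = 0)
        if 0 <= u < W and 0 <= v < H:
            depth[v][u] = z
    return depth

# A single return straight ahead at 2 m lands at the principal point
# with depth 2.0; all other pixels stay 0 (i.e., "no measurement").
sparse = scan_to_sparse_depth([2.0], [0.0], fx=100.0, cx=32, cy=24, H=48, W=64)
```

A network consuming this representation would typically concatenate the sparse depth map with the three RGB channels into a 4-channel input, letting the encoder learn where to trust the (accurate but sparse) laser measurements over the (dense but ambiguous) photometric cues.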

Download paper here