The BEVCALIB model uses bird's-eye view features to perform accurate LiDAR-camera calibration from raw data, demonstrating superior performance under various noise conditions.
Accurate LiDAR-camera calibration is fundamental to fusing multi-modal
perception in autonomous driving and robotic systems. Traditional calibration
methods require extensive data collection in controlled environments and cannot
compensate for transformation changes during vehicle or robot movement. In
this paper, we propose the first model that uses bird's-eye view (BEV) features
to perform LiDAR-camera calibration from raw data, termed BEVCALIB. To achieve
this, we extract camera BEV features and LiDAR BEV features separately and fuse
them into a shared BEV feature space. To fully utilize the geometric
information from the BEV feature, we introduce a novel feature selector to
select the most important features in the transformation decoder, which reduces
memory consumption and enables efficient training. Extensive evaluations on
KITTI, NuScenes, and our own dataset demonstrate that BEVCALIB establishes a
new state of the art. Under various noise conditions, BEVCALIB outperforms the
best baseline in the literature by an average of (47.08%, 82.32%) on the KITTI
dataset, and (78.17%, 68.29%) on the NuScenes dataset, in terms of (translation,
rotation), respectively. In the open-source domain, it improves upon the best
reproducible baseline by an order of magnitude. Our code and demo results are
available at https://cisl.ucr.edu/BEVCalib.
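
To make the pipeline described above concrete, below is a minimal PyTorch sketch of the core idea: camera and LiDAR BEV features are projected into a shared BEV space and fused, a learned score selects the top-k BEV cells, and a small transformer decoder consumes only those cells to regress the 6-DoF transform. All concrete choices here (channel widths, the 1x1 scoring convolution, the transformer decoder, and the quaternion head) are illustrative assumptions, not the actual BEVCALIB architecture.

```python
import torch
import torch.nn as nn

class BEVFusionCalib(nn.Module):
    """Minimal sketch: fuse camera/LiDAR BEV features, select the top-k
    BEV cells, and decode a 6-DoF LiDAR-camera transform (translation +
    rotation). Shapes and module choices are illustrative assumptions,
    not the paper's actual architecture."""

    def __init__(self, cam_ch=80, lidar_ch=128, bev_ch=256, top_k=512):
        super().__init__()
        # Project each modality into a shared BEV channel space, then fuse.
        self.cam_proj = nn.Conv2d(cam_ch, bev_ch, kernel_size=1)
        self.lidar_proj = nn.Conv2d(lidar_ch, bev_ch, kernel_size=1)
        self.fuse = nn.Conv2d(2 * bev_ch, bev_ch, kernel_size=3, padding=1)
        # Score each BEV cell; only the top-k cells reach the decoder,
        # bounding memory use regardless of the BEV grid size.
        self.score = nn.Conv2d(bev_ch, 1, kernel_size=1)
        self.top_k = top_k
        decoder_layer = nn.TransformerEncoderLayer(
            d_model=bev_ch, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(decoder_layer, num_layers=2)
        # Regress translation (3) and rotation as a quaternion (4).
        self.head = nn.Linear(bev_ch, 7)

    def forward(self, cam_bev, lidar_bev):
        # cam_bev: (B, cam_ch, H, W); lidar_bev: (B, lidar_ch, H, W)
        fused = self.fuse(torch.cat(
            [self.cam_proj(cam_bev), self.lidar_proj(lidar_bev)], dim=1))
        B, C, H, W = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)            # (B, H*W, C)
        scores = self.score(fused).flatten(1)                # (B, H*W)
        idx = scores.topk(self.top_k, dim=1).indices         # (B, k)
        selected = torch.gather(
            tokens, 1, idx.unsqueeze(-1).expand(-1, -1, C))  # (B, k, C)
        out = self.decoder(selected).mean(dim=1)             # (B, C)
        pred = self.head(out)
        t, quat = pred[:, :3], pred[:, 3:]
        return t, nn.functional.normalize(quat, dim=1)       # unit quaternion


# Usage on dummy BEV grids:
model = BEVFusionCalib()
t, q = model(torch.randn(2, 80, 64, 64), torch.randn(2, 128, 64, 64))
print(t.shape, q.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```

The top-k selection is what makes decoding cheap: attention runs over k tokens instead of the full H x W BEV grid, which mirrors the memory and training-efficiency motivation stated in the abstract.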