Code: https://github.com/NKU-MobFly-Robotics/SPTG-LCC
Paper: https://ieeexplore.ieee.org/abstract/document/11145812/
Y. Wang, Y. Tong, R. Wang*, S. Zhang, Z. Song, X. Zhang. SPTL-LCC: Single-shot, Pixel-level, Target-free and LiDAR-type Agnostic LiDAR-Camera Extrinsic Self-Calibration. IEEE Transactions on Aerospace and Electronic Systems (accepted).
Abstract
The fusion of LiDAR and camera data holds broad application prospects, with calibration as a crucial prerequisite. In this paper, we propose SPTL-LCC, a LiDAR-type-agnostic and target-free LiDAR-camera extrinsic calibration framework. The core of SPTL-LCC lies in extracting accurate 3D-2D correspondences in 2D image space from a densified point cloud and an image. To ensure a sufficiently dense point cloud is acquired, a point cloud accumulation evaluation metric is proposed. Building on this, a multi-stage LiDAR image processing and optimization method based on a virtual camera is proposed. Multiple image matching networks are then employed to estimate 2D-2D correspondences between the LiDAR images and camera images, yielding 3D-2D correspondences. A progressive reprojection error minimization algorithm is subsequently proposed to optimize the extrinsic parameters based on these correspondences. SPTL-LCC improves LiDAR image quality within a few iterations, enhancing correspondence estimation accuracy and thereby achieving coarse-to-fine calibration. Extensive experiments on four self-recorded datasets show that SPTL-LCC outperforms state-of-the-art methods in robustness, accuracy, and practicality, with average errors below 0.03 m in translation and 0.4° in rotation. Moreover, experiments on the KITTI dataset show that SPTL-LCC achieves performance comparable to end-to-end methods trained on that dataset, which otherwise generalize poorly.
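For intuition, below is a minimal Python sketch of the two geometric steps named in the abstract: rendering a (densified) point cloud into a virtual-camera LiDAR image plane, and refining the extrinsics by progressively minimizing the reprojection error of 3D-2D correspondences. The function names, the inlier-threshold schedule, and the use of OpenCV's solvePnPRansac as the per-stage solver are illustrative assumptions, not the paper's actual method; the matching networks, the accumulation metric, and the paper's progressive optimizer are not reproduced here.

```python
# Illustrative sketch only: virtual-camera projection plus a coarse-to-fine
# reprojection-error refinement. All names and thresholds are assumptions.
import numpy as np
import cv2


def project_to_virtual_camera(points_lidar, T_cam_lidar, K, image_size):
    """Render LiDAR points into a virtual camera to get pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame.
    T_cam_lidar:  (4, 4) extrinsic guess mapping LiDAR -> camera frame.
    K:            (3, 3) intrinsics of the (virtual) camera.
    image_size:   (width, height) of the virtual image plane.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1            # keep points in front of the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division
    w, h = image_size
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return pts_cam[in_front][valid], uv[valid]


def refine_extrinsics(pts3d, pts2d, K, stages=(8.0, 4.0, 2.0)):
    """Coarse-to-fine refinement: re-solve PnP while tightening the RANSAC
    inlier threshold (a stand-in for the paper's progressive reprojection
    error minimization). pts3d/pts2d are the 3D-2D correspondences that, in
    SPTL-LCC, would come from the image matching networks."""
    rvec, tvec = None, None
    for thresh in stages:
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
            rvec=rvec, tvec=tvec, useExtrinsicGuess=rvec is not None,
            reprojectionError=thresh, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok or inliers is None:
            raise RuntimeError("PnP failed at threshold %.1f px" % thresh)
        # Keep only inliers for the next, stricter stage.
        pts3d, pts2d = pts3d[inliers[:, 0]], pts2d[inliers[:, 0]]
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                                   # refined LiDAR -> camera extrinsic
```

The tightening schedule mirrors the coarse-to-fine idea: early stages tolerate the large reprojection errors of a rough initial extrinsic, and later stages keep only increasingly accurate correspondences as the LiDAR image, and hence the matches, improve.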