The data fusion of LiDAR and camera holds vast application prospects, and calibration is a crucial prerequisite. In this paper, we propose SPTG-LCC, a novel, general, and target-free LiDAR-camera extrinsic calibration framework. On the one hand, SPTG-LCC is open-source, making it well suited for practitioners who need a robust, general, and convenient target-free calibration tool. On the other hand, four diverse datasets are released as open source, so that researchers can comprehensively evaluate feature-based target-free LiDAR-camera calibration methods.
Video Link: Video on Youtube
The following tasks will be completed step by step:
- Video Link: Video on Youtube Completed
- Paper Link:
Our self-assembled sensor suites are shown as follows; the cameras are the Realsense D455 and ZED 2i. Four diverse datasets were collected using these four suites, named FB-LCC-NS360, FB-LCC-NS70, FB-LCC-RS16, and FB-LCC-MEMS-M1, and are released for evaluating feature-based LiDAR-camera calibration methods. Moreover, sequence 00 of the public KITTI odometry benchmark is evenly divided into 67 LiDAR-camera data pairs as an additional dataset, named FB-LCC-RS-KITTI-VLP-64.
- Reference values of extrinsic parameters of the sensor suite in the dataset: Reference_calibration matrix.yaml Completed
- FB-LCC-NS360 : Baidu Cloud Disk Completed
- FB-LCC-NS70 : Baidu Cloud Disk Completed
- FB-LCC-RS16 : Baidu Cloud Disk Completed
- FB-LCC-MEMS-M1 : Baidu Cloud Disk Completed
- FB-LCC-RS-KITTI-VLP-64 : Baidu Cloud Disk Completed
- Docker image tool: Completed
- Main code: Completed
- Test code
- Optional motion-based initial guess: Completed
- Simplified code
- Simplified Docker image
- Parameter description
git clone https://github.com/NKU-MobFly-Robotics/SPTG-LCC.git
Assume the code is cloned locally into /home/wyw/SPTG-LCC.
Check where you actually cloned the code and replace this example path (/home/wyw/SPTG-LCC) in all of the following commands.
Docker images: Baidu Cloud Disk
docker load -i sptg-lcc.tar
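To confirm that the image was loaded, you can list the local Docker images and look for the sptglcc:latest tag used in the run command below:
sudo docker images | grep sptglcc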
sudo docker run -it \
  -v /home/wyw/SPTG-LCC:/calib_data \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  --net=host \
  -e GDK_SCALE -e GDK_DPI_SCALE \
  --privileged \
  --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all \
  --name SPTG-LCC \
  sptglcc:latest bash
sudo docker start SPTG-LCC
sudo docker exec -it SPTG-LCC bash
cd /calib_data/direct_lidar_camera
conda deactivate
source /opt/ros/noetic/setup.bash
catkin_make
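After the build finishes, source the generated workspace setup file before running nodes from this workspace manually (the devel path below follows from the catkin_make step above; the provided calibration scripts may already handle this for you):
source /calib_data/direct_lidar_camera/devel/setup.bash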
Download the weight files and put them into the following folders respectively.
cd /home/wyw/SPTG-LCC/matcher/Efficinet_LOFTR/EfficientLoFTR && mkdir weights
EfficientLoFTR weights: Baidu Cloud Disk
cd /home/wyw/SPTG-LCC/matcher/LightGlue
superpoint: Baidu Cloud Disk superpoint_lightglue Baidu Cloud Disk
cd /home/wyw/SPTG-LCC/mono_depth/Marigold/
mono_depth weights(The entire folder): Baidu Cloud Disk
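For reference, after downloading, the weights should end up under the three folders mentioned above (the exact file names depend on what the cloud links contain). A quick sanity check that the folders are populated:
ls /home/wyw/SPTG-LCC/matcher/Efficinet_LOFTR/EfficientLoFTR/weights
ls /home/wyw/SPTG-LCC/matcher/LightGlue
ls /home/wyw/SPTG-LCC/mono_depth/Marigold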
cd /home/wyw/SPTG-LCC/SPTG-LCC/bag
The rosbag must contain the camera image topic, the camera intrinsics topic, and the LiDAR point cloud topic.
Modify your topic names in the YAML files in the config folder:
/home/wyw/SPTG-LCC/direct_lidar_camera/src/direct_visual_lidar_calibration/config
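If you are unsure which topic names your recording contains, you can inspect the bag first, for example inside the container where ROS Noetic is available (the bag file name below is only a placeholder), and then copy the image, camera-info, and point cloud topic names into the YAML files above:
rosbag info /calib_data/SPTG-LCC/bag/example.bag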
Rosbag example for testing : Baidu Cloud Disk
cd /home/wyw/SPTG-LCC
chmod +x LiDAR_Camera_calib.sh
./LiDAR_Camera_calib.sh
The final calibration result is the latest .txt file in the results folder:
cd /home/wyw/SPTG-LCC/SPTG-LCC/results
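A convenient way to print the newest result from this folder (a simple one-liner that just picks the most recently modified .txt file):
cat "$(ls -t *.txt | head -n 1)"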
chmod +x Alignment_Effect.sh
./Alignment_Effect.sh
cd /home/wyw/SPTG-LCC/SPTG-LCC/data
In addition, to deal with calibration failures in some extreme sensor installation configurations, we have released a motion-based LiDAR-camera extrinsic coarse-calibration tool. It provides a rough initial guess fully automatically, and users can then run SPTG-LCC from this initial guess to complete the calibration.
The GitHub repository link is as follows: LCC_init: https://github.com/af-doom/LCC_init
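To fetch the coarse-calibration tool (see that repository for its own usage instructions):
git clone https://github.com/af-doom/LCC_init.git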
All required C++ and Python libraries are packaged into the Docker image. You only need an Ubuntu system (no specific version required) with the NVIDIA driver installed (you can test this with the nvidia-smi command).
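For example, the following should print a table of detected GPUs if the driver is installed correctly:
nvidia-smi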
If Docker is not installed on the host, you need to install Docker and the NVIDIA Container Toolkit so that the container can display GUIs and access the GPU. The following resources may help:
wget http://fishros.com/install -O fishros && . fishros
https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
https://blog.csdn.net/dw14132124/article/details/140534628
Successful execution of the following command indicates a successful installation:
sudo apt-get install -y nvidia-container-toolkit
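After installation, the NVIDIA Container Toolkit guide also recommends configuring the Docker runtime and verifying GPU access from inside a container; a sketch of these optional checks (the CUDA image tag is only an example):
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi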
- We sincerely appreciate the following open-source projects: DVLC, KITTI, LightGlue, Efficient-LoFTR, Marigold, SuperPoint.
- In particular, our code framework is based on DVLC (direct_visual_lidar_calibration); many thanks for this great open-source work.