
This is the official implementation of the paper 𝛼LiDAR: An Adaptive High-Resolution Panoramic LiDAR System (MobiCom 2024).

𝛼LiDAR: An Adaptive High-Resolution Panoramic LiDAR System

The performance of current LiDAR sensors is hindered by limited field of view (FOV), low resolution, and lack of flexible focusing capability. We introduce 𝛼LiDAR, an innovative LiDAR system that employs controllable actuation to provide a panoramic FOV, high resolution, and adaptable scanning focus. See our demos below $\color{red}{\textbf{(3.7k+ views)}}$:

Demo Video (YouTube)

alpha_lidar_demo_video

System Overview

The core concept of 𝛼LiDAR is to expand the operational freedom of a LiDAR sensor through the incorporation of a controllable, active rotational mechanism. This modification allows the sensor to scan previously inaccessible blind spots and focus on specific areas of interest in an adaptive manner. A major challenge with 𝛼LiDAR is that rapid rotations result in highly distorted point clouds. Our solution focuses on accurately estimating the LiDAR sensor’s pose during rapid rotations to effectively correct and reconstruct the point cloud.
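At a high level, this relies on standard point-cloud motion compensation (deskewing). As an illustrative sketch in our own notation (not the paper's): if a raw point $p_i$ is measured at time $t_i$ in the instantaneous sensor frame, and $T(t_i) \in SE(3)$ is the estimated sensor pose at that instant (interpolated from the IMU/encoder/LiDAR state estimate), then the undistorted point in a common frame is $\hat{p}_i = T(t_i)\, p_i$. The quality of the recovered point cloud therefore hinges on how accurately $T(t_i)$ can be estimated during rapid rotation.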

(Figure: Teaser)

Table of Contents

This repository contains the hardware specifications, software, and datasets for reproducing and evaluating 𝛼LiDAR:

How to Use This Repo:

We offer two approaches for reproducing 𝛼LiDAR:

Approach 1: Build the hardware from scratch, and then test 𝛼LiDAR's algorithm.

We provide a detailed hardware guideline, including design files and production parameters for mechanical and electronic parts, along with a step-by-step replication tutorial. Once the hardware is set up, users can collect datasets and test 𝛼LiDAR in real-time and on-site. To begin, please refer to the ⚙️ I. Hardware guideline and follow the instructions in the specified order.

Approach 2: Directly test 𝛼LiDAR's core algorithm with pre-collected datasets.

We also provide datasets pre-collected with 𝛼LiDAR's hardware. These datasets allow users to directly test the performance of 𝛼LiDAR's core software components. For this approach, please refer directly to 💽 II. Prepare code and datasets and 📜 III. Software guideline.

⚙️ I. Hardware Guideline

𝛼LiDAR enhances its sensing ability by incorporating an active rotation mechanism along the FoV-limited directions at the physical layer, so implementing the complete hardware system is important. In this section, we demonstrate the assembly of the hardware in detail, including the setup of the sensors, mechanical structures, and electronic components. We provide a comprehensive bill of materials, 3D printing instructions, PCB design and manufacturing details, and firmware setup guides to ensure easy replication of 𝛼LiDAR. Following this guide, users can reproduce the full 𝛼LiDAR hardware system, as shown below, for data collection and performance validation.

(Figure: Teaser)

1. Bill of Materials

This section provides all materials required to replicate 𝛼LiDAR, including sensors, mechanical parts, electronic components, etc. Please prepare these materials before building 𝛼LiDAR.

Starting with sensors and actuators, 𝛼LiDAR requires a LiDAR with PPS/GPRMC time synchronization support (which is built into most LiDARs). 𝛼LiDAR also requires an IMU and an encoder for state estimation, and a gimbal motor as the actuator for the active motion. The encoder is usually integrated into the gimbal motor, as in the DJI GM6020. Specific models of these components are listed in the table below:

Click here to show the material list
| Component | Quantity | Detail | Link |
| --- | --- | --- | --- |
| Hesai Pandar XT-16 (*) | 1 | LiDAR with PPS+GPRMC sync support | link |
| Yesense YIS100 | 1 | IMU | link |
| Robomaster GM6020 | 1 | Motor / Encoder | link |

(*) Or other LiDAR sensors, e.g., Robosense RS16 / Livox MID360 / Livox Horizon.

Next, here are the mechanical and electronic parts required. Most of them can be purchased online at low cost. Purchase links are provided where available.

Click here to show the material list
| Component | Quantity | Detail | Link |
| --- | --- | --- | --- |
| **Mechanical Parts** | | | |
| LanBo PLA+ filament | 1 | 3D printing filament | link |
| Central mounting structure | 1 | 3D printed part | .step |
| Motor mounting base | 1 | 3D printed part | .step |
| Stand | 1 | 3D printed part | .step |
| Hex socket cap screws | 18 | 7 x M3 x 10mm, 7 x M4 x 10mm, 4 x M4 x 25mm | - |
| W1/4-20 screws | 1 | - | - |
| **Electronic Parts** | | | |
| Control board | 1 | PCB | link |
| Host machine interface | 1 | PCB | link |
| STM32F103C8 Dev Board | 1 | MCU | link |
| SP3232 Module | 1 | RS232-TTL converter | link |
| TJA1050 CAN Module | 1 | CAN controller | link |
| CH340C SOP16 | 1 | USB-Serial converter | link |
| SL2.1A SOP16 | 1 | USB hub controller | link |
| 0805 chip beads | 1 | EMI protection | link |
| RJ45 Socket | 3 | Control board x1, interface board x2 | link |
| XH2.54 Socket | 5 | 8pin x1, 4pin x2, 2pin x2 | link |
| Gigabit Ethernet cable | 1 | 8-line, for data transmission | - |

2. Build the Mechanical Components

𝛼LiDAR's mechanical components include the structural parts for mounting the LiDAR, IMU, motor, and control board. Our design is optimized for 3D printing to facilitate rapid prototyping and reproduction.

2.1 Prepare the CAD Models:

First, download the following CAD models of the mechanical components. Users can preview the assembly diagram and the part files with the FreeCAD software and the A2plus plugin.

Click here to show the CAD files:
  • Assembly diagram:

hardware/Mechanical/assembly.FCStd

  • LiDAR-IMU-Motor central mounting structure (for Hesai Pandar XT16)

hardware/Mechanical/center_mounting_structure_hesai.step

  • LiDAR-IMU-Motor central mounting structure (for Robosense RS16)

hardware/Mechanical/center_mounting_structure_rs16.step

  • Motor mounting base

hardware/Mechanical/motor_mounting_base.step

  • Stand

hardware/Mechanical/stand.step

2.2 Build the Components with a 3D Printer

Users can then import the above STEP files into 3D printing slicing software, set the printing parameters to generate the .gcode files, and use a 3D printer to build the parts. For reference, we use a SnapMaker A350 3D printer and Ultimaker Cura 4.9.1 for model slicing.

Click here to show the key parameters for 3D printing:
  • Printing material: Lanbo PLA+
  • Temperature: 200°C
  • Heated bed: 60°C
  • Cooling fan: 100%
  • Nozzle diameter: 0.4mm
  • Layer height: 0.24mm
  • Wall line count: 4
  • Top-bottom layers: 4
  • Infill: 60% Gyroid
  • Support: On
  • Adhesion: On
  • Build plate adhesion type: Brim

To ensure that the printed parts have sufficient layer strength during rapid rotations, the models must be placed in a specific orientation in the slicing software, as shown below:

3D Printing Preview

3. Setup the Electronic Components

𝛼LiDAR's electronic components mainly consist of two PCBs: the control board and the host machine interface.

The control board is mounted together with the motor, LiDAR, and IMU sensor; it aggregates all sensor data onto a single RJ45 physical interface and transmits the data over an 8-line gigabit Ethernet cable.

The host machine interface splits the 8-line gigabit Ethernet cable into a 4-line 100 Mbps Ethernet connection and a USB 2.0 interface, both of which connect to the host computer.

3.1 Preview the PCB Design Files

Click here to show the PCB design files:
  • Control board PCB design file:

hardware/PCB/EasyEDA_PCB_control_board.json

  • Host machine interface PCB design file:

hardware/PCB/EasyEDA_PCB_host_machine_interface.json

The PCB design files can be imported and previewed in EasyEDA. After a successful import, the PCBs should look as shown in the following images:

PCB

3.2 Manufacture the PCBs

The PCBs can be manufactured using JLCPCB's PCB Prototype service, which is integrated into EasyEDA.

Click here to show the key parameters for PCB fabrication:
  • Base Material: FR-4
  • Layers: 2
  • Product Type: Industrial/Consumer electronics
  • PCB Thickness: 1.6mm
  • Surface Finish: HASL (with lead)
  • Outer Copper Weight: 1oz
  • Min via hole size/diameter: 0.3mm
  • Board Outline Tolerance: ±0.2mm

To assemble the MCU and other electronic components onto the PCB, use JLCPCB's SMT service or solder them by hand. The fully assembled PCB is shown in the following images:

PCB

3.3 Upload the Firmware

The firmware needs to be programmed into the STM32 MCU on the control board. It provides all the functionality necessary for the board to operate properly, including data acquisition from multiple sensors, time synchronization, protocol conversion, etc.

The hex file of the firmware: hardware/Firmware/stm32f103c8t6.hex

To program the firmware onto the MCU, we need an ST-LINK V2 programmer. Please refer to the programming process outlined in the STM32CubeProgrammer user manual.
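For reference only (this exact invocation is not part of the original instructions, and flags may differ between tool versions), flashing the hex file over an ST-LINK V2 with STM32CubeProgrammer's command-line tool typically looks like:

# connect to the MCU over SWD through the ST-LINK, write the firmware image, and verify it
STM32_Programmer_CLI -c port=SWD -w hardware/Firmware/stm32f103c8t6.hex -v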

3.4 Wire Everything Together

After preparing all the required sensors, actuators, and the assembled PCB, connect the components according to the wiring diagram below. The diagram includes the pin definitions for each sensor's XH2.54 interface, arranged in the same order as they appear on the actual PCB. The entire PCB (and the sensors) can be powered from a 24V DC input.

Wire

💽 II. Prepare Code and Datasets

We provide multiple pre-collected datasets for testing 𝛼LiDAR's performance if the hardware is not available. The datasets can be downloaded from Mega Drive or Baidu Pan (Code: u0tr).

Dataset description:

| Test dataset | Description | Scene image and map |
| --- | --- | --- |
| alpha_lidar_large_indoor.bag | Indoor environment with narrow corridors and reflective floors | data |
| alpha_lidar_various_scene.bag | Hybrid indoor and outdoor environment | data |
| alpha_lidar_15floors_staircase.bag | 15-floor narrow stairwell | data |

To use our code and datasets, first, clone this repository:

git clone https://github.com/HViktorTsoi/alpha_lidar.git

Download the datasets and save them to this path:

${path_to_alpha_lidar}/datasets

${path_to_alpha_lidar} is the path to the source code you just cloned.
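For example (a minimal sketch assuming the files were downloaded to ~/Downloads; adjust the paths to match your setup):

mkdir -p ${path_to_alpha_lidar}/datasets
mv ~/Downloads/alpha_lidar_*.bag ~/Downloads/alpha_lidar_*.gt ${path_to_alpha_lidar}/datasets/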

After downloading, the alpha_lidar/datasets directory should look like this:

|-- datasets
|---- alpha_lidar_15floors_staircase.bag
|---- alpha_lidar_large_indoor.bag
|---- alpha_lidar_large_indoor.fusion.gt
|---- alpha_lidar_various_scene.bag
|---- alpha_lidar_various_scene.f9p.gt

The *.bag files store the raw data (LiDAR point cloud, IMU, and encoder messages), while the corresponding *.gt files store the ground truth data.
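To sanity-check a download before running anything, the bag files can be inspected with the standard rosbag tool (shown for one bag as an example; this requires a sourced ROS environment, e.g. inside the Docker container described below, and the listed topics and message counts depend on the recording):

rosbag info ${path_to_alpha_lidar}/datasets/alpha_lidar_large_indoor.bag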

📜 III. Software Guideline

A step-by-step video tutorial for this section is available at https://www.youtube.com/watch?v=jFgedPY6zIM

In this section, we demonstrate how to run and evaluate 𝛼LiDAR's core software module, which addresses 𝛼LiDAR's main challenges: accurately estimating the LiDAR's poses and recovering undistorted LiDAR measurements under the rapid motion of both the LiDAR and the carrier.

We offer two methods for running the 𝛼LiDAR code: running with Docker (recommended) and building the source code from scratch.

Run with Docker (Recommended)

Prerequisites

1. Setup Docker Environment

First, pull our preconfigured environment from Docker Hub:

docker pull hviktortsoi/ubuntu2004_cuda_ros:latest

Then, enter the docker directory in the source code

cd ${path_to_alpha_lidar}/software/docker

${path_to_alpha_lidar} is the path where the source code was just cloned.

Before starting the container, configure xhost on the host machine:

sudo xhost +si:localuser:root

Then launch and enter the docker container:

sudo docker-compose run alpha-lidar bash

2. Run and Evaluate

The following steps are all executed in the bash terminal inside the docker container.

2.1 Run αLiDAR

Taking the alpha_lidar_large_indoor.bag dataset as an example, launch αLiDAR's state estimation module by executing:

roslaunch state_estimation mapping_robosense.launch bag_path:=/datasets/alpha_lidar_large_indoor.bag

After launching, press the space key in the bash terminal to begin data playback.

If everything is working smoothly, two RVIZ GUI windows will show up:

The first RVIZ window shows the visualization of αLiDAR's point cloud maps and estimated poses.

result

The second, smaller window shows the comparison result, which naively stacks the raw point clouds without αLiDAR's pipeline.

result

Additionally, the bash terminal displays debug information such as the data playback time, real-time latency, etc.

In the RVIZ GUI, users can left-click, scroll the middle wheel, and middle-click to move the viewpoint and observe a more comprehensive point cloud map.

During visualization, if the point cloud view is lost, press the z key in the RVIZ GUI to reset the viewpoint; if the point clouds are not clearly visible, try increasing the Size (m) parameter (e.g., to 0.05) in the left configuration panel.

config

2.2 Evaluate αLiDAR's Performance

After completing data playback, press CTRL+C in the bash terminal to exit state estimation.

To evaluate αLiDAR's performance, execute:

rosrun state_estimation evaluation.py --gt_path /datasets/alpha_lidar_large_indoor.fusion.gt

It shows the evaluation results of trajectory precision, latency, FoV coverage, etc.

result

2.3 Other Datasets

For the alpha_lidar_15floors_staircase.bag dataset, execute the following commands to run and evaluate:

# run
roslaunch state_estimation mapping_robosense.launch bag_path:=/datasets/alpha_lidar_15floors_staircase.bag
# evaluate
rosrun state_estimation evaluation.py 

For the alpha_lidar_various_scene.bag dataset, execute the following commands to run and evaluate; note that a different .launch file is used here:

# run
roslaunch state_estimation mapping_hesai.launch bag_path:=/datasets/alpha_lidar_various_scene.bag
# evaluate
rosrun state_estimation evaluation.py --gt_path /datasets/alpha_lidar_various_scene.f9p.gt

Build Source from Scratch

Prerequisites

System-level requirements:

Remarks: The livox_ros_driver must be installed and sourced before running any of 𝛼LiDAR's state_estimation launch files.

Remarks: How to source it? The easiest way is to add the line source $Livox_ros_driver_dir$/devel/setup.bash to the end of ~/.bashrc, where $Livox_ros_driver_dir$ is the directory of the livox_ros_driver workspace (this should be the ws_livox directory if you followed the official Livox documentation).
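For example, assuming the driver workspace is at ~/ws_livox (adjust the path if yours differs):

# append the workspace setup script to ~/.bashrc and reload it
echo "source ~/ws_livox/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc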

Python requirements:

  • Python 3.7 or later
  • evo
  • opencv-python
  • ros_numpy
  • transforms3d
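The pure-Python dependencies can be installed with pip, for example (a sketch; ros_numpy is a ROS package and, depending on your setup, may instead need to be installed via apt, e.g. ros-noetic-ros-numpy on ROS Noetic):

pip3 install evo opencv-python transforms3d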

1. Build

Enter the catkin workspace directory:

cd ${path_to_alpha_lidar}/software/alpha_lidar_ws

${path_to_alpha_lidar} is the path where the source code was just cloned.

Then compile the code:

catkin_make -DCATKIN_WHITELIST_PACKAGES="state_estimation"

and source the workspace:

source ./devel/setup.bash

2. Run and Evaluate

This process follows the same steps as outlined in Run with Docker (Recommended); please refer to section 2. Run and Evaluate there for detailed instructions.

License

This repository is released under the MIT license. See LICENSE for additional details.