# NvBlox

NvBlox package for ROS 2 Humble-based online mapping, for the Agile Robotics and Perception Lab (ARPL).

**NOTE:** This package is designed to interface with the whole ARPL mapping pipeline (OpenVINS + disparity + PointCloud_Manager + NvBlox). As a result, all the config and launch files for this package are located in the `arpl_autonomy_stack` package.

## Prerequisites

  1. ROS 2 Humble
  2. CUDA
  3. NVIDIA VPI
  4. CMake version >= 3.22
  5. Disparity
  6. PointCloud Manager

## Common issues while building

### 1) CMake version is too low

At the time of this writing, the lab uses JetPack 5.1.2, which installs CMake 3.16.3 by default, below the required version 3.22. Check the installed CMake version by running:

```bash
cmake --version
```
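If the version is older than 3.22, install a newer CMake. A minimal sketch of one way to do this, assuming you are fine with a pip-installed CMake instead of the apt one:

```bash
# Install a recent CMake from PyPI (lands in ~/.local/bin by default)
python3 -m pip install --user --upgrade cmake

# Make sure ~/.local/bin comes first on PATH, then verify
export PATH=$HOME/.local/bin:$PATH
cmake --version
```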

### 2) The CUDA compiler identification is unknown / Failed to detect a default CUDA architecture

This occurs because CMake could not find the NVIDIA CUDA compiler (`nvcc`).

  1. Make sure you have installed CUDA. You can check whether it is installed by running:

```bash
jtop
```

and navigating to the INFO section, where the CUDA version is displayed. If the CUDA version is not displayed and instead says NO, CUDA is not installed. To install CUDA and VPI, run:

```bash
sudo apt-get install nvidia-jetpack
```

This will install both CUDA and VPI libraries on the Jetson.
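(If `jtop` itself is missing, it is provided by the jetson-stats Python package:

```bash
# jtop ships with jetson-stats; a reboot or re-login may be needed afterwards
sudo pip3 install -U jetson-stats
```
)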

  2. Make sure the `PATH` environment variable contains the path to where the CUDA compiler (`nvcc`) is installed. You can check this by running:

```bash
nvcc --version
```

If the command returns `bash: nvcc: command not found`, the `PATH` variable does not contain the path to the CUDA compiler. It is usually installed in `/usr/local/<your_cuda_version>/bin`. Find the location of `nvcc` and add it to the `PATH` variable in your bash script:

```bash
export PATH=/usr/local/<your_cuda_version>/bin:$PATH
```

Source the bash script and run `nvcc --version` again; it should now display the version of your `nvcc` compiler.
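For example, to make the change persistent across shells (the `cuda-11.4` path below is an assumption based on the CUDA version JetPack 5.x typically ships; substitute whatever `/usr/local/cuda-*` directory exists on your system):

```bash
# Append the CUDA bin directory to PATH via ~/.bashrc, then verify
echo 'export PATH=/usr/local/cuda-11.4/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version
```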

### 3) `CMAKE_CUDA_ARCHITECTURES` must be non-empty if set

This is related to the previous error: CMake caches the failed compiler detection. After resolving the previous issue, clean the build and install directories of any nvblox packages and build again, as shown below.
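A minimal sketch, assuming a standard colcon workspace layout (adjust the glob if your package names differ):

```bash
# Remove stale CMake caches for the nvblox packages, then rebuild
rm -rf build/nvblox* install/nvblox*
colcon build
```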

### 4) Nav2 not found

NvBlox interfaces with ROS 2's navigation stack, Nav2. Since we don't use Nav2, place a `COLCON_IGNORE` file inside the `nvblox_nav2` package so colcon skips it (this should already be present in this repo).
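If it is missing, it can be recreated with a single command (the path assumes the repo is checked out under `src/` in your workspace):

```bash
# COLCON_IGNORE is an empty marker file that tells colcon to skip the package
touch src/nvblox/nvblox_nav2/COLCON_IGNORE
```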

## How to Run

  1. Launch the localization pipeline (e.g. OpenVINS) and make sure it is publishing the tf2 transform from `world` to the drone body frame.
  2. Launch the controller pipeline. The mapping pipeline does not need the controller to run; however, in the current stack the controller pipeline launches `tf2.launch.py`, which defines important static transforms required by NvBlox. If you do not want to launch the controller, you can launch the script directly from `arpl_autonomy/launch/race_platform/tf2.launch.py`.
  3. Launch the mapping pipeline: `ros2 launch arpl_autonomy mapping.launch.py name:=$MAV_NAME`, where `$MAV_NAME` is the name of your drone.
  4. Run RViz2 on your station and subscribe to the `$MAV_NAME/nvblox_node/static_occupancy` topic. Make sure the fixed frame in RViz2 is `world` (or whatever you have set in the nvblox launch file, located at `arpl_autonomy/launch/race_platform/perception/mapping/mapping.launch.py`). NvBlox publishes the occupancy map as a `PointCloud2` message, so for best viewing results make sure the size of each point matches the voxel size set in `nvblox_base.yaml` (default 0.05 m) and set Style to 'Boxes'. You can also set the Color Transformer to 'AxisColor'.
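If nothing shows up in RViz2, a quick sanity check from the station (the topic name assumes the namespacing above):

```bash
# Confirm the occupancy pointcloud is actually being published
ros2 topic hz /$MAV_NAME/nvblox_node/static_occupancy
```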

## Issues while running

If NvBlox doesn't output anything, it's usually one of these things (for items 1-3, see the diagnostic sketch after the list):

  1. Incorrect or incomplete TF trees: Make sure the `pose_frame` and `global_frame` parameters are set properly and that a valid transformation exists between the two frames. Vanilla NvBlox typically uses three frames: the global and pose frames set by the user, and the sensor frame, which it gets from the `image_depth` topic. Our version of NvBlox does not use the sensor frame and instead relies solely on the two frames set by the user in the launch file.
  2. Incorrect depth topic or bad depth image: NvBlox uses the depth map (usually generated by the RealSense depth module, but in our case by the disparity nodelet) to create the 3D environment. Make sure NvBlox is receiving the depth maps and that they match the expected format: an image with encoding `CV_16UC1` (unsigned 16-bit, single channel), with values ranging from 0 (indicating very close or invalid readings) to 65535 (the maximum possible distance).
  3. Incorrect `camera_info_topic`: NvBlox also uses the metadata from the `/cam1/infra1/camera_info` topic to construct the 3D scene. Make sure this topic is sending data to NvBlox.
  4. As of this writing, the full mapping pipeline (OpenVINS + Disparity + PointCloud Manager + NvBlox) has only been shown to work on the NVIDIA Jetson Orin NX, NOT on the Jetson Xavier NX. Testing on the Xavier NX shows the pipeline is too heavy, both in computation and in the number of messages published and subscribed to: after a while of working, either ROS 2 starts dropping messages or the Xavier freezes under the load.
  5. When testing the full mapping pipeline with ROS 2 over WiFi, the ping increases dramatically as the drone moves away from the WiFi router/station. This seems to be because NvBlox messages are very heavy and ROS 2's communication protocols are insufficient to handle them. The present solution is to reduce the publish rate of the static occupancy map and to increase the voxel size, reducing the amount of data being published; both parameters are located in `arpl_autonomy_stack/config/race/default/perception/nvblox_base.yaml`. In the future, perhaps using Zenoh to handle communication could solve the issue.
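A few quick checks for items 1-3 (the depth topic name below is a placeholder; use the one from your launch files):

```bash
# 1. Dump the TF tree to a PDF and check that pose_frame and global_frame
#    are connected (writes frames_<timestamp>.pdf in the current directory)
ros2 run tf2_tools view_frames

# 2. Check that depth images are arriving and inspect their encoding
ros2 topic hz /depth/image
ros2 topic echo /depth/image --once --field encoding

# 3. Check that camera_info is arriving
ros2 topic echo /cam1/infra1/camera_info --once
```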