FAEL is a systematic framework for Fast Autonomous Exploration in Large-scale environments. FAEL includes a fast preprocessing of environmental information that provides the fundamental data needed to support high-frequency path planning. A path optimization formulation is introduced that comprehensively considers the key factors of fast exploration. Further, a heuristic algorithm is devised to solve the NP-hard optimization problem, which empirically finds the optimal solution in real time. Our method is demonstrated to complete exploration in the least time and with the shortest movement distance compared with state-of-the-art methods at the time of publication.
Real-world experiment video: video
Contributors: Junlong Huang, Boyu Zhou, Zhengping Fan, Yilin Zhu, Yingrui Jie, Longwei Li and Hui Cheng from SYSU RAPID Lab.
Related Papers:
- Junlong Huang, Boyu Zhou, Zhengping Fan, Yilin Zhu, Yingrui Jie, Longwei Li, Hui Cheng*, FAEL: Fast Autonomous Exploration for Large-Scale Environments with a Mobile Robot, IEEE Robotics and Automation Letters, vol. 8, no. 3, pp. 1667-1674, March 2023
Please cite our paper if you use this project in your research:
@article{10015689,
author={Huang, Junlong and Zhou, Boyu and Fan, Zhengping and Zhu, Yilin and Jie, Yingrui and Li, Longwei and Cheng, Hui},
journal={IEEE Robotics and Automation Letters},
title={FAEL: Fast Autonomous Exploration for Large-scale Environments With a Mobile Robot},
year={2023},
volume={8},
number={3},
pages={1667-1674},
doi={10.1109/LRA.2023.3236573}}
- This project has been tested on Ubuntu 20.04 (ROS Noetic). Run the following command to install the required tools; a quick check of your ROS environment is sketched after it:
sudo apt-get install ros-noetic-navigation \
ros-noetic-gazebo-* \
ros-noetic-gazebo-ros-control* \
ros-noetic-controller-* \
ros-noetic-tf2-* \
ros-noetic-octomap-* \
ros-noetic-velodyne-* \
ros-noetic-pointgrey-camera-description \
ros-noetic-twist-mux \
ros-noetic-teleop-twist-joy \
ros-noetic-lms1xx \
ros-noetic-interactive-marker-twist-server \
libgoogle-glog-dev \
libignition-common3-graphics-dev \
libignition-common3-profiler-dev \
python3-tk
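You can quickly confirm that ROS Noetic itself is installed and sourced; a minimal check, assuming the default install location /opt/ros/noetic:
source /opt/ros/noetic/setup.bash
rosversion -d    # should print "noetic"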
- Before running FAEL, we recommend downloading gazebo_models in advance and putting them in the directory ~/.gazebo/models.
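A minimal sketch for fetching the models is given below, assuming you use the osrf/gazebo_models repository on GitHub (any other copy of the model database works the same way):
git clone https://github.com/osrf/gazebo_models.git /tmp/gazebo_models
mkdir -p ~/.gazebo/models
cp -r /tmp/gazebo_models/* ~/.gazebo/models/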
Then simply clone and compile our package (using SSH here; a full from-scratch sequence is sketched after these commands):
cd ${YOUR_WORKSPACE_PATH}/src
git clone [email protected]:SYSU-RoboticsLab/FAEL.git
cd ../
catkin_make
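If you do not have a catkin workspace yet, the full from-scratch sequence might look like the following (the workspace path ~/fael_ws is only an example):
mkdir -p ~/fael_ws/src && cd ~/fael_ws/src
git clone [email protected]:SYSU-RoboticsLab/FAEL.git
cd .. && catkin_make
source devel/setup.bash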
After compilation, you can start a sample exploration demo. Open three terminals and run the following three commands in sequence; a scripted variant is sketched after the description below.
- Open the simulation environment in the first terminal:
source devel/setup.bash && roslaunch exploration_manager sim_env.launch
- Run the local_planner in the second terminal:
source devel/setup.bash && roslaunch exploration_manager robot_move.launch
- Start the exploration in the third terminal:
source devel/setup.bash && roslaunch exploration_manager explorer.launch
By default, you will see the outdoor environment scene_1.world. The mobile robot starts exploring automatically once explorer.launch is run.
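If you prefer, the three launch steps can also be scripted from the workspace root; a rough sketch is below (the sleep durations are guesses and may need tuning for your machine):
source devel/setup.bash
roslaunch exploration_manager sim_env.launch &
sleep 10   # wait for Gazebo to come up
roslaunch exploration_manager robot_move.launch &
sleep 5    # wait for the local planner
roslaunch exploration_manager explorer.launch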
The exploration environments in our simulator are represented by *.world files. We provide several simulation environments in the sim_env package, which can be selected in sim_env.launch; you also need to set the corresponding map_area parameter in visualization_tools.launch. The correspondence is as follows (an example of switching scenes follows the list):
- scene_1.world : 4582
- scene_2.world : 4146
- scene_3.world : 9601
- scene_4.world : 12129
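For example, to switch from scene_1 to scene_2 you would select scene_2.world in sim_env.launch and set map_area to 4146 in visualization_tools.launch. One way to locate these settings from the workspace root (the package path src/FAEL is an assumption based on the clone step above):
grep -rn "scene_1.world" src/FAEL/    # world selection in sim_env.launch
grep -rn "map_area" src/FAEL/         # matching parameter in visualization_tools.launch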
Note: If you encounter any other problems, please point them out and we will do our best to address them.
If the planned path looks obviously abnormal and the exploration efficiency drops significantly compared with the results in our paper, the cause is most likely degradation of the point cloud. In this case, we recommend using a discrete graphics card (or a more powerful one).
Note: We assume your computer has two graphics cards (an integrated one and a discrete one).
Since the simulated Velodyne VLP-16 LiDAR uses the GPU, running on the integrated graphics card under Ubuntu degrades the LiDAR point cloud and causes exploration to fail. To avoid this, we recommend switching to the discrete graphics card. Here we take an Intel integrated card and an Nvidia discrete card as an example.
- Install Nvidia Graphics Card Driver
Open up a new terminal window and enter the following command to see which driver is recommended for your specific card.
ubuntu-drivers devices
Taking nvidia-driver-470 as an example, install it with:
sudo apt-get install nvidia-driver-470
- Switch to Nvidia Graphics Card
sudo prime-select nvidia
sudo reboot
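After rebooting, you can verify that the discrete card is active, e.g.:
prime-select query    # should print "nvidia"
nvidia-smi            # should list the discrete GPU and driver version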
We use jackal for ground robot simulation, RotorS for generic odometry sensor simulation, cmu_planner for local_planner and terrain_analysis, and ufomap for 3D environment representation. We really appreciate these open source projects!