Official Implementation of "Revisiting Out-of-Distribution Detection in LiDAR-based 3D Object Detection" by Michael Kösel, Marcel Schreiber, Michael Ulrich, Claudius Gläser, and Klaus Dietmayer.
conda create -n mmood3d python=3.8 -y
conda activate mmood3d
Please make sure to have CUDA 11.1 installed and in your PATH.
# install pytorch
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
# install openmim, used for installing mm packages
pip install -U openmim
# install mm packages
mim install "mmengine" "mmcv==2.1.0" "mmdet==3.2.0" "mmdet3d==1.4.0"
# workaround for issues with tensorboard
pip install setuptools==59.5.0 Pillow==9.5.0
Assuming your terminal is in the mmood3d directory:
pip install -v -e .
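As a quick sanity check (a minimal sketch, not part of the official setup), you can verify that PyTorch sees CUDA and that the mm packages import:

```python
# Minimal environment sanity check.
import torch
import mmdet3d

print(torch.__version__)          # expect 1.9.0+cu111
print(torch.cuda.is_available())  # expect True with CUDA 11.1 set up
print(mmdet3d.__version__)        # expect 1.4.0
```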
1. Download [nuScenes](https://www.nuscenes.org/download).
2. Copy or softlink the files into the `data/` directory. The structure of the data directory should be as follows:
data
├── nuscenes
│   ├── v1.0-trainval (nuScenes files)
│   ├── sweeps (nuScenes files)
│   ├── samples (nuScenes files)
│   ├── nuscenes_gt_database (See step 3)
│   ├── nuscenes_dbinfos_train.pkl (See step 3)
│   ├── nuscenes_infos_train.pkl (See step 3)
│   └── nuscenes_infos_val.pkl (See step 3)
3. Generate the annotation files. This will put the annotation files into the `data/` directory by default. This process can take a while. Note that we use a custom version of create_data.py, so data generated by the official mmdetection3d code is not compatible.
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
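To sanity-check the generated files, you can load one of the info pickles. The sketch below assumes the mmdetection3d 1.x info format; the custom create_data.py may deviate from it:

```python
# Rough sketch: inspect a generated annotation file.
# Assumes the mmdetection3d 1.x info format ('metainfo' + 'data_list');
# the custom create_data.py used here may deviate from it.
import pickle

with open('data/nuscenes/nuscenes_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

if isinstance(infos, dict):
    print(infos.keys())
    print(len(infos.get('data_list', [])), 'training samples')
else:
    print(type(infos), len(infos))
```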
We provide weights for the CenterPoint detector trained on nuScenes with frames containing OOD objects removed. Download from here and place the checkpoint into the `checkpoints/` directory.
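To verify the download, the checkpoint can be inspected with plain PyTorch. The filename below is a placeholder for the actual name of the downloaded file:

```python
# Sketch: inspect the downloaded checkpoint (filename is a placeholder).
import torch

ckpt = torch.load('checkpoints/base_centerpoint.pth', map_location='cpu')
print(ckpt.keys())  # mmengine checkpoints typically contain 'meta' and 'state_dict'
print(len(ckpt.get('state_dict', {})), 'parameter tensors')
```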
We provide the configurations used in the paper in the `mmood3d/configs` directory. For example, to train the full version:
# single gpu
python tools/train.py mmood3d/configs/ood/ood.py
# multiple gpu (replace "num_gpu" with the number of available GPUs) - 8 GPUs are recommended.
./tools/dist_train.sh mmood3d/configs/ood/ood.py num_gpu
To reproduce the results of the paper, please use 8 GPUs so that the learning rate remains unchanged.
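If fewer GPUs are available, one possible workaround is mmengine's linear learning-rate scaling. The override below is an assumption based on standard mmdetection3d conventions (the `8xb4` in the config names suggests a base total batch size of 8 GPUs × 4 samples = 32); results may differ from the paper:

```python
# Possible addition to a training config (assumption, mmengine convention):
# rescale the learning rate by (actual total batch size / base_batch_size).
auto_scale_lr = dict(enable=True, base_batch_size=32)
```

If the standard mmdetection3d tools/train.py is used, the same behavior can usually be enabled via its --auto-scale-lr flag.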
You need to retrain the base detector if you use a different dataset or change the OOD class settings.
We provide two configurations for CenterPoint trained on nuScenes. The `full_eval` configuration does not exclude OOD frames from the validation set, following the default nuScenes settings.
# single gpu
python tools/train.py mmood3d/configs/ood/centerpoint/base_centerpoint_voxel01_second_secfpn_8xb4_cyclic_20e_nus_3d_known_full_eval.py
# multiple gpu (replace "num_gpu" with the number of available GPUs) - 8 GPUs are recommended.
./tools/dist_train.sh mmood3d/configs/ood/centerpoint/base_centerpoint_voxel01_second_secfpn_8xb4_cyclic_20e_nus_3d_known_full_eval.py num_gpu
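Before retraining with a different dataset or OOD class settings, it can help to inspect a configuration programmatically. This is a generic mmengine sketch; the field names assume standard mmdetection3d configs:

```python
# Generic sketch: load and inspect a training config with mmengine.
from mmengine.config import Config

cfg = Config.fromfile(
    'mmood3d/configs/ood/centerpoint/'
    'base_centerpoint_voxel01_second_secfpn_8xb4_cyclic_20e_nus_3d_known_full_eval.py')

# Field names below assume standard mmdetection3d configs.
print(cfg.train_dataloader.batch_size)
print(cfg.train_cfg.max_epochs)
```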
This project is open-sourced under the AGPL-3.0 license. See the LICENSE file for details.
For a list of other open source components included in this project, see the file 3rd-party-licenses.txt.
This software is a research prototype only and shall only be used for test-purposes. This software must not be used in or for products and/or services and in particular not in or for safety-relevant areas. It was solely developed for and published as part of the publication "Revisiting Out-of-Distribution Detection in LiDAR-based 3D Object Detection" and will neither be maintained nor monitored in any way.
If you find our code or paper useful, please cite:
@inproceedings{koesel2024revisiting,
title={Revisiting Out-of-Distribution Detection in LiDAR-based 3D Object Detection},
author={Kösel, Michael and Schreiber, Marcel and Ulrich, Michael and Gläser, Claudius and Dietmayer, Klaus},
booktitle={2024 IEEE Intelligent Vehicles Symposium (IV)},
year={2024},
pages={2806--2813},
doi={10.1109/IV55156.2024.10588849}
}