# SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation

Björn Michele<sup>1,3</sup>   Alexandre Boulch<sup>1</sup>   Gilles Puy<sup>1</sup>   Tuan-Hung Vu<sup>1</sup>   Renaud Marlet<sup>1,2</sup>   Nicolas Courty<sup>3</sup>

<sup>1</sup> Valeo.ai, Paris, France   <sup>2</sup> LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France

<sup>3</sup> CNRS, IRISA, Univ. Bretagne Sud, Vannes, France


Arxiv


**SALUDA has been accepted as a SPOTLIGHT at 3DV 2024.**


## 💡 Overview

Learning models on one labeled dataset that generalize well on another domain is a difficult task, as several shifts might happen between the data domains. This is notably the case for lidar data, for which models can exhibit large performance discrepancies due, for instance, to different lidar patterns or changes in acquisition conditions. This paper addresses the corresponding Unsupervised Domain Adaptation (UDA) task for semantic segmentation. To mitigate this problem, we introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data. As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data. This novel strategy differs from classical minimization of statistical divergences or lidar-specific state-of-the-art domain adaptation techniques. Our experiments demonstrate that our method outperforms the current state of the art in synthetic-to-real and real-to-real scenarios.

More resources: Slides, Poster


## 🎓 Citation

```bibtex
@inproceedings{michele2024saluda,
  title={{SALUDA}: Surface-based Automotive Lidar Unsupervised Domain Adaptation},
  author={Michele, Bjoern and Boulch, Alexandre and Puy, Gilles and Vu, Tuan-Hung and Marlet, Renaud and Courty, Nicolas},
  booktitle={2024 International Conference on 3D Vision (3DV)},
  year={2024},
  organization={IEEE}
}
```

## 🧰 Dependencies

This code was implemented and tested with Python 3.10, PyTorch 1.11.0, and CUDA 11.3. The backbone is implemented with TorchSparse version 1.4.0 (exact commit). Additionally, Sacred 0.8.3 is used for experiment management.
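A minimal environment sketch matching these versions (the conda environment name is an assumption, and `<commit>` is a placeholder for the exact TorchSparse commit linked above):

```bash
# Hypothetical setup; versions taken from the text above.
conda create -n saluda python=3.10
conda activate saluda
# PyTorch 1.11.0 built against CUDA 11.3
pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# TorchSparse 1.4.0; replace <commit> with the exact commit linked above
pip install git+https://github.com/mit-han-lab/torchsparse.git@<commit>
pip install sacred==0.8.3
```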


## 💾 Datasets

For our experiments we use the following datasets: nuScenes, SemanticKITTI, SynLiDAR, and SemanticPOSS.

Please note that all our experiments use the official SubDataset of SynLiDAR. The datasets should be placed in `data/`.
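For example, the datasets can be symlinked into `data/` (the sub-folder names below are assumptions; check the dataloader configuration for the expected names):

```bash
# Hypothetical layout; adjust the link names to what the dataloaders expect.
mkdir -p data
ln -s /path/to/nuscenes data/nuscenes
ln -s /path/to/semantic_kitti data/semantic_kitti
ln -s /path/to/synlidar data/synlidar
ln -s /path/to/semantic_poss data/semantic_poss
```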


## 💪 Training

**Step 1: Source/target training with surface reconstruction regularization**

nuScenes to SemanticKITTI:

```bash
python train_single_back.py --name='SALUDA_ns_sk' with da_ns_sk
```

SynLiDAR to SemanticKITTI:

```bash
python train_single_back.py --name='SALUDA_syn_sk' with da_syn_sk
```

nuScenes to SemanticPOSS:

```bash
python train_single_back.py --name='SALUDA_ns_poss' with da_ns_poss
```

SynLiDAR to SemanticPOSS (5 cm voxel size):

```bash
python train_single_back.py --name='SALUDA_syn_poss' with da_syn_poss
```

**Step 2: Self-training.** In a second step, the models obtained above are further refined with self-training. For this we rely on the codebase of CoSMix, adapted so that it performs only simple self-training. More details are provided in the `self_training` folder.


## 🏁 Evaluation

Evaluation of a SALUDA model on nuScenes to SemanticKITTI:

```bash
python eval.py --name='EVAL_SALUDA_ns_sk' with da_ns_sk network_decoder=InterpAllRadiusNoDirsNet network_decoder_k=1.0 save_dir=results_val/ ckpt_path_model=path/to/folder
```

Evaluation on SynLiDAR to SemanticKITTI:

```bash
python eval.py --name='EVAL_SALUDA_syn_sk' with da_syn_sk network_decoder=InterpAllRadiusNoDirsNet network_decoder_k=1.0 save_dir=results_val/ ckpt_path_model=path/to/folder
```

## 🐘 Model zoo

| DA Setting | Method | Backbone | Link |
|---|---|---|---|
| nuScenes to SemanticKITTI | SALUDA w/o ST | TorchSparse-MinkUNet | CKPT |
| SynLiDAR to SemanticKITTI | SALUDA w/o ST | TorchSparse-MinkUNet | CKPT |

Each checkpoint should be placed in its own folder, and the path to this folder should be passed via the `ckpt_path_model` parameter.
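For example (the folder name is hypothetical, and `<downloaded_checkpoint>` is a placeholder for the file downloaded from the table above):

```bash
# Hypothetical example: place the downloaded checkpoint in its own folder
mkdir -p checkpoints/saluda_ns_sk
mv <downloaded_checkpoint> checkpoints/saluda_ns_sk/
# Pass the folder (not the file) via ckpt_path_model
python eval.py --name='EVAL_SALUDA_ns_sk' with da_ns_sk network_decoder=InterpAllRadiusNoDirsNet network_decoder_k=1.0 save_dir=results_val/ ckpt_path_model=checkpoints/saluda_ns_sk
```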


## 🏅 Acknowledgments

This project would not have been possible without many community resources and repositories, among them CoSMix and TorchSparse (see above). Please consider acknowledging these projects.


## 📝 License

This work is released under the terms of the Apache 2.0 license.