# UOD_Universal_Oneshot_Detection [arXiv]

Official PyTorch implementation of the MICCAI 2023 paper:

**UOD: Universal One-shot Detection of Anatomical Landmarks**
Heqin Zhu, Quan Quan, Qingsong Yao, Zaiyi Liu, S. Kevin Zhou

*(Results figure)*

## Introduction

One-shot medical landmark detection has attracted much attention for its label-efficient training, but existing one-shot methods are highly specialized to a single domain and suffer heavily from domain preference when trained on multi-domain unlabeled data. Moreover, one-shot learning is not robust: performance drops when the single annotated image is sub-optimal. To tackle these issues, we develop a domain-adaptive one-shot landmark detection framework for multi-domain medical images, named Universal One-shot Detection (UOD). UOD consists of two stages with two corresponding universal models, each built as a combination of domain-specific modules and domain-shared modules. In the first stage, a domain-adaptive convolution model is trained by self-supervision to generate pseudo landmark labels. In the second stage, a domain-adaptive transformer eliminates domain preference and builds global context across multi-domain data. Even though only one annotated sample per domain is available for training, the domain-shared modules let UOD aggregate all one-shot samples to detect landmarks more robustly and accurately. We evaluate UOD both qualitatively and quantitatively on three widely used public X-ray datasets from different anatomical domains (head, hand, and chest) and obtain state-of-the-art performance in each domain.
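The snippet below is a minimal, illustrative sketch of the domain-specific/domain-shared idea described above, not the repository's actual modules: a convolution whose weights are shared across domains, followed by a per-domain BatchNorm branch selected by a domain index. All names here (`DomainAdaptiveBlock`, `domain_idx`) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DomainAdaptiveBlock(nn.Module):
    """Illustrative block mixing domain-shared and domain-specific parameters."""

    def __init__(self, in_ch: int, out_ch: int, num_domains: int = 3):
        super().__init__()
        # domain-shared parameters: one convolution reused by every domain
        self.shared_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # domain-specific parameters: a separate BatchNorm branch per domain
        self.domain_bns = nn.ModuleList(
            nn.BatchNorm2d(out_ch) for _ in range(num_domains)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, domain_idx: int) -> torch.Tensor:
        # route shared features through the branch of the input's domain
        return self.act(self.domain_bns[domain_idx](self.shared_conv(x)))

# usage: a batch of images from domain 0 (e.g. head) goes through branch 0
block = DomainAdaptiveBlock(in_ch=1, out_ch=16, num_domains=3)
out = block(torch.randn(2, 1, 64, 64), domain_idx=0)
```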

## Train and Test

### Stage I

```bash
CUDA_VISIBLE_DEVICES=1 nohup python3 train_ssl.py --run_dir .runs/stg1 --run_name RUNNAME --config config_ssl.yaml --oneshot_id_list 3188 126 JPCLN035 --data_list hand head jsrt --model uvgg --batch_size 6 --phase train -x 1 -e 1500 &>log_stg1 &
```

### Stage II

After Stage I finishes, update `pseudo_path` in `config.yaml` with the path to the generated pseudo labels (see the hypothetical snippet below).
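The exact structure of `config.yaml` is defined by the repository; the excerpt below is hypothetical, where only the `pseudo_path` key is named in this README and the value is a placeholder for your Stage I output. Then launch Stage II training:

```yaml
# hypothetical excerpt of config.yaml -- only the pseudo_path key is
# referenced in this README; the path below is a placeholder
pseudo_path: .runs/stg1/RUNNAME/pseudo_labels
```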

```bash
nohup python3 main.py -p train -d .runs/stg2 -r RUNNAME_stg2 --model DATR -b 4 -e 300 -C config.yaml --sigma 10 --data_list head hand jsrt -g 1 -x 1 --use_layerscale &> log_stg2 &
```

### Summary

```bash
python3 summary.py -v --SDR 2 2.5 3 4 -r .runs/stg2
```
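`summary.py` aggregates the Stage II runs under `.runs/stg2`; `--SDR 2 2.5 3 4` sets the successful detection rate (SDR) thresholds in millimetres. For reference, a stand-alone sketch of how SDR can be computed from per-landmark radial errors is shown below; the function and the example errors are illustrative, not the script's API.

```python
import numpy as np

def sdr(radial_errors_mm, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """Fraction of landmarks whose radial error is within each threshold (mm)."""
    errors = np.asarray(radial_errors_mm, dtype=float)
    return {t: float((errors <= t).mean()) for t in thresholds}

# example: radial errors of five detected landmarks, in millimetres
print(sdr([1.2, 2.4, 0.8, 3.7, 5.1]))
# -> {2.0: 0.4, 2.5: 0.6, 3.0: 0.6, 4.0: 0.8}
```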

## Citation

```bibtex
@inproceedings{zhu2023uod,
  title={UOD: Universal One-Shot Detection of Anatomical Landmarks},
  author={Zhu, Heqin and Quan, Quan and Yao, Qingsong and Liu, Zaiyi and Zhou, S Kevin},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={24--34},
  year={2023},
  organization={Springer}
}
```

## License

This project is released under the Apache-2.0 license.

## Acknowledgements