
Multilabel DeepEdit


Multilabel DeepEdit generalizes the DeepEdit App to address both single- and multi-label segmentation tasks. Similar to DeepEdit, this App combines two models in a single architecture: automatic inference, as in a standard segmentation method (i.e. UNet), and interactive segmentation using clicks, as in DeepGrow.

Training schema:

As in the single-label DeepEdit, the training process of the multilabel DeepEdit App combines simulated clicks with standard training. As shown in the next figure, the input of the network is a concatenation of N tensors: the image plus one channel per label, including background; these channels contain the simulated points or clicks. Training alternates between two modes: for some iterations the click tensors for each label are all zeros (standard automatic segmentation), and for the remaining iterations clicks are simulated so the model learns to use them for interactive segmentation. For the click simulation, we developed new DeepGrow transforms that support multilabel click simulation. A minimal sketch of how such an input could be assembled is shown below.
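
The following is a minimal sketch of the input construction described above, assuming a 3D image of size 128³ and a small illustrative label_names dictionary; the tensor sizes and the click-placement logic are placeholders for illustration, not the App's actual transforms.

```python
import torch

# Hypothetical shapes for illustration: one 3D image of size 128^3
# and a small label_names dict that includes "background".
label_names = {"spleen": 1, "liver": 6, "background": 0}
image = torch.rand(1, 128, 128, 128)             # 1 x H x W x D image channel

# One click channel per label (including background). During the
# "automatic" iterations these channels stay all zeros; during the
# "interactive" iterations the click-simulation transforms place
# points (sketched here as single voxels set to 1) in each channel.
click_channels = torch.zeros(len(label_names), 128, 128, 128)
simulate_clicks = True                           # toggled per iteration during training
if simulate_clicks:
    for idx, _ in enumerate(label_names):
        z, y, x = torch.randint(0, 128, (3,))    # stand-in for a simulated guidance point
        click_channels[idx, z, y, x] = 1.0

# Network input: image + one click channel per label -> (1 + num_labels) channels
network_input = torch.cat([image, click_channels], dim=0)
print(network_input.shape)                       # torch.Size([4, 128, 128, 128])
```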

DeepEdit Schema for Training

To train this model, we used the Beyond the Cranial Vault (BTCV) Challenge dataset, which is composed of 30 images, each with 13 labels:

  1. spleen
  2. right kidney
  3. left kidney
  4. gallbladder
  5. esophagus
  6. liver
  7. stomach
  8. aorta
  9. inferior vena cava
  10. portal vein and splenic vein
  11. pancreas
  12. right adrenal gland
  13. left adrenal gland

If you would like to train the multilabel DeepEdit App on the BTCV dataset or any other dataset, you should first define the label_names dictionary in the main file before starting the training process. A sketch of what this dictionary could look like for all 13 BTCV labels follows.
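
For reference, a possible label_names dictionary covering all 13 BTCV labels plus background, using the indices listed above (a sketch; the exact names and indices should match your dataset's annotations):

```python
# Sketch of a label_names dictionary for all 13 BTCV organs plus background,
# using the label indices listed above. Adjust names/indices to your dataset.
label_names = {
    "spleen": 1,
    "right kidney": 2,
    "left kidney": 3,
    "gallbladder": 4,
    "esophagus": 5,
    "liver": 6,
    "stomach": 7,
    "aorta": 8,
    "inferior vena cava": 9,
    "portal vein and splenic vein": 10,
    "pancreas": 11,
    "right adrenal gland": 12,
    "left adrenal gland": 13,
    "background": 0,
}
```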

For demo purposes, we trained the multilabel DeepEdit App using the DynUNet and UNETR networks on 7 organs:

```python
label_names = {
    "spleen": 1,
    "right kidney": 2,
    "left kidney": 3,
    "liver": 6,
    "stomach": 7,
    "aorta": 8,
    "inferior vena cava": 9,
    "background": 0,
}
```
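
Given this dictionary, the number of network channels follows directly from the training schema: one input channel for the image plus one click channel per label, and one output channel per label. The snippet below sketches this with MONAI's DynUNet; the specific hyperparameters (kernel sizes, strides, etc.) are illustrative and may differ from what the App actually uses.

```python
from monai.networks.nets import DynUNet

# Input channels: 1 image channel + one click channel per label (background included).
# Output channels: one per label (background included). The remaining hyperparameters
# are illustrative only.
network = DynUNet(
    spatial_dims=3,
    in_channels=len(label_names) + 1,   # 8 labels + 1 image channel = 9
    out_channels=len(label_names),      # 8 segmentation channels
    kernel_size=[3, 3, 3, 3, 3, 3],
    strides=[1, 2, 2, 2, 2, [2, 2, 1]],
    upsample_kernel_size=[2, 2, 2, 2, [2, 2, 1]],
    norm_name="instance",
    deep_supervision=False,
    res_block=True,
)
```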

Results

The next image shows a sample result obtained with the multilabel DeepEdit App using DynUNet as the backbone:

DynUNet

The next image shows a sample result obtained with the multilabel DeepEdit App using UNETR as the backbone:

UNETR

To start this App, choose whether to use DynUNet or UNETR as the backbone:

  • For the DynUNet network:

```bash
monailabel start_server -a /PATH_TO_APPS/radiology/ -s /PATH_TO_DATASET/ --conf models deepedit
```

  • For the UNETR network:

```bash
monailabel start_server -a /PATH_TO_APPS/radiology/ -s /PATH_TO_DATASET/ --conf models deepedit --conf network unetr
```
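
Once the server is running, you can do a quick sanity check against its REST API. The sketch below assumes the default host and port (http://127.0.0.1:8000) and the /info/ endpoint; the exact fields in the response may vary by MONAI Label version, so adjust accordingly.

```python
import requests

# Quick check that the MONAI Label server started (a sketch; assumes the
# default address http://127.0.0.1:8000 and the /info/ endpoint).
resp = requests.get("http://127.0.0.1:8000/info/")
resp.raise_for_status()
info = resp.json()

# Field names below are assumptions about the returned JSON.
print(info.get("name"), list(info.get("models", {}).keys()))
```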
