
DeepGrow

Andres Diaz-Pinto edited this page Apr 8, 2022 · 24 revisions

DeepGrow is an interactive segmentation model in which the user guides the segmentation with positive and negative clicks. Positive clicks steer the segmentation toward the region of interest, while negative clicks mark background regions to be excluded (cf. [1]).
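The clicks are typically supplied to the network as extra input channels alongside the image. As a minimal sketch (shapes, coordinates, and channel ordering here are illustrative assumptions, not MONAI Label's exact convention):

```python
import numpy as np

# Hypothetical 2D slice with one guidance channel per click type.
H, W = 128, 128
image = np.random.rand(1, H, W).astype(np.float32)    # intensity channel
pos_guidance = np.zeros((1, H, W), dtype=np.float32)  # positive clicks
neg_guidance = np.zeros((1, H, W), dtype=np.float32)  # negative clicks

# Mark one positive click inside the region of interest and one negative
# click in the background (coordinates chosen for illustration only).
pos_guidance[0, 60, 64] = 1.0
neg_guidance[0, 10, 10] = 1.0

# A DeepGrow-style network then consumes the image and both guidance maps
# as a single multi-channel input (cf. Sakinis et al. [1]).
model_input = np.concatenate([image, pos_guidance, neg_guidance], axis=0)
print(model_input.shape)  # (3, 128, 128)
```

Each new click updates its guidance channel, and the network is run again to refine the mask.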

Training a DeepGrow model differs from training a conventional segmentation network because positive and negative guidance (clicks) must be simulated during training. The guidance maps are generated from the false negatives and false positives of the current predictions. Both DeepGrow 2D and 3D allow the user to annotate only one label at a time: DeepGrow 2D annotates the image one slice at a time, whereas DeepGrow 3D can annotate whole volumes.
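The simulation step above can be sketched as follows. This is a simplified NumPy illustration of the idea (sampling one click per error type and turning it into a Gaussian guidance map), not MONAI's actual implementation; the function names and the `sigma` parameter are assumptions:

```python
import numpy as np

def simulate_clicks(pred, label, rng):
    """Sample one positive and one negative click from prediction errors.

    pred, label: binary 2D arrays (model prediction and ground truth).
    Positive clicks are drawn from false negatives (missed foreground);
    negative clicks are drawn from false positives (spurious foreground).
    """
    false_neg = np.logical_and(label == 1, pred == 0)
    false_pos = np.logical_and(label == 0, pred == 1)
    clicks = {}
    for name, mask in (("positive", false_neg), ("negative", false_pos)):
        coords = np.argwhere(mask)
        if len(coords):
            clicks[name] = tuple(coords[rng.integers(len(coords))])
    return clicks

def guidance_map(shape, click, sigma=2.0):
    """Turn a click coordinate into a smooth Gaussian guidance channel."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    dist2 = (yy - click[0]) ** 2 + (xx - click[1]) ** 2
    return np.exp(-dist2 / (2 * sigma ** 2))
```

During training, these simulated guidance maps are fed back to the network in place of real user clicks, so the model learns to correct exactly the kinds of errors it makes.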


MONAI Label employs DeepGrow for the annotation of 3D medical images (Magnetic Resonance (MR) or Computed Tomography (CT)).

Hint: For the 2D version of the DeepGrow Apps, points in the Slicer plugin should be provided in the Axial view of the image.

References:

[1] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).
