
DeepGrow


DeepGrow is an interactive segmentation model in which the user guides the segmentation with positive and negative clicks: positive clicks steer the segmentation toward the region of interest, while negative clicks mark background to be excluded. It is based on prior work from [1].
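To make the interaction mechanism concrete, below is a minimal sketch of the input layout such a click-driven model typically uses: the image is stacked with one positive- and one negative-guidance channel, and the network predicts a foreground mask. The function name and the `network` argument are illustrative assumptions, not the actual DeepGrow implementation.

```python
import torch
import torch.nn as nn

def deepgrow_forward(network: nn.Module,
                     image: torch.Tensor,         # (B, 1, H, W) grayscale image
                     pos_guidance: torch.Tensor,  # (B, 1, H, W) clicks on the target
                     neg_guidance: torch.Tensor   # (B, 1, H, W) clicks on background
                     ) -> torch.Tensor:
    # Stack along the channel axis: the network sees three input channels.
    x = torch.cat([image, pos_guidance, neg_guidance], dim=1)
    logits = network(x)            # (B, 1, H, W)
    return torch.sigmoid(logits)   # per-pixel foreground probability
```

Each new click updates the corresponding guidance channel, and the forward pass is repeated, so the prediction can be refined iteratively.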

Training a DeepGrow model differs from training a conventional 3D deep learning segmentation model because positive and negative guidance must be simulated during training. The guidance maps are derived from the false negatives and false positives of the current segmentation prediction. Both the 2D and 3D versions of DeepGrow operate on pairs of images and binary mask labels: for DeepGrow 2D, the 3D volumetric data is split into slices, while DeepGrow 3D uses isotropic/anisotropic patches.
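The following is a minimal 2D sketch of the click-simulation step described above: a positive click is sampled from the false-negative region, a negative click from the false-positive region, and each click is spread into a smooth blob to form a guidance map. The Gaussian-blob encoding and the uniform sampling are assumptions for illustration; the exact sampling strategy and guidance encoding in DeepGrow may differ.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def simulate_guidance(pred: np.ndarray, label: np.ndarray, sigma: float = 3.0):
    """pred, label: binary (H, W) arrays. Returns (positive, negative) guidance maps."""
    false_neg = np.logical_and(label == 1, pred == 0)  # missed foreground
    false_pos = np.logical_and(label == 0, pred == 1)  # leaked background

    def click_map(region: np.ndarray) -> np.ndarray:
        guidance = np.zeros(region.shape, dtype=np.float32)
        if region.any():
            # Sample one click uniformly from the error region ...
            ys, xs = np.nonzero(region)
            i = np.random.randint(len(ys))
            guidance[ys[i], xs[i]] = 1.0
            # ... and spread it into a smooth blob via a distance transform.
            dist = distance_transform_edt(guidance == 0)
            guidance = np.exp(-(dist ** 2) / (2 * sigma ** 2)).astype(np.float32)
        return guidance

    # Positive clicks correct false negatives; negative clicks correct false positives.
    return click_map(false_neg), click_map(false_pos)
```

During training, these simulated guidance maps play the role of the user's clicks, so the model learns to respond to corrective input without a human in the loop.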

As a model, DeepGrow generalizes across imaging modalities such as magnetic resonance imaging (MRI) and computed tomography (CT).

It can also be deployed as an application in the MONAI Label framework, where the user can train DeepGrow in the background while simultaneously using it to annotate new samples and add them to the training data pool.

Please note: DeepGrow training, for both 2D and 3D, is done on pairs of image and binary label mask data.

References:

[1] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).
