Deep MIB - segmentation using Deep Learning
The deep learning tool (Deep MIB) provides access to training of deep convolutional networks over the user data and utilization of those networks for image segmentation tasks.
Overview
For details of deep learning with DeepMIB please refer to the following tutorials:
DeepMIB: 2D U-net for image segmentation
DeepMIB: 3D U-net for image segmentation
The typical workflow consists of two parts:
- network training
- image prediction
The trained network is saved to disk and can be distributed and later used to predict unseen datasets (the Predict tab).
Please refer to the documentation below for details of various options available in MIB.

Network panel
The upper part of Deep MIB is occupied by the Network panel. This panel is used to select one of the available architectures.
Always start a new project by selecting an architecture:
- 2D U-net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg, Germany. Segmentation of a 512 x 512 image takes less than a second on a modern GPU (Wikipedia)
References:
- Ronneberger, O., P. Fischer, and T. Brox. "U-Net: Convolutional Networks for Biomedical Image Segmentation." Medical Image Computing and Computer-Assisted Intervention (MICCAI). Vol. 9351, 2015, pp. 234-241 (link)
- Create U-Net layers for semantic segmentation (link)
- 2D SegNet is a convolutional network that was developed for segmentation of natural images at the University of Cambridge, UK. It is generally less suitable for microscopy datasets than U-net.
References:
- Badrinarayanan, V., A. Kendall, and R. Cipolla. "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015 (link)
- Create SegNet layers for semantic segmentation (link)
- 3D U-net, a variation of U-net suitable for semantic segmentation of volumetric images.
References:
- Cicek, Ö., A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger. "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation." Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016. MICCAI 2016. Lecture Notes in Computer Science. Vol. 9901, pp. 424-432. Springer, Cham (link)
- Create 3-D U-Net layers for semantic segmentation of volumetric images (link)
- 3D U-net anisotropic, a hybrid U-net that combines 2D and 3D U-nets. The top level of the network uses 2D convolutions and 2D max pooling operations, while the rest of the steps are done in 3D. As a result, it is better suited for datasets with anisotropic voxels.
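The architectures listed above correspond to the unetLayers, segnetLayers and unet3dLayers functions of the Matlab Deep Learning and Computer Vision Toolboxes (see the links in the references). A minimal sketch of how such networks can be assembled manually is shown below; the patch sizes, encoder depths and the number of classes are illustrative values only, Deep MIB configures them from the Train tab settings.

```matlab
% Minimal sketch: assembling the listed architectures with the Matlab toolboxes
% (illustration only; Deep MIB builds the selected network from the Train tab settings)
numClasses = 2;                                  % number of materials, including Exterior

% 2D U-net for 512 x 512 grayscale patches
lgraph2D = unetLayers([512 512 1], numClasses, ...
    'EncoderDepth', 4, 'ConvolutionPadding', 'same');

% 2D SegNet with an encoder depth of 4
lgraphSegNet = segnetLayers([256 256 3], numClasses, 4);

% 3D U-net for 64 x 64 x 64 volumetric patches with one color channel
lgraph3D = unet3dLayers([64 64 64 1], numClasses, ...
    'EncoderDepth', 2, 'NumFirstEncoderFilters', 32);
```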
The Network filename button is only available when either the Train or Predict tab is selected. When the Train tab is selected, pressing this button defines a file for saving the network. When the Predict tab is selected, it is used to choose a file with a pretrained network for prediction. For ease of navigation the button is color-coded to match the active tab.
Directories and Preprocessing tab
This tab allows choosing directories with images and specifying certain preprocessing parameters.
- Directory with images and models for training [required only for training]:
use these widgets to select the directory that contains the images and the model to be used for training. In the scheme of a typical project above, this directory is 1_Training. The collection of images in this folder will be processed and split into the training set (placed into the 3_Results\TrainImages and 3_Results\TrainLabels directories) and the validation set (placed into the 3_Results\ValidationImages and 3_Results\ValidationLabels directories), as shown in the figure above.
The file extension dialog on the right-hand side can be used to specify the extension of the image files. The Bio checkbox toggles between the standard and Bio-Formats readers for loading the images.
- These are the requirements for the files:
- For 2D networks, each file should contain a single 2D image. The ground truth models should also be provided in the same directory, but as a single *.model file.
- For 3D networks, each file should contain a stack of images and the image files should be accompanied by *.model files. The number of model files should match the number of image files.
Tip! If you have only one segmented dataset, you can split it into several datasets using the Menu->File->Chopped images->Export operation.
Important! It is not possible to use numbers as names of materials; please give the materials sensible names!
- Directory with images for prediction:
use these widgets to specify the directory with images for prediction (2_Prediction in the scheme above). The images placed here will be preprocessed and saved to the 3_Results\PredictionImages directory.
For 2D networks the files should contain individual 2D images, while for 3D networks they should contain individual 3D datasets.
If the ground truth data for the prediction datasets is known, it can also be placed as *.model file(s) into the same directory (2_Prediction) using the same rules as for the training models. If the models are present, they are also processed and copied to 3_Results\PredictionImages\GroundTruthLabels. These models can be used for evaluation of results (see the Predict tab below for details).
The file extension dialog on the right-hand side can be used to specify the extension of the image files. The Bio checkbox toggles between the standard and Bio-Formats readers for loading the images.
- Directory with resulting images:
use these widgets to specify the main work directory; the results and all preprocessed images are stored there.
All subfolders inside this directory are automatically created by Deep MIB:
Details of directories
- PredictionImages, place for the preprocessed images for prediction
- PredictionImages\GroundTruthLabels, place for ground truth models for prediction images, when available
- PredictionImages\ResultsModels, place for generated models after prediction. The 2D models can be combined in MIB by selecting the files using the Shift+left mouse click during loading
- PredictionImages\ResultsScores, place for generated prediction scores (probability) for each material. The score is scaled between 0 and 255
- ScoreNetwork, for accuracy and loss score plots, when the Export training plots option of the Train tab is ticked
- TrainImages, images to be used for training
- TrainLabels, models accompanying images to be used for training
- ValidationImages, images to be used for validation during training
- ValidationLabels, models accompanying images for validation
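For orientation, an illustrative layout of a typical project after preprocessing (the parent folder name is arbitrary; 1_Training, 2_Prediction and 3_Results follow the scheme referred to above):

    MyProject\
      1_Training\            images and *.model files used for training
      2_Prediction\          images (and optional *.model files) used for prediction
      3_Results\
        TrainImages\
        TrainLabels\
        ValidationImages\
        ValidationLabels\
        PredictionImages\
          GroundTruthLabels\
          ResultsModels\
          ResultsScores\
        ScoreNetwork\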
Train tab
This tab contains settings for assembling and training the network. Before proceeding further, please finish the preprocessing part, see above.
Before starting the training process it is important to check and, if needed, modify the settings. Also, use the Network filename button in the Network panel to select a filename for the network.
- Input patch size, this is an important field that has to be defined based on the available GPU memory, the dimensions of the training dataset and the number of color channels.
The patch size defines the dimensions of a single image block that will be fed into the network for training. The dimensions are always defined with 4 numbers representing height, width, depth and colors of the image patch (for example, type "572 572 1 2" to specify a 2D patch of 572x572 pixels with 2 color channels).
The patches are taken randomly from the volume and the number of those patches can be specified in the Patches per image field.
- Padding defines the type of the convolution padding; depending on the selected padding the Input patch size may have to be adjusted.
- same - zero padding is applied to the inputs to convolution layers such that the output and input feature maps are the same size
- valid - zero padding is not applied to the inputs to convolution layers. The convolution layer returns only values of the convolution that are computed without zero padding, so the output feature map is smaller than the input feature map (for example, the original 2D U-net uses valid padding and maps a 572 x 572 input patch to a 388 x 388 output).
- Number of classes - number of materials of the model including Exterior, specified as a positive number
- Encoder depth - number of encoding and decoding layers in the network. U-Net is composed of an encoder subnetwork and a corresponding decoder subnetwork. The depth of these networks determines the number of times the input image is downsampled or upsampled during processing. The encoder network downsamples the input image by a factor of 2^D, where D is the value of EncoderDepth; the decoder network upsamples the encoder output by the same factor of 2^D. For example, with a depth of 3 the patch is downsampled 8 times, so its height and width should preferably be divisible by 8.
- Patches per image - specifies the number of patches that will be taken from each image or 3D dataset. The number of patches per image can be estimated from the ratio between the dimensions of each image and the input patch, multiplied by the number of augmentation filters used
- Mini Batch Size - number of patches processed at the same time by the network. More patches speed up the process, but it is important to understand that the resulting loss is averaged over the whole mini-batch. The maximum mini-batch size depends on the amount of GPU memory
- NumFirstEncoderFilters - number of output channels for the first encoder stage. In each subsequent encoder stage, the number of output channels doubles. The unetLayers function sets the number of output channels in each decoder stage to match the number in the corresponding encoder stage
- Filter size - convolutional layer filter size; typical values are in the range [3, 7]
- Segmentation layer - specifies the output layer of the network:
- pixelClassificationLayer - semantic segmentation with the crossentropyex loss function
- dicePixelClassificationLayer - semantic segmentation using generalized Dice loss to alleviate the problem of class imbalance in semantic segmentation problems. Generalized Dice loss controls the contribution that each class makes to the loss by weighting classes by the inverse size of the expected region
- dicePixelCustomClassificationLayer - a modification of the dice loss, with better control for rare classes
- Input layer settings button - can be used to specify data normalization during training, see the info header of the dialog or press the Help button of the dialog for details
- Training settings button - defines multiple parameters used for training; for details please refer to Matlab's trainingOptions function (see also the training sketch after this list)
Tip: setting the Plots switch to "none" in the training settings may speed up the training time by up to 25%
- Check network button - press to preview and check the network. The standalone version of MIB shows only limited information about the network and does not check it
- Augmentation - augment data during training. For small training sets augmentation provides an easy way to increase the amount of training data using various methods. Depending on the selected 2D or 3D network, various augmentation operations are available. These operations are configurable using the 2D and 3D settings buttons right under the augmentation checkbox
- Save progress after each epoch - when ticked, Deep MIB stores training checkpoints after each epoch to the 3_Results\ScoreNetwork directory. It will then be possible to choose any of those networks and continue training from that checkpoint. If checkpoint networks are present, a selection dialog is displayed when the Train button is pressed
- Export training plots - when ticked, the accuracy and loss scores are saved to the 3_Results\ScoreNetwork directory. Deep MIB uses the network filename as a template and generates a file in Matlab format (*.score) and several files in CSV format
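As a rough illustration of how these fields map onto Matlab's training functions, a minimal training sketch is shown below. The class names, the assumption of grayscale images and the assumption that the preprocessed labels can be read as ordinary image files are illustrative only; Deep MIB performs these steps internally using the settings described above.

```matlab
% Minimal training sketch (Matlab Deep Learning / Image Processing Toolboxes)
% Assumptions: grayscale training images, labels readable as plain images, illustrative class names
classNames = ["Exterior", "Mitochondria"];
pixelIDs   = [0, 1];
imds = imageDatastore(fullfile('3_Results', 'TrainImages'));
pxds = pixelLabelDatastore(fullfile('3_Results', 'TrainLabels'), classNames, pixelIDs);

% Input patch size and Patches per image fields
patchDS = randomPatchExtractionDatastore(imds, pxds, [512 512], 'PatchesPerImage', 32);

% Training settings (compare with the Training settings dialog / trainingOptions)
opts = trainingOptions('adam', ...
    'MaxEpochs', 50, ...
    'MiniBatchSize', 8, ...                                    % Mini Batch Size field
    'InitialLearnRate', 5e-4, ...
    'Plots', 'none', ...                                       % disabling plots can speed up training
    'CheckpointPath', fullfile('3_Results', 'ScoreNetwork'));  % Save progress after each epoch

% Network assembled as in the Network panel section above (Encoder depth, patch size, classes)
lgraph = unetLayers([512 512 1], numel(classNames), 'EncoderDepth', 4);
[net, info] = trainNetwork(patchDS, lgraph, opts);
```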

After the training, the network is saved to a file specified in the Network filename editbox of the Network panel.
Predict tab
The trained networks can be loaded into Deep MIB and used for prediction of new datasets.
To start with prediction:
- select a file with the desired network in the Network filename editbox of the Network panel. Upon loading, the corresponding fields of the Train panel will be updated with the settings used for training of the loaded network
- specify correct directory with the images for prediction: Directories and Preprocessing tab -> Directory with images for prediction
- specify directory for the results: Directories and Preprocessing tab -> Directory with resulting images
- press the Preprocess button to perform data preprocessing
- finally switch back to the Predict tab and press the Predict button
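A minimal sketch of the prediction step is shown below. It assumes that the *.mibDeep network file is a Matlab MAT-file containing the trained network in a variable called net; the variable name and the file names are illustrative only.

```matlab
% Minimal prediction sketch; network file layout, variable name and file names are assumptions
res = load('2D_Unet_myProject.mibDeep', '-mat');                     % illustrative network file
img = imread(fullfile('3_Results', 'PredictionImages', 'img001.tif'));
[labels, scores] = semanticseg(img, res.net);   % categorical label map and per-class scores
imshow(labeloverlay(img, labels));              % quick visual check of the segmentation
```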
- Overlapping tiles, available only for the same convolutional padding; during prediction the edges of the predicted patches are cropped, which improves the segmentation but takes more time. See the comparison of results in the image below:
- Explore activations
The Activations explorer brings the possibility of a detailed evaluation of the network. The preprocessed images should be located in the 3_Results\PredictionImages directory.
Here is the description of the options:
- Image has a list of all preprocessed images for prediction. Selection of an image in this list will load a patch, which is equal to the input size of the selected network. The arrows on the right side of the dropdown can be used to load the previous or next image in this list
- Layer contains a list of all layers of the selected network. Selection of a layer starts prediction and acquisition of the activation images
- Z1, X1, Y1, these spinners make it possible to shift the patch across the image. Shifting of the patch does not automatically update the activation image; to update the activation image press the Update button
- Patch Z, changes the z value within the loaded activation patch; it is used only for 3D networks
- Filter Id, changing this spinner brings the different activation maps (filters) into the view
- Update press to calculate the activation images for the currently displayed patch
- Collage press to make a collage image of the current network
layer activations:
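Under the hood the explorer computes network activations for the selected patch and layer; a minimal sketch is shown below. The network file format, the patch file and the layer name (here following the naming produced by unetLayers) are assumptions for illustration.

```matlab
% Minimal sketch of what the Activations explorer computes for a selected layer
res   = load('2D_Unet_myProject.mibDeep', '-mat');                        % assumed MAT-format network file
patch = imread(fullfile('3_Results', 'PredictionImages', 'img001.tif'));  % patch of the network input size
act   = activations(res.net, patch, 'Encoder-Stage-1-Conv-1');            % H x W x NumFilters array
act   = rescale(act);                                                     % scale activations to [0 1]
montage(reshape(act, size(act,1), size(act,2), 1, []));                   % collage of all filter maps
```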
- Load images and models press this button after the prediction to open the original images and the results of the segmentation in the currently active buffer of MIB
- Load prediction scores press to load the resulting score images (predictions) into the currently active buffer of MIB
- Evaluate segmentation when the datasets for prediction are accompanied by ground truth models, pressing this button calculates various precision metrics. The evaluation results can be exported to Matlab or saved in Matlab or Excel formats to the 3_Results\PredictionImages\GroundTruthLabels directory; see more in the Directories and Preprocessing section above.
For details of the metrics refer to the Matlab documentation for the evaluateSemanticSegmentation function
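Since the evaluation is based on Matlab's evaluateSemanticSegmentation function, a minimal sketch of an equivalent manual evaluation is shown below. The class names, pixel IDs and the assumption that the labels are stored as standard image files are illustrative only; Deep MIB collects the data automatically when the button is pressed.

```matlab
% Minimal sketch of the evaluation step against ground truth labels (illustrative assumptions)
classNames = ["Exterior", "Mitochondria"];
pixelIDs   = [0, 1];
pxdsTruth  = pixelLabelDatastore(fullfile('3_Results','PredictionImages','GroundTruthLabels'), ...
                                 classNames, pixelIDs);
pxdsPred   = pixelLabelDatastore(fullfile('3_Results','PredictionImages','ResultsModels'), ...
                                 classNames, pixelIDs);
metrics = evaluateSemanticSegmentation(pxdsPred, pxdsTruth);
disp(metrics.DataSetMetrics)   % global accuracy, mean IoU, weighted IoU, mean BF score
disp(metrics.ClassMetrics)     % per-class accuracy, IoU and mean BF score
```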
Options tab
Some additional options and settings are available in this tab.
Config files panel
This panel provides access to loading and saving of Deep MIB config files.
The config files contain all settings of Deep MIB, including the network name and the input and output directories, but excluding the actual trained network. Normally, these files are automatically created during the training process and stored next to the network *.mibDeep files, also in Matlab format, using the *.mibCfg extension.
Alternatively, the files can be saved manually by pressing the Save button.
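Since the config files are stored in Matlab MAT format, they can also be inspected directly in Matlab; a minimal sketch with an illustrative file name:

```matlab
% Inspect a Deep MIB config file (MAT format with the *.mibCfg extension); file name is illustrative
cfg = load('2D_Unet_myProject.mibCfg', '-mat');
disp(fieldnames(cfg))     % list the stored Deep MIB settings
```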