
Deep MIB - Predict Tab


Settings for efficient prediction (inference) and semantic segmentation model generation in Microscopy Image Browser.


How to start the prediction (inference) process

Predict tab interface

Prediction (inference) requires a pretrained network. If you do not have one, train it first. Pretrained networks can be loaded into Deep MIB to segment new datasets.

Steps to start prediction

  1. Select the pretrained network file in Network filename... in the Network panel. This updates the Train panel with the training settings. Alternatively, load a config file via Options tab → Config files → Load
  2. Verify the prediction images directory in Directories and Preprocessing tab → Directory with images for prediction
  3. Confirm the output directory in Directories and Preprocessing tab → Directory with resulting images
  4. If needed (usually not), preprocess (convert) files:
    • Set Preprocess for: Prediction in the Directories and Preprocessing tab
    • Click Preprocess
  5. Switch to the Predict tab and press Predict

Settings section


The Settings section configures prediction parameters.

Prediction engine: selects the tiling engine:

  • Legacy: the engine used until MIB 2.83
  • Blocked-image: recommended for later versions (supports Dynamic masking and 2D patch-wise mode)

Overlapping tiles: (for Padding: same) tiles the patches with overlap to minimize edge artifacts and improve segmentation. Define the overlap percentage in the %... edit box

Overlapping vs non-overlapping mode

Overlapping tiles comparison
Same padding without overlap may show edge artifacts, reduced with overlapping mode.
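
In practice the overlap simply shortens the stride between neighbouring tiles. A minimal sketch of the idea (not DeepMIB's internal code; the values are placeholders), computing tile start positions along one image dimension:

  imageWidth = 1024;      % width of the image to segment
  patchWidth = 256;       % network input (patch) width
  overlapPct = 15;        % hypothetical value entered in the %... edit box

  step = max(1, round(patchWidth * (1 - overlapPct/100)));   % stride between tiles
  starts = 1:step:(imageWidth - patchWidth + 1);
  if starts(end) + patchWidth - 1 < imageWidth
      starts(end+1) = imageWidth - patchWidth + 1;           % extra tile to cover the right edge
  end
  disp(starts)   % neighbouring tiles overlap by roughly overlapPct percent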

Dynamic masking: skips prediction on certain tiles using on-the-fly masking, configured via the Settings button
The Eye button previews masking on the image currently shown in the Image View panel

Dynamic masking settings and preview

Dynamic masking settings
Masking method defines whether blocks above or below the Intensity threshold value... are kept

Intensity threshold value... sets the intensity threshold for prediction

Inclusion threshold (0-1)... sets the fraction of pixels above/below the threshold required to keep a tile

Masking preview

Example of patch-wise segmentation with dynamic masking
  • Green: predicted nuclei
  • Red: predicted background
  • Uncolored: skipped patches
    Patch-wise with masking
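
Conceptually, the masking check is a per-tile intensity test. A minimal sketch of the idea (not DeepMIB's internal code; the file name and values are placeholders):

  tile = imread('example_tile.tif');   % hypothetical tile from the prediction image
  intensityThreshold = 120;            % value of Intensity threshold value...
  inclusionThreshold = 0.1;            % value of Inclusion threshold (0-1)...

  % fraction of the tile's pixels that pass the intensity test
  fractionAbove = nnz(tile > intensityThreshold) / numel(tile);
  keepTile = fractionAbove >= inclusionThreshold;   % true: predict the tile, false: skip it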

Padding, % pads images symmetrically to reduce edge artifacts
Downsample factor for images downsamples prediction images to match the training data (if applicable); results are upsampled back to the original size and, for networks with 2 classes, also smoothed
Batch size for prediction... sets the number of patches processed by the GPU simultaneously, limited by GPU memory
Model files selects the output format for models (CSV for the patch-wise mode)

List of available image formats for the model files:
  • MIB Model format: .model files, loadable in MIB or MATLAB (model = load('filename.model', '-mat');)
  • TIF compressed format: LZW-compressed TIF, pixels encode classes (1, 2, 3, etc.)
  • TIF uncompressed format: uncompressed TIF, pixels encode classes
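
Beyond the load call shown above, the returned structure can be inspected to see which variables were stored; the field names depend on how the model was saved, and the file name below is only a placeholder:

  res = load('Labels_prediction.model', '-mat');   % hypothetical file name
  disp(fieldnames(res))                            % list the variables stored in the file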

Score files selects the output format for prediction score maps

List of available image formats for the score files:
  • Do not generate skips score files for better performance
  • Use AM format AmiraMesh, compatible with MIB, Fiji, or Amira
  • Use Matlab non-compressed format .mibImg, loadable in MIB or MATLAB (model = load('filename.mibImg', '-mat');)
  • Use Matlab compressed format compressed .mibImg, loadable in MIB or MATLAB
  • Use Matlab non-compressed format (range 0-1) .mibImg with 0-1 range, MATLAB-only (model = load('filename.mat');)
  • (2D patch-wise only): upsamples downsampled patch-wise predictions to match the original image size
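
The MATLAB-based score formats are regular MAT-files, so they can be inspected the same way; a minimal sketch with a placeholder file name, assuming the scores are stored as a [height, width, classes] array:

  res = load('Scores_prediction.mibImg', '-mat');   % hypothetical file name
  fn = fieldnames(res);                             % find the variable holding the scores
  scores = res.(fn{1});                             % assumes the scores are the first stored variable
  [~, classMap] = max(scores, [], 3);               % most likely class per pixel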

Explore activations

Activations explorer

The Activations explorer allows detailed evaluation of network performance: the weights and activation images can be inspected for all layers and filters.

  • Image lists the preprocessed prediction images. Select one to load a patch matching the network's input size; use the arrows to navigate
  • Layer lists the network layers. Selecting a layer triggers prediction and generation of the activation images
  • Z1..., X1..., Y1... shift the patch across the image; update the activations with the Update button
  • Patch Z... adjusts Z within 3D network activation patches
  • Filter Id... cycles through activation layers
  • Update recalculates activations for the current patch
  • Collage creates a collage of current layer activations
Snapshot with the generated collage of activation images

Collage of activation images
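
A similar inspection can be reproduced outside the GUI with the Deep Learning Toolbox activations function; in the sketch below, net is assumed to be a trained DAGNetwork or SeriesNetwork, patch an input-sized image, and 'relu_1' a placeholder layer name:

  act = activations(net, patch, 'relu_1');                        % H x W x numFilters activation block
  act = reshape(rescale(act), size(act,1), size(act,2), 1, []);   % one grayscale image per filter
  montage(act)                                                    % collage similar to the one above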


Preview results section


Load images and models loads original images and segmentations into MIB’s active buffer post-prediction

Load models loads segmentations over the current MIB image. Requires that the image is already preloaded.
Load prediction scores loads score images (probabilities) into the active buffer

Evaluate segmentation calculates precision metrics when ground-truth labels exist in Labels under PredictionImages; material names must match those used for training

Details of the Evaluate segmentation operation

Select metrics to compute:
Evaluation metrics settings dialog
Results show a confusion matrix (0-100 scale), class metrics (Accuracy, IoU, MeanBFScore), and global metrics:
Evaluation results
Additional metrics (label occurrence, Sørensen-Dice coefficient) are available via the dropdown:
Evaluation options
Export results to MATLAB, Excel, or CSV in 3_Results/PredictionImages/ResultsModels (see Directories and Preprocessing). Details of the evaluation procedure are described for MATLAB's evaluateSemanticSegmentation function
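
For reference, a minimal sketch of the underlying MATLAB call with placeholder directories and class names (DeepMIB performs the equivalent steps internally):

  classNames = ["Background", "Nuclei"];    % must match the material names used for training
  labelIDs   = [1 2];
  pxdsTruth  = pixelLabelDatastore('PredictionImages/Labels', classNames, labelIDs);
  pxdsPred   = pixelLabelDatastore('3_Results/PredictionImages/ResultsModels', classNames, labelIDs);
  metrics    = evaluateSemanticSegmentation(pxdsPred, pxdsTruth);
  metrics.ClassMetrics                      % Accuracy, IoU, MeanBFScore per class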

