Deep Learning Segmentation
In this tutorial we will learn how to train a Deep Learning segmentation model and how to apply the trained model to generate segmentation masks for unseen images.


What is Segmentation?

Segmentation is an image processing task whose goal is to accurately classify every pixel in an image. This makes it possible to detect objects of interest while preserving their geometric structure, e.g., cell segmentation in biomedical images. The resulting masks can then be used for further image analysis, e.g., cell counting, area measurement, plotting, etc.
For example, a binary segmentation of a neural structure can be visualized as follows:
Original neural structure image (on the left) together with the segmented binary mask (on the right)
Here every pixel of the original image is assigned one of two classes: class 1 for foreground (white) and class 0 for background (black). The sample is taken from the ISBI Challenge: Segmentation of neuronal structures in EM stacks.
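The two-class labeling can be illustrated with a tiny example. This is a minimal sketch assuming masks are stored as integer arrays (the actual file format used on APEER may differ):

```python
import numpy as np

# A tiny 4x4 binary ground-truth mask: 1 = foreground, 0 = background.
mask = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])

# Every pixel belongs to exactly one of the two classes.
classes = np.unique(mask)

# Share of pixels labeled as foreground (a simple sanity check on a mask).
foreground_fraction = mask.mean()
```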

What is Deep Learning?

Deep Learning is a class of Machine Learning techniques that learns data representations by training artificial Neural Networks. Convolutional Neural Networks (CNNs) are a subfield of Deep Learning applied mostly to image data. A Deep Learning model is a set of weights that are tuned automatically during training and then applied to raw images to produce segmentation masks. One of the most widely used segmentation models is the UNet.
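To make the idea of "a set of weights applied to an image" concrete, here is a toy sketch: a single hand-crafted 3x3 kernel stands in for the thousands of learned filters inside a CNN such as the UNet. This is purely illustrative, not part of the APEER workflow:

```python
import numpy as np

# A "model" here is just a set of weights; one 3x3 edge-detection kernel
# is a toy stand-in for the many filters a CNN learns during training.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[2, 2] = 1.0            # a single bright pixel
response = convolve2d(image, kernel)
```

In a trained CNN the kernel values are not hand-picked as above; they are exactly the weights that training tunes automatically.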
How can we train a UNet model and then apply it to our own raw images?

Train Workflow on APEER

On APEER you can train the UNet model by using the publicly available workflow Train 2D Segmentation Model. The workflow consists of 7 modules; the output of the last module (Supervised Segmentation Trainer) is the trained keras model together with the training history.
"Train 2D Segmentation Model" Workflow on APEER

Step by Step Tutorial

  1. Collect the data
    In order to train the segmentation model we need a set of raw images together with ground-truth masks. By ground-truth masks we mean labeled images in which every pixel is assigned to a certain class. Note that each raw image and its corresponding ground-truth mask must have the same file name, and the two sets must contain the same number of images.
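The name-and-count requirement can be checked before zipping. This is a hedged sketch: the directory layout (`raw/` and `masks/` folders) is an assumption, not something APEER prescribes:

```python
from pathlib import Path

def check_dataset(raw_dir, mask_dir):
    """Verify that raw images and ground-truth masks match by name and count.
    The two directory arguments are illustrative, not required by APEER."""
    raw = sorted(p.stem for p in Path(raw_dir).iterdir() if p.is_file())
    masks = sorted(p.stem for p in Path(mask_dir).iterdir() if p.is_file())
    if len(raw) != len(masks):
        raise ValueError(f"{len(raw)} raw images but {len(masks)} masks")
    if raw != masks:
        raise ValueError("file names of images and masks do not match")
    return len(raw)
```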
  2. ZIP images
    Separately ZIP the set of raw images and the set of ground-truth masks. Note that you should ZIP the files from inside the folder where they are located, not the folder itself (the same rule applies to the ground-truth masks).
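Zipping "from inside the folder" means the archive root must contain the image files directly, with no folder prefix. A minimal sketch of how to build such an archive programmatically (folder and archive names are illustrative):

```python
import os
import zipfile

def zip_flat(folder, zip_path):
    """ZIP the files inside `folder` without the folder itself, so the
    archive root contains the images directly."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(folder)):
            full = os.path.join(folder, name)
            if os.path.isfile(full):
                # arcname drops the folder prefix from the archive entry
                zf.write(full, arcname=name)
```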
  3. Upload images and masks
    Open the workflow editor. Click on the first module, called Input defined, then in the window that opens click on the Reset button and then on Upload Files (if you want to connect to files on the web, or your files are already on APEER, click on the respective buttons instead). Now you can Drag&Drop the zipped file with the raw images. To upload the zipped file with masks, go to the second module (Single File Upload) and do: Settings -> Select file -> Reset -> Upload Files -> Drag&Drop.
  4. Tune the Hyperparameters
    You can change the parameters of the Patches Generator and the Supervised Segmentation Trainer modules. The patch shape should be slightly larger than the objects to be segmented; if the size of the objects is unclear, use the predefined shape or a shape of your choice. Tuning the hyperparameters of the Supervised Segmentation Trainer is trickier, since different sets of parameters can increase or decrease the performance of the model, and it is not clear in advance which set is best. For more information about the module parameters, visit the description pages of the respective modules.
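To see what patch extraction does to an image, here is a simplified sketch of splitting an image into non-overlapping patches. The actual Patches Generator module may use overlap, padding, or other strategies; this only illustrates the basic idea:

```python
import numpy as np

def extract_patches(image, patch_h, patch_w):
    """Split a 2D image into non-overlapping patches of the given shape.
    A simplified stand-in for what a patches-generator module might do."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_h + 1, patch_h):
        for x in range(0, w - patch_w + 1, patch_w):
            patches.append(image[y:y + patch_h, x:x + patch_w])
    return patches

# An 8x8 "image" split into four 4x4 patches.
image = np.arange(64).reshape(8, 8)
patches = extract_patches(image, 4, 4)
```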
  5. Run the Workflow
    Click on the RUN button and wait until the workflow finishes. You will then be able to download the keras model file (model.h5) together with the training history file.

Predict Workflow on APEER

Training a good segmentation model is an important task, but it is not enough on its own. What we really want is to apply the model to a set of unseen (unlabeled) images and obtain segmentations automatically. This functionality is also available on APEER through the Predict 2D Segmentation workflow.
"Predict 2D Segmentation" Workflow on APEER

Step by Step Tutorial

  1. ZIP images
    ZIP the set of images. Note that you should ZIP the files from inside the folder where they are located, not the folder itself.
  2. Upload images
    Upload the zipped images to the first module, Input defined.
  3. Upload keras model
    Upload the pretrained keras model (this can be an output of the Train 2D Segmentation Model workflow) to the Single File Upload module.
  4. Run the Workflow
    Click on the RUN button and wait until the workflow finishes. The output is a prediction mask for every input image.
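For a binary model, the raw per-pixel outputs are typically probabilities that are thresholded into a mask. The sketch below assumes a hypothetical probability map; in practice it would come from the downloaded keras model's predictions:

```python
import numpy as np

# Hypothetical per-pixel foreground probabilities for one image
# (illustrative values, not real model output).
probs = np.array([
    [0.05, 0.10, 0.92],
    [0.20, 0.85, 0.96],
    [0.70, 0.88, 0.99],
])

# A common post-processing step: threshold at 0.5 to obtain the binary mask
# (class 1 = foreground, class 0 = background, as in the training masks).
pred_mask = (probs >= 0.5).astype(np.uint8)
```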