Simplify your objective
The APEER Machine Learning toolset uses powerful neural networks which can solve many complex segmentation tasks. The more complex the task, the more annotations you need to provide for the network to learn it. For very complex tasks, creating datasets with large numbers of annotations can take considerable effort. To reduce the number of annotations required, we always try to simplify the task first with these steps:

Standardize imaging conditions

Imaging conditions can have a strong influence on the complexity of the task. The more you can standardize imaging parameters, the easier it becomes for the algorithm to learn that task and the fewer annotations you will have to create. If you apply the following rules when collecting images for your dataset, the resulting dataset will be optimized for the APEER Machine Learning toolset (a minimal consistency check is sketched after the list):
  • use the same illumination parameters so that intensity histograms are comparable
  • use the same magnification and binning so that similar objects have a similar size in pixels
  • keep the size of individual regions/objects to be segmented below 512×512 pixels
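These checks are easy to automate before you start annotating. Below is a minimal sketch, assuming a folder of 2D TIFF images and using NumPy and scikit-image; the folder name and the reported properties are illustrative, not part of APEER:

```python
# Minimal sketch: summarize per-image properties that should be consistent
# across a dataset (pixel grid, bit depth, intensity range).
from pathlib import Path

import numpy as np
from skimage.io import imread

def summarize_dataset(folder):
    """Collect per-image properties to spot outliers before annotation."""
    summaries = []
    for path in sorted(Path(folder).glob("*.tif")):
        img = imread(path)
        summaries.append({
            "file": path.name,
            "shape": img.shape,                    # same magnification/binning -> same pixel grid
            "dtype": str(img.dtype),               # same bit depth
            "p1": float(np.percentile(img, 1)),    # comparable illumination ->
            "p99": float(np.percentile(img, 99)),  # comparable intensity range
        })
    return summaries

if __name__ == "__main__":
    for s in summarize_dataset("my_dataset"):      # hypothetical folder name
        print(s)
```

Images whose shape, bit depth, or intensity range deviate strongly from the rest of the dataset are good candidates for re-acquisition under standardized conditions.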

Standardize experimental conditions

In addition to standardizing your optical parameters, it can help a lot to standardize other experimental parameters as well. The goal is to acquire images in which your objects/regions of interest are as standardized as possible. The following measures can help if they are applicable to your use case (a rough automated check is sketched after the list):
  • use the same sample preparation (microscopy images)
  • keep the size, location and orientation of your region of interest constant
  • keep the background homogeneous
  • keep the object density low
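A rough sanity check along these lines can help you compare object density and background homogeneity across acquisitions. The sketch below assumes a single-channel image with bright objects on a darker background; the Otsu threshold is only a stand-in for a proper segmentation:

```python
# Minimal sketch: rough estimate of object density and background homogeneity.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def quick_scene_stats(img):
    """Return simple statistics describing object density and background."""
    mask = img > threshold_otsu(img)    # crude foreground/background split
    labels = label(mask)
    background = img[~mask]
    return {
        "object_count": int(labels.max()),
        "foreground_fraction": float(mask.mean()),
        # Coefficient of variation of the background: lower = more homogeneous.
        "background_cv": float(background.std() / (background.mean() + 1e-9)),
    }
```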

Avoid categorizing continuous signals

It is tempting to invent segmentation categories for problems that can be solved much more easily with post-processing. An example would be trying to train an algorithm to segment two classes, "small-cells" and "large-cells", where the only difference between the classes is that the latter contains larger cells than the former. It is certainly possible to train such an algorithm with many annotated cells that cover the exact size boundary marking the transition from "small" to "large" cells. However, such challenges are much more easily solved by filtering the post-processed segmentation result of a generic algorithm that segments cells of all sizes.
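For the cell-size example, the split can be done in a few lines of post-processing on the binary result of a generic cell model. The sketch below uses scikit-image; the 500-pixel area cutoff is purely illustrative:

```python
# Minimal sketch: split one generic "cell" segmentation into small (1) and
# large (2) cells by post-processing, instead of training two classes.
import numpy as np
from skimage.measure import label, regionprops

def split_by_size(binary_mask, area_cutoff=500):
    """Relabel a binary segmentation into 'small' (1) and 'large' (2) objects."""
    labels = label(binary_mask)
    out = np.zeros_like(labels, dtype=np.uint8)
    for region in regionprops(labels):
        size_class = 1 if region.area < area_cutoff else 2
        out[labels == region.label] = size_class
    return out
```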
Another example of an unnecessary category is segmenting separate classes for objects that are "in" and "out of" focus. Instead, train a model on all objects you can recognize and use the mean or maximum pixel intensity within the segmented area to decide whether an object is sufficiently in focus for your application.
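The focus check can likewise be a post-processing step. Below is a minimal sketch, assuming a binary segmentation plus the original intensity image; the intensity cutoff is an arbitrary, application-specific example:

```python
# Minimal sketch: keep only objects whose mean intensity suggests they are
# in focus, instead of training separate "in focus"/"out of focus" classes.
import numpy as np
from skimage.measure import label, regionprops

def keep_in_focus(binary_mask, intensity_image, min_mean_intensity=100):
    """Remove segmented objects whose mean intensity falls below a cutoff."""
    labels = label(binary_mask)
    kept = np.zeros_like(binary_mask, dtype=bool)
    for region in regionprops(labels, intensity_image=intensity_image):
        if region.mean_intensity >= min_mean_intensity:
            kept[labels == region.label] = True
    return kept
```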

Start simple and increase complexity as needed

The ultimate goal of developing an ML-powered segmentation model is often to achieve robust segmentation of many classes across different imaging conditions. However, it might be difficult for the algorithm to learn all of this complexity at once if you initially provide only a few annotated objects within that large parameter space. Therefore, we recommend the following step-by-step approach:
  1. Annotate one class in a dataset with homogeneous imaging conditions
  2. Train a semantic segmentation model
  3. Refine your segmentation model according to the section Improve your segmentation (a simple overlap metric for tracking progress is sketched after this list)
  4. Optional: Only if your object density is high and many objects are connected, switch to training an instance segmentation model to start teaching the model how to separate connected objects
  5. Optional: Add more classes once you are satisfied with the performance of the previous class
  6. Optional: Add more variability in imaging and experimental conditions and keep refining your model
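One simple way to judge whether you are satisfied with the performance before adding the next class (steps 3 and 5) is to score predictions on a few held-out annotated images, for example with intersection-over-union. A minimal sketch, assuming binary masks for a single class:

```python
# Minimal sketch: intersection-over-union (IoU) between a predicted mask and
# a ground-truth annotation; useful for tracking progress between refinements.
import numpy as np

def iou(pred_mask, true_mask):
    """IoU between two binary masks; 1.0 means perfect overlap."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks are empty
    return np.logical_and(pred, true).sum() / union
```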
The instructions provided here will not only help your algorithm learn the task from the annotated dataset used for training. They will also help you create a segmentation model that generalizes and performs well on data acquired later in future standardized experiments.
If you have any questions about the ML Toolset on APEER please reach out to us at [email protected]