Improve your segmentation
Improving your model is an iterative process. Here we share our best practices for efficiently improving your ML segmentation.

Data-centric model development

We employ a data-centric approach (Andrew Ng, 2021) to help you develop a robust ML segmentation model as fast as possible. This means we provide you with the tools needed to iteratively build the ideal training dataset: one that contains just enough annotations, placed where they matter most, to achieve the segmentation robustness you require.
For complex tasks in particular, start with a simple subset of your task and add complexity as you build your annotated dataset:
  1. Start with one class that you want to segment.
  2. Annotate objects/regions (around 50 at first) in images that seem similar (e.g. from one experiment).
  3. Train and check whether the segmentation of that class is sufficiently accurate.
  4. To make the algorithm more robust, add images with more variability (e.g. from different experiments) and repeat steps 2 and 3.
  5. Once the first class is segmented well across all images, iteratively annotate and train all other classes.
This process is efficient: it lets you spend your valuable annotation time on hard-to-learn examples instead of wasting it on easy ones. Because the complexity increases gradually, you also build an intuition for which image features are hard for the algorithm to learn.
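If you script parts of this loop yourself (outside the APEER user interface), it can be sketched roughly as below. The helpers `annotate_images`, `train_model`, `load_similar_images`, `load_more_variable_images` and `load_validation_set` are placeholders for your own tooling, not APEER or library functions, and the 0.85 IoU threshold is only an example value.

```python
# Rough sketch of the data-centric loop above; every load_*, annotate_* and
# train_* call is a placeholder for your own tooling (or the corresponding
# step in the APEER UI), not a real APEER or library function.
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(pred, truth).sum()
    inter = np.logical_and(pred, truth).sum()
    return float(inter / union) if union else 1.0

annotated = annotate_images(load_similar_images(), n_objects=50)   # step 2
validation = load_validation_set()          # list of (image, reference mask)

while True:
    model = train_model(annotated)                                  # step 3
    scores = [mean_iou(model.predict(img), mask) for img, mask in validation]
    if min(scores) >= 0.85:   # example threshold: class segmented well enough
        break
    # step 4: add images with more variability (e.g. other experiments)
    # and annotate another batch of objects there
    annotated += annotate_images(load_more_variable_images(), n_objects=50)
```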
If your goal is to develop a model that performs well across many different imaging conditions, it is important that your final annotated dataset reflects the variability of the data you expect in the future. Such inter-dataset variability can be caused by different experimental setups or acquisition parameters (illumination, magnification, exposure, differences in samples, ...).
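A quick way to check whether your annotations reflect that variability is to tally them per experiment or acquisition condition. The sketch below assumes you keep a small metadata record per annotated image with an `experiment` field; that convention is only for illustration, not something APEER enforces.

```python
from collections import Counter

# Hypothetical metadata for the images annotated so far;
# the "experiment" field is just an assumed bookkeeping convention.
annotated_images = [
    {"file": "exp1_img001.tif", "experiment": "exp1"},
    {"file": "exp1_img002.tif", "experiment": "exp1"},
    {"file": "exp2_img001.tif", "experiment": "exp2"},
]

counts = Counter(img["experiment"] for img in annotated_images)
print(counts)   # Counter({'exp1': 2, 'exp2': 1})

# Experiments or acquisition settings with few (or no) annotations are
# where the trained model is most likely to fail on future data.
```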

Best Practices

  • Don't waste your time annotating objects/regions that the algorithm has already learned to segment.
  • Inspect the segmentations from your most recent training to learn which regions the algorithm couldn't segment and focus your annotations on these regions.
  • Add a background border at least one pixel thick between objects if the algorithm has trouble separating them (see the sketch after this list).
  • Rare classes are harder to learn. Try to find additional training images that provide more examples of rare or otherwise hard-to-learn cases.
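For the background-border tip above, one way to add such a border automatically to an existing label mask is to clear every pixel whose 3×3 neighbourhood contains two different object labels. The sketch below uses SciPy's maximum/minimum filters for this; it assumes `labels` is a 2D integer array with 0 as background and one positive integer per object, and it is a generic post-processing idea rather than an APEER feature.

```python
import numpy as np
from scipy import ndimage as ndi

def add_background_border(labels: np.ndarray) -> np.ndarray:
    """Set pixels where two different objects touch to background (0)."""
    # Largest label in each pixel's 3x3 neighbourhood.
    max_lab = ndi.maximum_filter(labels, size=3)
    # Smallest *non-zero* label in the neighbourhood: temporarily replace
    # background with a value larger than any label so it never wins.
    big = labels.max() + 1
    min_lab = ndi.minimum_filter(np.where(labels == 0, big, labels), size=3)
    # A foreground pixel touches another object if two distinct labels
    # appear in its neighbourhood.
    touching = (labels > 0) & (max_lab != min_lab)
    out = labels.copy()
    out[touching] = 0
    return out

# Example: objects 1 and 2 touch along a column; both contact columns are
# cleared, leaving a two-pixel background border between them.
labels = np.zeros((5, 7), dtype=int)
labels[1:4, 1:3] = 1
labels[1:4, 3:6] = 2
print(add_background_border(labels))
```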
If you have any questions about the ML Toolset on APEER please reach out to us at [email protected]