Abstract:
The study of complex diseases relies on large amounts of data to build models toward precision
medicine. Such data acquisition is feasible in the context of high-throughput screening, in which
the quality of the results depends on the accuracy of the image analysis. Although state-of-the-art
solutions for image segmentation employ deep learning approaches, the high cost of manually
generating ground truth labels for model training hampers their day-to-day application in experimental
laboratories. In contrast, traditional computer vision-based solutions do not need expensive
labels for their implementation. Our work combines both approaches by training a deep learning
network using weak training labels automatically generated with conventional computer vision
methods. Our network surpasses the conventional segmentation quality by generalising beyond the
noisy labels, yielding a 25% increase in mean intersection over union while simultaneously reducing
development and inference times. Our solution was embedded into an easy-to-use graphical
user interface that allows researchers to assess the predictions and correct potential inaccuracies
with minimal human input. To demonstrate the feasibility of training a deep learning solution on a
large dataset of noisy labels automatically generated by a conventional pipeline, we compared our
solution against the common approach of training a model on a small dataset manually curated
by several experts. Our work suggests that humans perform better at context interpretation, such as
error assessment, while computers excel at pixel-by-pixel fine segmentation. Such pipelines are
illustrated with a case study on image segmentation of autophagy events. This work aims to improve the
translation of new technologies into real-world settings for microscopy-image analysis.
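For context, the mean intersection over union (mIoU) reported above is the standard segmentation metric, averaged over classes; a common definition (notation ours, not taken from the paper) is

\[
\mathrm{mIoU} \;=\; \frac{1}{C}\sum_{c=1}^{C}\frac{|P_c \cap G_c|}{|P_c \cup G_c|},
\]

where $C$ is the number of classes and $P_c$ and $G_c$ denote the sets of pixels predicted and annotated as class $c$, respectively.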
Disciplines:
Life sciences: Multidisciplinary, general & others
Engineering, computing & technology: Multidisciplinary, general & others