Reference : Generalising from conventional pipelines using deep learning in high‑throughput screening workflows
Scientific journals : Article
Life sciences : Multidisciplinary, general & others
Engineering, computing & technology : Multidisciplinary, general & others
Systems Biomedicine
http://hdl.handle.net/10993/48972
Generalising from conventional pipelines using deep learning in high‑throughput screening workflows
English
Garcia Santa Cruz, Beatriz [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > >]
Sölter, Jan []
Gomez Giro, Gemma [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology >]
Saraiva, Claudia [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology >]
Sabaté Soler, Sonia [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Life Sciences and Medicine (DLSM) >]
Modamio Chamarro, Jenifer []
Barmpa, Kyriaki [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology >]
Schwamborn, Jens Christian [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology >]
Hertel, Frank [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > >]
Jarazo, Javier [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > >]
Husch, Andreas [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Interventional Neuroscience >]
Publication date: 6-Jul-2022
Journal: Scientific Reports
Publisher: Nature Publishing Group
Peer reviewed: Yes
Audience: International
ISSN: 2045-2322
City: London
Country: United Kingdom
[en] complex disease ; high-throughput screening ; image analysis ; deep learning approaches ; microscopy-image analysis
[en] The study of complex diseases relies on large amounts of data to build models toward precision
medicine. Such data acquisition is feasible in the context of high-throughput screening, in which
the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art
solutions for image segmentation employ deep learning approaches, the high cost of manually
generating ground truth labels for model training hampers its day-to-day application in experimental
laboratories. Alternatively, traditional computer vision-based solutions do not need expensive
labels for their implementation. Our work combines both approaches by training a deep learning
network using weak training labels automatically generated with conventional computer vision
methods. Our network surpasses the conventional segmentation quality by generalising beyond
noisy labels, providing a 25% increase in mean intersection over union, and simultaneously reducing
the development and inference times. Our solution was embedded into an easy-to-use graphical
user interface that allows researchers to assess the predictions and correct potential inaccuracies
with minimal human input. To demonstrate the feasibility of training a deep learning solution on a
large dataset of noisy labels automatically generated by a conventional pipeline, we compared our
solution against the common approach of training a model on a small dataset manually curated
by several experts. Our work suggests that humans perform better at context interpretation, such as
error assessment, while computers outperform them in pixel-by-pixel fine segmentation. Such pipelines are
illustrated with a case study on image segmentation for autophagy events. This work aims for better
translation of new technologies to real-world settings in microscopy-image analysis.
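To make the abstract's central idea concrete, the sketch below is a minimal, hypothetical illustration, not the authors' published pipeline: it assumes a conventional computer-vision step (Otsu thresholding plus small-object removal) as the source of weak training masks, and uses mean intersection over union, the metric the abstract cites, to score a mask. Names such as weak_label and mean_iou are invented for this example.

```python
import numpy as np
from skimage import filters, morphology


def weak_label(image: np.ndarray, min_size: int = 32) -> np.ndarray:
    """Generate a weak binary mask with conventional computer vision:
    a global Otsu threshold followed by small-object removal.
    (Assumed stand-in for the conventional pipeline in the abstract.)"""
    mask = image > filters.threshold_otsu(image)
    return morphology.remove_small_objects(mask, min_size=min_size)


def mean_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union for a pair of binary masks."""
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: define as a perfect match
        return 1.0
    return np.logical_and(pred, target).sum() / union


if __name__ == "__main__":
    # Synthetic image: random background with a few bright pixels.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128)) + 2.0 * (rng.random((128, 128)) > 0.9)
    mask = weak_label(img)
    print("foreground fraction:", mask.mean())
```

In this scheme, a segmentation network trained on many such automatically generated weak masks can average out their per-image noise, which is the generalisation-beyond-noisy-labels effect the abstract describes.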
Target audience: Researchers ; Professionals ; General public
10.1038/s41598-022-15623-7
https://www.nature.com/articles/s41598-022-15623-7

File(s) associated to this reference

Fulltext file(s):

Generalising_sr_publish.pdf (Publisher postprint, 4.48 MB, Open access)

