Reference : Generalising from Conventional Pipelines: A Case Study in Deep Learning-Based High-Throughput Screening
E-prints/Working papers : Already available on another site
Life sciences : Multidisciplinary, general & others
Systems Biomedicine
http://hdl.handle.net/10993/48972
Generalising from Conventional Pipelines: A Case Study in Deep Learning-Based High-Throughput Screening
English
Garcia Santa Cruz, Beatriz [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB)]
Sölter, Jan []
Gomez Giro, Gemma [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology]
Saraiva, Claudia [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology]
Sabaté Soler, Sonia [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Life Sciences and Medicine (DLSM)]
Modamio Chamarro, Jenifer []
Barmpa, Kyriaki [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology]
Schwamborn, Jens Christian [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Developmental and Cellular Biology]
Hertel, Frank [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC)]
Jarazo, Javier [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB)]
Husch, Andreas [University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB) > Interventional Neuroscience]
18-Oct-2021
v-1
No
[en] complex disease ; high-throughput screening ; image analysis ; deep learning approaches ; microscopy-image analysis
[en] The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, where the quality of the results depends on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground-truth labels for model training hampers day-to-day application in experimental laboratories. In contrast, traditional computer-vision-based solutions do not require expensive labels for their implementation. Our work combines both approaches by training a deep learning network on weak training labels automatically generated with conventional computer vision methods. Our network surpasses the segmentation quality of the conventional pipeline by generalising beyond its noisy labels, providing a 25% increase in mean intersection over union while simultaneously reducing development and inference times. Our solution is embedded in an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model on a small dataset manually curated by several experts. Our work suggests that humans perform better at context interpretation, such as error assessment, whereas computers outperform humans at pixel-by-pixel fine segmentation. The pipeline is illustrated with a case study on the segmentation of autophagy events. This work aims for a better translation of new technologies to real-world settings in microscopy-image analysis.
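As a rough illustration of the approach described in the abstract, below is a minimal Python sketch of weak-label generation and of the reported quality metric. The classical segmentation step shown here (Otsu thresholding followed by small-object removal) and all function names are assumptions made for illustration; this record does not specify the authors' actual pipeline.

    import numpy as np
    from skimage import filters, morphology

    def weak_label(image: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in for the conventional computer vision
        # pipeline: global Otsu threshold, then drop small specks.
        mask = image > filters.threshold_otsu(image)
        return morphology.remove_small_objects(mask, min_size=64)

    def mean_iou(pred: np.ndarray, target: np.ndarray) -> float:
        # Intersection over union for binary masks (foreground class);
        # defined as 1.0 when both masks are empty.
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return float(inter) / union if union else 1.0

Such weak labels would then serve as training targets for a segmentation network, whose predictions can be scored against expert annotations with mean_iou.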
Researchers
10.21203/rs.3.rs-991404/v1

There is no file associated with this reference.

