References of "Garcia Santa Cruz, Beatriz 50027886"
Full Text
Peer Reviewed
From tech to bench: Deep Learning pipeline for image segmentation of high-throughput high-content microscopy data
Garcia Santa Cruz, Beatriz UL; Jarazo, Javier UL; Saraiva, Claudia UL et al

Poster (2019, November 29)


Automation of biological image analysis is essential to boost biomedical research. The study of complex diseases such as neurodegenerative diseases calls for large amounts of data to build models towards precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of image analysis. Although the state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manual data curation is hampering their adoption in current biomedical research laboratories. Here, we propose a pipeline that employs deep learning not only to conduct accurate segmentation but also to assist with the creation of high-quality datasets in a less time-consuming way for the experts. Weakly-labelled datasets are becoming a common starting point for developing real-world solutions. Traditional approaches based on classical multimedia signal processing were employed to generate a pipeline specifically optimized for high-throughput screening images of iPSCs fused with the Rosella biosensor. This pipeline produced good segmentation results, but with several inaccuracies. We used the weakly-labelled masks produced by this pipeline to train a multiclass semantic segmentation CNN based on the U-Net architecture. Since a strong class imbalance was detected between the classes, we employed a class-sensitive cost function: the Dice coefficient. Next, we evaluated the agreement between the weakly-labelled data and the trained network's segmentation using double-blind tests conducted by cell biology experts experienced with this type of image, as well as traditional metrics computed against manually curated segmentations by cell biology experts. In all evaluations, the neural network's predictions surpassed the quality of the weakly-labelled segmentations.
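The class-sensitive Dice cost function mentioned above can be sketched in a few lines. This NumPy version illustrates the general technique only; the function and variable names are illustrative and not taken from the authors' implementation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for one class: 1 - 2|P∩T| / (|P| + |T|).

    pred   -- predicted probabilities, any shape
    target -- binary ground-truth mask, same shape
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def multiclass_dice_loss(pred, target):
    """Average the per-class Dice loss so rare classes weigh as much as
    frequent ones -- the property that makes Dice robust to class imbalance.
    pred and target are one-hot stacks of shape (C, H, W)."""
    n_classes = pred.shape[0]
    return sum(soft_dice_loss(pred[c], target[c])
               for c in range(n_classes)) / n_classes
```

Because each class contributes equally to the average regardless of its pixel count, a rare class mislabelled everywhere raises the loss as much as a dominant one would, which is why Dice-type losses suit imbalanced segmentation masks.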
Another major obstacle to the use of deep learning solutions in wet-lab environments is the lack of user-friendly tools for non-computational experts such as biologists. To complete our solution, we integrated the trained network into a GUI built in the MATLAB environment, with no programming requirements for the user. This integration allows semantic segmentation of microscopy images in a few seconds. In addition, thanks to the patch-based approach, it can be employed on images of different sizes. Finally, human experts can correct potential inaccuracies of the prediction in a simple, interactive way; these corrections can be easily stored and used to re-train the network to improve its accuracy. In conclusion, our solution addresses two important bottlenecks in translating leading-edge computer-vision technologies to biomedical research: on the one hand, the effortless production of high-quality datasets under expert supervision, taking advantage of the proven ability of our CNN to generalize from weakly-labelled inaccuracies; on the other hand, the ease of use provided by the GUI integration, both to segment images and to interact with the predicted output. Overall, this approach looks promising for fast adaptation to new scenarios.
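The patch-based approach that lets a fixed-input-size network handle images of different sizes can be sketched as follows. This is a minimal illustration, not the MATLAB implementation described above; `predict_patch` is a hypothetical stand-in for the trained U-Net:

```python
import numpy as np

def segment_patchwise(image, predict_patch, patch=256):
    """Segment an image of arbitrary size with a fixed-input-size network
    by tiling it into patches and stitching the per-patch predictions.

    predict_patch -- hypothetical callable mapping a (patch, patch) array
                     to a label map of the same shape (stands in for the
                     trained U-Net).
    """
    h, w = image.shape
    # Pad so both dimensions become multiples of the patch size.
    ph, pw = (-h) % patch, (-w) % patch
    padded = np.pad(image, ((0, ph), (0, pw)), mode="edge")
    out = np.zeros(padded.shape, dtype=np.int64)
    # Predict each tile independently and write it into the output canvas.
    for y in range(0, padded.shape[0], patch):
        for x in range(0, padded.shape[1], patch):
            out[y:y + patch, x:x + patch] = predict_patch(
                padded[y:y + patch, x:x + patch])
    return out[:h, :w]  # crop the padding away
```

In practice, overlapping tiles with blended borders are often used to hide seams at patch boundaries; the non-overlapping version above keeps the idea minimal.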

Full Text
Peer Reviewed
Deep Learning Quality Control for High-Throughput High-Content Screening Microscopy Images
Garcia Santa Cruz, Beatriz UL; Jarazo, Javier UL; Schwamborn, Jens Christian UL et al

Poster (2019, October 10)


Automation of biological image analysis is essential to boost biomedical research. The study of complex diseases such as neurodegenerative diseases calls for large amounts of data to build models towards precision medicine. Such data acquisition is feasible in the context of high-throughput high-content screening (HTHCS), in which the quality of the results relies on the accuracy of image analysis. Deep learning (DL) yields great performance in image analysis tasks, especially with large amounts of data such as those produced in HTHCS contexts. This strength of DL and HTHCS is also their biggest weakness, since DL solutions are highly sensitive to poor-quality datasets. Hence, accurate quality control (QC) for microscopy HTHCS becomes an essential step towards reliable HTHCS analysis pipelines. Usually, artifacts found on these platforms are the consequence of out-of-focus acquisition and undesirable density variations. Accurate outlier detection is essential both for the training of generic ML solutions (i.e. segmentation or classification) and for the QC of the input data such solutions will predict on. Moreover, during QC of the input dataset, we aim not only to discard unsuitable images but also to report to users on the quality of their dataset, giving them the choice to keep or discard the bad images. To build the QC solution, we employed fluorescent microscopy images of the Rosella biosensor generated on the HTHCS platform, acquiring a total of 15 focal planes ranging from −6z to +7z steps around the two optimal planes. We evaluated 27 known focus measure operators and concluded that they have low sensitivity in noisy conditions. We propose a CNN solution that predicts the focus error as the distance to the optimal plane, outperforming the evaluated focus operators. This QC allows for better results in cell segmentation models based on the U-Net architecture, as well as promising improvements in image classification tasks.
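One classical family among the focus measure operators evaluated above is Laplacian-based: in-focus images have strong edges, so the variance of the Laplacian response is high, while defocus blur suppresses it. A minimal NumPy sketch of that idea (illustrative only; the poster does not specify which of the 27 operators were used):

```python
import numpy as np

def variance_of_laplacian(img):
    """Classical focus measure: variance of the discrete Laplacian.

    A sharp image has strong high-frequency content, so its Laplacian
    response has high variance; blur flattens the response. Uses wrap-around
    neighbours via np.roll to keep the sketch dependency-free.
    """
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()
```

Operators like this degrade in noisy conditions because noise itself is high-frequency and inflates the measure, which is the weakness that motivates the learned, CNN-based focus-error predictor described in the abstract.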

Full Text
Peer Reviewed
The Virtual Metabolic Human database: integrating human and gut microbiome metabolism with nutrition and disease
Noronha, Alberto UL; Modamio Chamarro, Jennifer UL; Jarosz, Yohan UL et al

in Nucleic Acids Research (2018)


A multitude of factors contribute to complex diseases and can be measured with ‘omics’ methods. Databases facilitate data interpretation for underlying mechanisms. Here, we describe the Virtual Metabolic Human (VMH, www.vmh.life) database encapsulating current knowledge of human metabolism within five interlinked resources: ‘Human metabolism’, ‘Gut microbiome’, ‘Disease’, ‘Nutrition’, and ‘ReconMaps’. The VMH captures 5180 unique metabolites, 17 730 unique reactions, 3695 human genes, 255 Mendelian diseases, 818 microbes, 632 685 microbial genes and 8790 food items. The VMH’s unique features are (i) the hosting of the metabolic reconstructions of human and gut microbes amenable to metabolic modeling; (ii) seven human metabolic maps for data visualization; (iii) a nutrition designer; (iv) a user-friendly webpage and application programming interface (API) to access its content; (v) a user feedback option for community engagement; and (vi) the connection of its entities to 57 other web resources. The VMH represents a novel, interdisciplinary database for data interpretation and hypothesis generation for the biomedical community.
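Programmatic access through the API mentioned in feature (iv) might look like the sketch below. The base URL and the `_api/<resource>/?key=value` path layout are assumptions for illustration, not taken from the paper; consult www.vmh.life for the actual API reference:

```python
import json
import urllib.parse
import urllib.request

# Base URL is an assumption; verify against the VMH documentation.
VMH_API = "https://www.vmh.life/_api"

def vmh_url(resource, **params):
    """Build a query URL for a VMH resource (e.g. 'metabolites').

    The path layout here is hypothetical -- it only illustrates how a
    REST-style query against the database might be assembled.
    """
    query = urllib.parse.urlencode(params)
    base = f"{VMH_API}/{resource}/"
    return f"{base}?{query}" if query else base

def fetch_json(url):
    """Fetch and decode a JSON response (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Hypothetical query: look up a metabolite by its abbreviation.
    print(vmh_url("metabolites", abbreviation="glc_D"))
```

Separating URL construction from the network call keeps the query logic testable offline and makes it easy to point the same code at a different base URL if the service layout differs.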
