Last 7 days
Classification of red blood cell shapes in flow using outlier tolerant machine learning
Kihm, A.; Kaestner, L.; Wagner, Christian UL et al

in PLoS Computational Biology (2018)

Antimargination of microparticles and platelets in the vicinity of branching vessels
Bächer, C.; Kihm, A.; Schrack, L. et al

in Biophysical Journal (2018)

Mining Fix Patterns for FindBugs Violations
Liu, Kui UL; Kim, Dongsun; Bissyande, Tegawendé François D Assise UL et al

in IEEE Transactions on Software Engineering (2018)

Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may become acclimated to violation reports from these tools, causing concrete and severe bugs to be overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that violations that are recurrently fixed are likely to be true positives, and that an automated approach can learn to repair similar unseen violations. However, there is a lack of a systematic way to investigate the distributions of detected and fixed violations in the wild, which could provide insights into prioritizing violations for developers, and of an effective way to mine code and fix patterns, which could help developers easily understand the causes of violations and how to fix them. In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in Defects4J, a major benchmark for software testing and automated repair.
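
The core of the mining step is to embed violation code and regroup similar instances by clustering. Below is a minimal sketch of that idea, substituting a simple token-count embedding for the paper's CNN-learned features; the snippets, cluster count, and parameters are illustrative only.

```python
# Sketch: group similar violation snippets by clustering token counts.
# The paper learns features with a CNN; a bag-of-tokens embedding
# stands in for those features here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

violations = [
    'if (x != null) { x.close(); }',
    'if (y != null) { y.close(); }',
    'return s == "ok";',
]
X = CountVectorizer(token_pattern=r"\S+").fit_transform(violations)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two similar null-check snippets should share a cluster
```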

Deep neural network with high-order neuron for the prediction of foamed concrete strength
Nguyen, Tuan; Kashani, Alireza; Ngo, Tuan et al

in Computer-Aided Civil and Infrastructure Engineering (2018)

The article presents a deep neural network model for the prediction of the compressive strength of foamed concrete. A new high-order neuron was developed for the deep neural network model to improve its performance. Moreover, the cross-entropy cost function and rectified linear unit activation function were employed to enhance the performance of the model. The model was then applied to predict the compressive strength of foamed concrete on a given data set, and the obtained results were compared with other machine learning methods, including a conventional artificial neural network (C-ANN) and a second-order artificial neural network (SO-ANN). To further validate the proposed model, a new data set from the laboratory and a given data set of high-performance concrete were used to obtain a higher degree of confidence in the prediction. It is shown that the proposed model obtained better predictions than the other methods. In contrast to C-ANN and SO-ANN, the proposed model genuinely improves its performance when trained as a deep neural network with multiple hidden layers. A sensitivity analysis was conducted to investigate the effects of the input variables on the compressive strength. The results indicated that the compressive strength of foamed concrete is most strongly affected by density, followed by the water-to-cement and sand-to-cement ratios. By providing a reliable prediction tool, the proposed model can aid researchers and engineers in mixture design optimization of foamed concrete.
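
As a hedged numerical sketch of what a second-order ("high-order") neuron computes: the pre-activation adds pairwise input products to the usual weighted sum before the ReLU. The weights, bias, and inputs below are illustrative placeholders, not values from the paper.

```python
# Sketch: a second-order neuron with ReLU activation.
import numpy as np

def high_order_neuron(x, w1, W2, b):
    # linear term + quadratic cross terms, then rectified linear unit
    z = w1 @ x + x @ W2 @ x + b
    return max(z, 0.0)

rng = np.random.default_rng(0)
x = np.array([800.0, 0.5, 1.0])   # e.g. density, w/c ratio, s/c ratio
w1 = rng.normal(size=3)
W2 = rng.normal(size=(3, 3))
print(high_order_neuron(x, w1, W2, b=0.1))
```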

Numerical–experimental observation of shape bistability of red blood cells flowing in a microchannel
Guckenberger, A.; Kihm, A.; John, T. et al

in European Physical Journal E. Soft Matter (2018)

Towards Purposeful Enterprise Modeling for Enterprise Analysis
de Kinderen, Sybren; Ma, Qin UL

in 2018 International Conference on Information Management & Management Science (2018)

Decomposing Models through Dependency Graphs
Ma, Qin UL; Kelsen, Pierre UL

in 12th International Symposium on Theoretical Aspects of Software Engineering (2018)

Modeling in Support of Multi-Perspective Valuation of Smart Grid Initiatives
Kaczmarek-Heß, Monika; de Kinderen, Sybren; Ma, Qin UL et al

in IEEE 12th International Conference on Research Challenges in Information Science (2018)

Enabling Value Co-Creation in Customer Journeys with VIVA
Razo-Zapata, Iván; Chew, Eng; Ma, Qin UL et al

in Joint International Conference of Service Science and Innovation and Serviceology (2018)

Krichever-Novikov type algebras. Definitions and Results
Schlichenmaier, Martin UL

E-print/Working paper (2018)

How does it feel to be a Third Country - Considerations on Brexit for Financial Services
Zetzsche, Dirk Andreas UL; Lehmann, M.

in Complexity's Embrace: The International Law Implications of Brexit (2018)

Personalized risk prediction of postoperative cognitive impairment - rationale for the EU-funded BioCog project.
Winterer, G.; Androsova, Ganna UL; Bender, O. et al

in European Psychiatry: The Journal of the Association of European Psychiatrists (2018), 50

Postoperative cognitive impairment is among the most common medical complications associated with surgical interventions, particularly in elderly patients. In our aging society, there is an urgent medical need for preoperative individual risk prediction to allow more accurate cost-benefit decisions prior to elective surgeries. So far, risk prediction has been based mainly on clinical parameters. However, these parameters only give a rough estimate of the individual risk. At present, there are no molecular or neuroimaging biomarkers available to improve risk prediction, and little is known about the etiology and pathophysiology of this clinical condition. In this short review, we summarize the current state of knowledge and briefly present the recently started BioCog project (Biomarker Development for Postoperative Cognitive Impairment in the Elderly), which is funded by the European Union. The goal of this research and development (R&D) project, which involves academic and industry partners throughout Europe, is to deliver a multivariate algorithm based on clinical assessments as well as molecular and neuroimaging biomarkers to overcome the currently unsatisfying situation.

Dextran adsorption onto red blood cells revisited: single cell quantification by laser tweezers combined with microfluidics
Lee, K.; Shirshin, E.; Rovnyagina, N. et al

in Biomedical Optics Express (2018)

Entanglement of Approximate Quantum Strategies in XOR Games
Ostrev, Dimiter UL; Vidick, Thomas

in Quantum Information and Computation (2018), 18(7&8), 0617-0631

We characterize the amount of entanglement that is sufficient to play any XOR game near-optimally. We show that for any XOR game $G$ and $\varepsilon>0$ there is an $\varepsilon$-optimal strategy for $G$ using $\lceil \varepsilon^{-1} \rceil$ ebits of entanglement, irrespective of the number of questions in the game. By considering the family of XOR games CHSH($n$) introduced by Slofstra (Jour. Math. Phys. 2011), we show that this bound is nearly tight: for any $\varepsilon>0$ there is an $n = \Theta(\varepsilon^{-1/5})$ such that $\Omega(\varepsilon^{-1/5})$ ebits are required for any strategy achieving bias that is at least a multiplicative factor $(1-\varepsilon)$ from optimal in CHSH($n$).
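
To make the gap between the upper and lower bounds concrete, here is a worked instance at a fixed accuracy target; this illustration is mine, with the numbers following directly from the stated formulas.

```latex
% Worked instance at \varepsilon = 10^{-2}: the upper bound gives
% \lceil \varepsilon^{-1} \rceil = 100 ebits sufficient for any XOR game,
% while CHSH(n) with n = \Theta(\varepsilon^{-1/5}) needs
% \Omega(10^{2/5}) \approx \Omega(2.5) ebits.
\[
  \varepsilon = 10^{-2}:\qquad
  \underbrace{\bigl\lceil \varepsilon^{-1} \bigr\rceil = 100}_{\text{ebits sufficient}}
  \quad\text{vs.}\quad
  \underbrace{\Omega\!\bigl(\varepsilon^{-1/5}\bigr) \approx \Omega(2.5)}_{\text{ebits necessary, CHSH}(n)}
\]
```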

Expanding the use of spectral libraries in proteomics.
Deutsch, Eric W.; Perez-Riverol, Yasset; Chalkley, Robert J. et al

in Journal of Proteome Research (2018)

The 2017 Dagstuhl Seminar on Computational Proteomics provided an opportunity for a broad discussion on the current state and future directions of the generation and use of peptide tandem mass spectrometry spectral libraries. Their use in proteomics is growing slowly, but there are multiple challenges in the field that must be addressed to further increase the adoption of spectral libraries and related techniques. The primary bottlenecks are the paucity of high-quality and comprehensive libraries and the general difficulty of adopting spectral library searching into existing workflows. There are several existing spectral library formats, but none captures a satisfactory level of metadata; therefore, a logical next improvement is to design a more advanced, Proteomics Standards Initiative-approved spectral library format that can encode all of the desired metadata. The group discussed a series of metadata requirements organized into three designations of completeness or quality, tentatively dubbed bronze, silver, and gold. The metadata can be organized at four different levels of granularity: at the collection (library) level, at the individual entry (peptide ion) level, at the peak (fragment ion) level, and at the peak annotation level. Strategies for encoding mass modifications in a consistent manner and the requirement for encoding high-quality, commonly seen but as-yet-unidentified spectra were discussed. The group also discussed related topics, including strategies for comparing two spectra, techniques for generating representative spectra for a library, approaches for selecting optimal signature ions for targeted workflows, and issues surrounding the merging of two or more libraries into one. We present here a review of this field and the challenges that the community must address in order to accelerate the adoption of spectral libraries in routine analysis of proteomics datasets.
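
As a hedged illustration of the four granularity levels, here is a hypothetical spectral-library entry; every field name is invented for the example and is not a proposed PSI standard.

```python
# Sketch: one library entry with metadata at the four levels discussed
# (library, entry, peak, peak annotation). Field names are illustrative.
entry = {
    "library": {"name": "ExampleLib", "quality_tier": "silver"},
    "entry": {"peptide": "PEPTIDEK", "precursor_mz": 678.34, "charge": 2},
    "peaks": [
        {"mz": 175.119, "intensity": 1200.0,
         "annotation": {"ion": "y1", "delta_ppm": 1.8}},
        {"mz": 305.17, "intensity": 800.0,
         "annotation": {"ion": "b3", "delta_ppm": 2.3}},
    ],
}
print(entry["entry"]["peptide"], len(entry["peaks"]))
```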

Exploring the Potential of a Global Emerging Contaminant Early Warning Network through the Use of Retrospective Suspect Screening with High-Resolution Mass Spectrometry.
Alygizakis, Nikiforos A.; Samanipour, Saer; Hollender, Juliane et al

in Environmental Science & Technology (2018), 52(9), 5135-5144

A key challenge in the environmental and exposure sciences is to establish experimental evidence of the role of chemical exposure in human and environmental systems. High-resolution and accurate tandem mass spectrometry (HRMS) is increasingly being used for the analysis of environmental samples. One lauded benefit of HRMS is the possibility of retrospectively processing data for (previously omitted) compounds, which has led to the archiving of HRMS data. Archived HRMS data afford the possibility of exploiting historical data to rapidly and effectively establish the temporal and spatial occurrence of newly identified contaminants through retrospective suspect screening. We propose to establish a global emerging contaminant early warning network to rapidly assess the spatial and temporal distribution of contaminants of emerging concern in environmental samples by performing retrospective analysis on HRMS data. The effectiveness of such a network is demonstrated through a pilot study, in which eight reference laboratories with available archived HRMS data retrospectively screened data acquired from aqueous environmental samples collected in 14 countries on 3 different continents. The widespread spatial occurrence of several surfactants (e.g., polyethylene glycols (PEGs) and C12AEO-PEGs), transformation products of selected drugs (e.g., gabapentin-lactam, metoprolol-acid, carbamazepine-10-hydroxy, omeprazole-4-hydroxy-sulfide, and 2-benzothiazole-sulfonic-acid), and industrial chemicals (3-nitrobenzenesulfonate and bisphenol-S) was revealed. Obtaining identifications of increased reliability through retrospective suspect screening is challenging, and recommendations for dealing with issues such as broad chromatographic peaks, data acquisition, and sensitivity are provided.
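
At its core, retrospective suspect screening matches the exact masses of newly flagged suspects against archived peak lists within a tight mass tolerance. A minimal sketch of that matching step follows; the archived peak values and the 5 ppm tolerance are illustrative.

```python
# Sketch: match suspect exact masses against an archived HRMS peak list.
def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

archived_peaks = [171.0059, 249.0229, 283.2645]      # measured m/z (toy data)
suspects = {"3-nitrobenzenesulfonate": 201.9816,
            "bisphenol-S": 249.0227}                 # approx. [M-H]- masses

for name, mz in suspects.items():
    hits = [p for p in archived_peaks if abs(ppm_error(p, mz)) <= 5.0]
    print(name, "->", hits or "no match in archive")
```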

Mind the Gap: Mapping Mass Spectral Databases in Genome-Scale Metabolic Networks Reveals Poorly Covered Areas.
Frainay, Clement; Schymanski, Emma UL; Neumann, Steffen et al

in Metabolites (2018), 8(3)

The use of mass spectrometry-based metabolomics to study human, plant and microbial biochemistry and their interactions with the environment largely depends on the ability to annotate metabolite structures by matching mass spectral features of the measured metabolites to curated spectra of reference standards. While reference databases for metabolomics now provide information for hundreds of thousands of compounds, barely 5% of these known small molecules have experimental data from pure standards. Remarkably, it is still unknown how well existing mass spectral libraries cover the biochemical landscape of prokaryotic and eukaryotic organisms. To address this issue, we have investigated the coverage of 38 genome-scale metabolic networks by public and commercial mass spectral databases, and found that on average only 40% of nodes in metabolic networks could be mapped by mass spectral information from standards. Next, we deciphered computationally which parts of the human metabolic network are poorly covered by mass spectral libraries, revealing gaps in the eicosanoids, vitamins and bile acid metabolism. Finally, our network topology analysis based on the betweenness centrality of metabolites revealed the top 20 most important metabolites that, if added to MS databases, may facilitate human metabolome characterization in the future.
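
The ranking step rests on betweenness centrality: metabolites that lie on many shortest paths through the network. A minimal sketch with a toy graph follows; the node names are illustrative stand-ins for a genome-scale network.

```python
# Sketch: rank metabolites by betweenness centrality in a toy network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("glucose", "G6P"), ("G6P", "F6P"), ("F6P", "pyruvate"),
    ("pyruvate", "acetyl-CoA"), ("acetyl-CoA", "citrate"),
    ("G6P", "6PG"),
])
centrality = nx.betweenness_centrality(G)
top = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top)  # the most "between" metabolites first
```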

Performance of combined fragmentation and retention prediction for the identification of organic micropollutants by LC-HRMS.
Hu, Meng; Muller, Erik; Schymanski, Emma UL et al

in Analytical and Bioanalytical Chemistry (2018), 410(7), 1931-1941

In nontarget screening, structure elucidation of small molecules from high-resolution mass spectrometry (HRMS) data is challenging, particularly the selection of the most likely candidate structure among the many retrieved from compound databases. Several fragmentation and retention prediction methods have been developed to improve this candidate selection. In order to evaluate their performance, we compared two in silico fragmenters (MetFrag and CFM-ID) and two retention time prediction models (based on the chromatographic hydrophobicity index (CHI) and on log D). A set of 78 known organic micropollutants was analyzed by liquid chromatography coupled to an LTQ Orbitrap HRMS with electrospray ionization (ESI) in positive and negative mode, using two fragmentation techniques with different collision energies. Both fragmenters (MetFrag and CFM-ID) performed well for most compounds, on average ranking the correct candidate structure within the top 25% for ESI+ mode and within the top 22 to 37% for ESI- mode. The rank of the correct candidate structure slightly improved when MetFrag and CFM-ID were combined. For unknown compounds detected in both ESI+ and ESI-, positive mode mass spectra were generally better for further structure elucidation. Both retention prediction models performed reasonably well for more hydrophobic compounds but not for early-eluting hydrophilic substances. The log D prediction showed a better accuracy than the CHI model. Although the two fragmentation prediction methods are more diagnostic and sensitive for candidate selection, including retention prediction by calculating a consensus score with optimized weighting can improve the ranking of correct candidates compared to the individual methods. Graphical abstract: Consensus workflow for combining fragmentation and retention prediction in LC-HRMS-based micropollutant identification.
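
The consensus idea is a weighted combination of normalized fragmenter and retention scores per candidate. A minimal sketch follows; the weights and scores are illustrative placeholders, not the paper's optimized values.

```python
# Sketch: rank candidates by a weighted consensus of fragmentation and
# retention scores (all normalized to [0, 1]).
def consensus(metfrag, cfmid, rt, w=(0.4, 0.4, 0.2)):
    return w[0] * metfrag + w[1] * cfmid + w[2] * rt

candidates = {
    "candidate_A": (0.9, 0.8, 0.7),
    "candidate_B": (0.6, 0.9, 0.2),
}
ranked = sorted(candidates, key=lambda c: consensus(*candidates[c]),
                reverse=True)
print(ranked)  # best-scoring candidate first
```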
