Publications generated thanks to the UL HPC Platform
Peer Reviewed
Gene family information facilitates variant interpretation and identification of disease-associated genes in neurodevelopmental disorders
Lal, Dennis; May, Patrick UL; Perez-Palma, Eduardo et al

in Genome Medicine (2020), 12(28),

Background: Classifying pathogenicity of missense variants represents a major challenge in clinical practice during the diagnosis of rare and genetically heterogeneous neurodevelopmental disorders (NDDs). While orthologous gene conservation is commonly employed in variant annotation, approximately 80% of known disease-associated genes belong to gene families. The use of gene family information for disease gene discovery and variant interpretation has not yet been investigated on a genome-wide scale. We empirically evaluate whether paralog-conserved or non-conserved sites in human gene families are important in NDDs. Methods: Gene family information was collected from Ensembl. Paralog-conserved sites were defined based on paralog sequence alignments. 10,068 NDD patients and 2,078 controls were statistically evaluated for de novo variant burden in gene families. Results: We demonstrate that disease-associated missense variants are enriched at paralog-conserved sites across all disease groups and inheritance models tested. We developed a gene family de novo enrichment framework that identified 43 exome-wide enriched gene families, including 98 de novo variant-carrying genes in NDD patients, of which 28 represent novel candidate genes for NDD that are brain-expressed and under evolutionary constraint. Conclusion: This study represents the first method to incorporate gene family information into a statistical framework to interpret variant data for NDDs and to discover novel NDD-associated genes.
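The enrichment framework itself is not reproduced in the abstract; as a rough illustration of the kind of test such a framework performs, the sketch below compares an observed de novo count in one gene family against a Poisson expectation derived from a background mutation rate. All numbers except the cohort size are hypothetical.

```python
from scipy.stats import poisson

# Illustrative de novo burden test for a single gene family (not the
# authors' pipeline; the rate and count below are invented for the example).
n_patients = 10068            # NDD cohort size from the abstract
family_rate = 2.4e-5          # assumed per-patient de novo rate for the family
observed = 12                 # assumed number of de novo variants observed

expected = n_patients * family_rate
p_value = poisson.sf(observed - 1, expected)   # one-sided: P(X >= observed)
print(f"expected {expected:.2f}, observed {observed}, p = {p_value:.2e}")
```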

Peer Reviewed
Excess of singleton loss-of-function variants in Parkinson's disease contributes to genetic risk.
Bobbili, Dheeraj Reddy; Banda, Peter UL; Krüger, Rejko UL et al

in Journal of Medical Genetics (2020)

Background Parkinson’s disease (PD) is a neurodegenerative disorder with complex genetic architecture. Besides rare mutations in high-risk genes related to monogenic familial forms of PD, multiple variants associated with sporadic PD were discovered via association studies. Methods We studied the whole-exome sequencing data of 340 PD cases and 146 ethnically matched controls from the Parkinson’s Progression Markers Initiative (PPMI) and performed burden analysis for different rare variant classes. Disease prediction models were built based on clinical, non-clinical and genetic features, including both common and rare variants, using two machine learning methods. Results We observed a significant exome-wide burden of singleton loss-of-function variants (corrected p=0.037). Overall, no exome-wide burden of rare amino-acid-changing variants was detected. Finally, we built a disease prediction model combining singleton loss-of-function variants, a polygenic risk score based on common variants, and family history of PD as features, and reached an area under the curve of 0.703 (95% CI 0.698 to 0.708). By incorporating a rare variant feature, our model improved upon the state-of-the-art classification model for the PPMI dataset, which reached an area under the curve of 0.639 based on common variants alone. Conclusion This study highlights the contribution of singleton loss-of-function variants to the complex genetics of PD and shows that disease risk prediction models combining singleton and common variants can improve models built solely on common variants.
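The abstract does not name the two machine learning methods used; the sketch below shows one plausible way to combine the three named features (singleton loss-of-function count, polygenic risk score, family history) in a logistic regression, on synthetic data sized like the PPMI subset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; the real study uses PPMI whole-exome features.
rng = np.random.default_rng(0)
n = 486  # 340 cases + 146 controls, as in the abstract
y = np.concatenate([np.ones(340), np.zeros(146)])
X = np.column_stack([
    rng.poisson(1.0, n) + 0.3 * y,    # singleton LoF count (synthetic signal)
    rng.normal(0, 1, n) + 0.4 * y,    # polygenic risk score (synthetic signal)
    rng.binomial(1, 0.15 + 0.1 * y),  # family history of PD (synthetic signal)
])

# Logistic regression is an assumption here, not necessarily the paper's model.
model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```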

Optimized Collision Search for STARK-Friendly Hash Challenge Candidates
Udovenko, Aleksei UL

E-print/Working paper (2020)

In this note, we report several solutions to the STARK-Friendly Hash Challenge: a competition with the goal of finding collisions for several hash functions designed specifically for zero-knowledge proofs (ZKP) and multiparty computation (MPC). We managed to find collisions for 3 instances of 91-bit hash functions. The method used is the classic parallel collision search with distinguished points from van Oorschot and Wiener (1994). As this is a generic attack on hash functions, it does not exhibit any particular weakness of the chosen hash functions. The crucial part is to optimize the implementations to make the attack cost realistic, and we describe several arithmetic tricks.
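For readers unfamiliar with the van Oorschot-Wiener method, the sketch below runs a single-threaded version of collision search with distinguished points against a toy 24-bit hash; truncated SHA-256 stands in for the actual STARK-friendly designs, and all parameters are illustrative.

```python
import hashlib
import os

def H(x: bytes) -> bytes:
    """Toy 24-bit hash: truncated SHA-256 (stand-in for the challenge hashes)."""
    return hashlib.sha256(x).digest()[:3]

DIST_BITS = 6  # a point is 'distinguished' if its low 6 bits are zero

def is_distinguished(x: bytes) -> bool:
    return x[-1] & ((1 << DIST_BITS) - 1) == 0

def trail(seed: bytes):
    """Iterate H from seed until a distinguished point; return (end, length)."""
    x, n = seed, 0
    while not is_distinguished(x):
        x, n = H(x), n + 1
    return x, n

def locate_collision(s1, n1, s2, n2):
    """Given two trails ending at the same point, find the colliding inputs."""
    # Advance the longer trail so both are the same distance from the end.
    while n1 > n2:
        s1, n1 = H(s1), n1 - 1
    while n2 > n1:
        s2, n2 = H(s2), n2 - 1
    if s1 == s2:          # one seed lies on the other's trail: no collision
        return None
    while H(s1) != H(s2):  # step both until their images coincide
        s1, s2 = H(s1), H(s2)
    return s1, s2

seen = {}  # distinguished endpoint -> (seed, trail length)
while True:
    seed = os.urandom(3)
    end, length = trail(seed)
    if end in seen and seen[end][0] != seed:
        hit = locate_collision(*seen[end], seed, length)
        if hit:
            a, b = hit
            print(f"collision: H({a.hex()}) == H({b.hex()}) == {H(a).hex()}")
            break
    seen[end] = (seed, length)
```

In the parallel version many workers generate trails independently and only the distinguished points are exchanged, which is what makes the memory and communication costs manageable.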

Markov Chain Monte Carlo and the Application to Geodetic Time Series Analysis
Olivares Pulido, German UL; Teferle, Felix Norman UL; Hunegnaw, Addisu UL

in Montillet, Jean-Philippe; Bos, Machiel (Eds.) Geodetic Time Series Analysis in Earth Sciences (2020)

The time evolution of geophysical phenomena can be characterised by stochastic time series. The stochastic nature of the signal stems from the geophysical phenomena involved and any noise, which may be due to, e.g., unmodelled effects or measurement errors. Until the 1990s, it was usually assumed that white noise could fully characterise this noise. However, this has been demonstrated not to be the case, and this assumption was proven to lead to underestimated uncertainties in the geophysical parameters inferred from geodetic time series. Therefore, in order to fully quantify all the uncertainties as robustly as possible, it is imperative to estimate not only the deterministic but also the stochastic parameters of the time series. In this regard, the Markov Chain Monte Carlo (MCMC) method can provide a sample of the distribution function of all parameters, including those describing the noise, e.g., the spectral index and amplitudes. After presenting the MCMC method and its implementation in our MCMC software, we apply it to synthetic and real time series and perform a cross-evaluation using Maximum Likelihood Estimation (MLE) as implemented in the CATS software. Several examples of how the MCMC method performs as a parameter estimation method for geodetic time series are given in this chapter. These include applications to GPS position time series, superconducting gravity time series and monthly mean sea level (MSL) records, which all show very different stochastic properties. The impact of the estimated parameter uncertainties on subsequently derived products is briefly demonstrated for the case of plate motion models. Finally, the MCMC results for weekly downsampled versions of the benchmark synthetic GNSS time series provided in Chapter 2 are presented separately in an appendix.
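As a minimal illustration of the approach (not the chapter's software, which also estimates coloured-noise parameters such as the spectral index), the following random-walk Metropolis sampler recovers a linear trend and a white-noise amplitude from a synthetic position series.

```python
import numpy as np

# Synthetic 'position time series': linear trend plus white noise only.
rng = np.random.default_rng(1)
t = np.arange(500) / 52.0                 # ~weekly samples, in years
y = 2.0 + 3.5 * t + rng.normal(0, 1.5, t.size)

def log_post(theta):
    """Gaussian log-likelihood with flat priors on offset, trend, log-sigma."""
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (a + b * t)
    return -t.size * log_sigma - 0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling.
theta = np.array([0.0, 0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.1, 0.1, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[5000:])                 # discard burn-in
print("trend estimate: %.2f +/- %.2f (true 3.5)"
      % (chain[:, 1].mean(), chain[:, 1].std()))
```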

Using the Vertical Land Movement estimates from the IGS TIGA combined solution to derive Global Mean Sea Level changes
Bogusz, Janusz; Hunegnaw, Addisu UL; Teferle, Felix Norman UL et al

Scientific Conference (2019, December 13)

Global mean sea level (GMSL) is now widely recognized to have risen between 1 and 2 mm/yr, depending on location, since the 20th century. Prior to the satellite altimetry era, GMSL was primarily estimated from a set of secular tide gauge records relative to coastal benchmarks. Recent GPS (Global Positioning System) measurements have been demonstrated to be a useful tool for directly estimating the Vertical Land Motion (VLM) induced by both long- and short-term geophysical and human-induced processes in a geocentric reference frame. This presentation will provide the results of a combination, performed using the CATREF software, of three independent GPS daily solutions provided by the British Isles continuous GNSS Facility – University of Luxembourg consortium (BLT), the German Research Centre for Geosciences (GFZ) and the University of La Rochelle (ULR) under the auspices of the Tide Gauge Benchmark Monitoring (TIGA) Working Group (WG), which results in a spatially comprehensive map of VLM near or close to tide gauge benchmarks. The combination was performed in accordance with the second reprocessing campaign (repro2) of the IGS (International GNSS Service). Long coastal tide gauge records from the archives maintained at the Permanent Service for Mean Sea Level (PSMSL) were extracted for relative sea level estimates. To cross-compare the sea level rates over the years, we employed observations between 1900 and 2016. Then, the time series were cut and analyzed separately, ceteris paribus, for the period 1960-2016. This analysis was aimed at a cross-comparison of relative sea level trends and their changes over the years. The stochastic part of the tide gauge records was analyzed with Maximum Likelihood Estimation (MLE), assuming several different combinations of noise models, with the Bayesian Information Criterion (BIC) providing a means to identify the preferred one. The relative sea level estimates were corrected for the inverted barometric effect using data from the 20th Century Reanalysis project version V2C, for the effect of wind stress on the surface of the ocean in both zonal and meridional components, and for the Pacific Decadal Oscillation (PDO) and the North Pacific Gyre Oscillation (NPGO) influencing Pacific tide gauge records. The GPS-based velocities were corrected for the Glacial Isostatic Adjustment (GIA) effect using the ICE-6G(VM5a) model with the associated geoid rate, and for post-seismic decays using ITRF2014 estimates. Also, environmental loading models were employed to account for present-day elastic loading in VLM. The Mean Sea Level (MSL) trends from tide gauges and the VLM-corrected MSL trends using the GIA model (TG+GIA) and the TIGA combination (TG+TIGA) were determined. Our final reconstruction of GMSL based on the MSL records from 1900 to 2016, for which the VLM uncertainty is smaller than 0.7 mm/yr, indicates a long-term trend of 1.75 +/- 0.2 mm/yr and is in good agreement with several similar determinations.
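The noise-model selection step mentioned above reduces to a simple computation once each model's maximised log-likelihood is available from the MLE fit; a sketch with invented numbers follows.

```python
import numpy as np

# BIC = k * ln(n) - 2 * ln(L): lower is better. The log-likelihoods below are
# placeholders; in practice they come from the MLE fit of each noise model.
n_obs = 1392  # e.g., 116 years of monthly MSL values
candidates = {
    # model name: (number of estimated parameters k, maximised log-likelihood)
    "white noise": (3, -2115.0),
    "power-law noise": (4, -2041.5),
    "power-law + white noise": (5, -2039.8),
}

bic = {name: k * np.log(n_obs) - 2 * lnL for name, (k, lnL) in candidates.items()}
for name, value in sorted(bic.items(), key=lambda kv: kv[1]):
    print(f"{name:25s} BIC = {value:9.1f}")
print("preferred model:", min(bic, key=bic.get))
```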

Consolidating Observation of Land and Sea Level Changes around South Georgia Island
Teferle, Felix Norman UL; Hunegnaw, Addisu UL; Hibbert, Angela et al

Poster (2019, December 13)

With its mid-ocean location in the Southern Atlantic Ocean, South Georgia Island is in a key position for the oceanic and geodetic global monitoring networks. Since 2013 the tide gauge at King Edward Point (KEP), with GLOSS ID 187, has been monitored using a GNSS station nearby on Brown Mountain. By accurately geo-referencing the tide gauge and monitoring any vertical land movements, a continuous record of its datum within the Permanent Service for Mean Sea Level (PSMSL) can be established, which in turn makes the recorded and averaged sea levels useful for long-term studies and satellite altimetry calibrations. In 2014 another GNSS station was installed at KEP after local subsidence was suspected, and later on three additional GNSS stations came into service at the periphery of the main island, making it possible to monitor uplift/subsidence wider afield. Furthermore, together with four precise levelling campaigns of the KEP benchmark network (in 2013, 2014 and two in 2017), it has also been possible to investigate the very local character of the vertical motions near KEP, i.e. the stability of the jetty upon which the tide gauge is mounted. In this study, we will present the results from the GNSS and precise levelling measurements, and will discuss their impact on the sea level record from the KEP tide gauge and nearby satellite altimetry sea surface heights. This study comes at a timely moment, as during the Austral Summer 2019/2020 the jetty will be stabilized and enlarged, and consequently the current tide gauge will be replaced by a new one. Our measurements show that uplift is observed all over South Georgia Island, while the area at KEP, and particularly the jetty with the tide gauge, is subsiding relative to the rest of the island. In contrast, results for the tide gauge record show a lower magnitude of observed sea level rise than expected from nearby satellite altimetry. We will revisit all geodetic and oceanic observations in an attempt to improve the agreement between these measurements and to summarize the status before the work at the jetty begins.

Peer Reviewed
Automatic Software Tuning of Parallel Programs for Energy-Aware Executions
Varrette, Sébastien UL; Pinel, Frédéric UL; Kieffer, Emmanuel UL et al

in Proc. of 13th Intl. Conf. on Parallel Processing and Applied Mathematics (PPAM 2019) (2019, December)

For large-scale systems such as data centers, energy efficiency has proven to be key for reducing capital and operational expenses and environmental impact. Power drainage of a system is closely related to the type and characteristics of the workload that the device is running. For this reason, this paper presents an automatic software tuning method for parallel program generation, able to adapt to and exploit the hardware features available on a target computing system, such as an HPC facility or a cloud system, better than traditional compiler infrastructures. We propose a search-based approach combining both exact methods and approximated heuristics that evolves programs in order to find optimized configurations, relying on an ever-increasing number of tunable knobs, i.e., code transformation and execution options (such as the number of OpenMP threads and/or the CPU frequency settings). The main objective is to outperform the configurations generated by traditional compiling infrastructures for selected KPIs, i.e., performance, energy and power usage (for both the CPU and DRAM), as well as the runtime. First experimental results tied to the local optimization phase of the proposed framework are encouraging, demonstrating between 8% and 41% improvement for all considered metrics on a reference benchmarking application (i.e., Linpack). This brings novel perspectives for the global optimization step currently under investigation within the presented framework, with the ambition to pave the way toward automatic tuning of energy-aware applications beyond the performance of current state-of-the-art compiler infrastructures.
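A heavily simplified sketch of the execution-option part of such a search is given below: `./benchmark` is a placeholder for the application under tuning, only wall-clock time is measured (the framework also considers energy and power), and CPU frequency control is omitted since it normally requires privileged access.

```python
import os
import subprocess
import time

# Exhaustive sweep over one tunable knob (the OpenMP thread count); the
# paper's framework searches a much larger space of code transformations
# and execution options, and optimizes energy/power as well as runtime.
thread_counts = [1, 2, 4, 8, 16]

best = None
for n in thread_counts:
    env = dict(os.environ, OMP_NUM_THREADS=str(n))
    start = time.perf_counter()
    subprocess.run(["./benchmark"], env=env, check=True)  # placeholder binary
    elapsed = time.perf_counter() - start
    print(f"{n:2d} threads: {elapsed:.2f} s")
    if best is None or elapsed < best[0]:
        best = (elapsed, n)

print(f"best configuration: {best[1]} threads ({best[0]:.2f} s)")
```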

Peer Reviewed
Ion-induced interactions in a Tomonaga-Luttinger liquid
Michelsen, Andreas Nicolai Bock UL; Valiente, Manuel; Zinner, Nikolaj Thomas et al

in Physical Review B (2019), 100(20),

We investigate the physics of a Tomonaga-Luttinger liquid of spin-polarized fermions superimposed on an ion chain. This compound system features (attractive) long-range interspecies interactions. By means of density matrix renormalization group techniques we compute the Tomonaga-Luttinger-liquid parameter and speed of sound as a function of the relative atom/ion density and the two quantum defect parameters, namely, the even and odd short-range phases which characterize the short-range part of the atom-ion polarization potential. The presence of ions is found to allow critical tuning of the atom-atom interaction, and the properties of the system are found to depend significantly on the short-range phases due to the atom-ion interaction. These latter dependencies can be controlled, for instance, by manipulating the ions' internal state. This allows modification of the static properties of the quantum liquid via external driving of the ionic impurities.

Peer Reviewed
Kinetic Control of Parallel versus Antiparallel Amyloid Aggregation via Shape of the Growing Aggregate
Hakami Zanjani, Ali Asghar UL; Reynolds, Nicholas; Zhang, Afang et al

in Nature Scientific Reports (2019)

By combining atomistic and higher-level modelling with solution X-ray diffraction we analyse self-assembly pathways for the IFQINS hexapeptide, a bio-relevant amyloid former derived from human lysozyme. We verify that (at least) two metastable polymorphic structures exist for this system which are substantially different at the atomistic scale, and compare the conditions under which they are kinetically accessible. We further examine the higher-level polymorphism of these systems at the nanometre to micrometre scales, which is manifested in kinetic differences and in shape differences between structures, in addition to or instead of differences in the small-scale contact topology. Any future design of structure-based inhibitors of the IFQINS steric zipper, or of close homologues such as TFQINS which are likely to have similar structures, should take account of this polymorphic assembly.

Peer Reviewed
Enrichment of damaging missense variants in genes related with axonal guidance signalling in sporadic Meniere's disease
Gallego-Martinez, Alvaro; Requena, Teresa; Roman-Naranjo, Pablo et al

in Journal of Medical Genetics (2019)

INTRODUCTION: Meniere's disease (MD) is a rare inner ear disorder with a significant genetic contribution, defined by a core phenotype: episodic vertigo, sensorineural hearing loss and tinnitus. It has mostly been described in sporadic cases, familial cases being around 10% of the observed individuals. It is associated with an accumulation of endolymph in the inner ear, but the molecular underpinnings remain largely unknown. The main molecular pathways showing a higher number of differentially expressed genes in the supporting cells of the inner ear are related to cochlea-vestibular innervation, cell adhesion and leucocyte extravasation. In this study, our objective is to find a burden of rare variants in genes that interact with the main signalling pathways in supporting cells of the inner ear in patients with sporadic MD. METHODS: We designed a targeted-sequencing panel including genes related to the main molecular pathways in supporting cells and sequenced 860 Spanish patients with sporadic MD. Variants with minor allele frequencies <0.1 in the gene panel were compared with three independent reference datasets. Variants were classified as loss of function, missense and synonymous. Missense variants with a combined annotation-dependent depletion (CADD) score of >20 were classified as damaging missense variants. RESULTS: We observed a significant burden of damaging missense variants in a few key genes, including the NTN4 gene, associated with axon guidance signalling pathways in patients with sporadic MD. We also identified active subnetworks with an enrichment of rare variants in sporadic MD. CONCLUSION: The burden of missense variants in the NTN4 gene suggests that axonal guidance signalling could be a novel pathway involved in sporadic MD.
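The variant classification rules stated above can be summarised in a few lines; the records here are invented, and in practice the input would come from an annotated VCF.

```python
# Sketch of the classification rules from the abstract: rare variants are
# kept (MAF < 0.1, as stated) and missense variants with a CADD score > 20
# are flagged as damaging. The example records are made up.
variants = [
    {"gene": "NTN4",  "consequence": "missense",   "maf": 0.0004, "cadd": 27.1},
    {"gene": "NTN4",  "consequence": "synonymous", "maf": 0.0100, "cadd": 3.2},
    {"gene": "OTHER", "consequence": "missense",   "maf": 0.2000, "cadd": 25.0},
]

def classify(v):
    if v["consequence"] == "missense" and v["cadd"] > 20:
        return "damaging missense"
    return v["consequence"]

for v in (v for v in variants if v["maf"] < 0.1):   # keep rare variants only
    print(v["gene"], "->", classify(v))
```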

High Performance Parallel Coupling of OpenFOAM+XDEM
Besseron, Xavier UL; Pozzetti, Gabriele; Rousset, Alban UL et al

Presentation (2019, June 21)

Short Introduction to the Roofline Model
Besseron, Xavier UL

Presentation (2019, June 20)

Peer Reviewed
Displacement-based polytopal elements: a strain smoothing and scaled boundary approach
Bordas, Stéphane UL; Natarajan, Sundararajan

Scientific Conference (2019, May 03)

Peer Reviewed
Amazon Elastic Compute Cloud (EC2) versus In-House HPC Platform: A Cost Analysis
Emeras, Joseph; Varrette, Sébastien UL; Plugaru, Valentin UL et al

in IEEE Transactions on Cloud Computing (2019), 7(2), 456-468

While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, we observe a wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that CC infrastructures will eventually replace in-house facilities. If we exclude the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that, most probably, the instances offered by Cloud providers are competitive from a cost point of view. In this article, we set out to confirm (or refute) this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used for comparison with the induced cost that would have been required to run the same platform (and the same workload) over a competitive Cloud IaaS offer. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the TCO analysis of the HPC facility, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking on the local cluster and on the Cloud instances, we propose an update of the former theoretical price model to reflect the real system performance. The results obtained advocate in general for the acquisition of an in-house HPC facility, which balances the common intuition in favor of Cloud Computing platforms, even when they are provided by the reference Cloud provider worldwide.
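The core of the hourly comparison boils down to amortising the TCO over the core-hours the platform actually delivers; a back-of-the-envelope sketch with invented figures (not the paper's numbers) follows.

```python
# Back-of-the-envelope version of the hourly price comparison described
# above. All figures are invented placeholders, not the paper's data.
tco_total = 3_200_000.0   # assumed in-house TCO over the platform lifetime (EUR)
lifetime_years = 6
n_cores = 2000
utilisation = 0.80        # assumed fraction of core-hours actually consumed

core_hours = n_cores * lifetime_years * 365 * 24 * utilisation
inhouse_per_core_hour = tco_total / core_hours

ec2_per_core_hour = 0.05  # assumed per-core on-demand price of a comparable instance

print(f"in-house: {inhouse_per_core_hour:.4f} EUR per core-hour")
print(f"EC2:      {ec2_per_core_hour:.4f} USD per core-hour (assumed)")
```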

Peer Reviewed
Biallelic VARS variants cause developmental encephalopathy with microcephaly that is recapitulated in vars knockout zebrafish
Siekierska, Aleksandra; Stamberger, Hannah; Deconinck, Tine et al

in Nature Communications (2019), 10(1), 708

Aminoacyl tRNA synthetases (ARSs) link specific amino acids with their cognate transfer RNAs in a critical early step of protein translation. Mutations in ARSs have emerged as a cause of recessive, often complex neurological disease traits. Here we report an allelic series consisting of seven novel and two previously reported biallelic variants in valyl-tRNA synthetase (VARS) in ten patients with a developmental encephalopathy with microcephaly, often associated with early-onset epilepsy. In silico, in vitro, and yeast complementation assays demonstrate that the underlying pathomechanism of these mutations is most likely a loss of protein function. Zebrafish modeling accurately recapitulated some of the key neurological disease traits. These results provide both genetic and biological insights into neurodevelopmental disease and pave the way for further in-depth research on ARS-related recessive disorders and precision therapies.

Peer Reviewed
Security, reliability and regulation compliance in Ultrascale Computing System
Bouvry, Pascal UL; Varrette, Sébastien UL; Wasim, Muhammad Umer UL et al

in Zomaya, A. Y.; Carretero, J.; Jeannot, E. (Eds.) Ultrascale Computing Systems (2019)

Ultrascale Computing Systems (UCSs) are envisioned as large-scale complex systems joining parallel and distributed computing systems that will be two to three orders of magnitude larger than today's systems (considering the number of Central Processing Unit (CPU) cores). It is very challenging to find sustainable solutions for UCSs due to their scale and the wide range of possible applications and involved technologies. For example, we need to deal with heterogeneity and cross-fertilization among HPC, large-scale distributed systems, and big data management. One of the challenges regarding sustainable UCSs is resilience. Another one, which has attracted less interest in the literature but becomes more and more crucial with the expected convergence with the Cloud computing paradigm, is the notion of regulation in such systems, to assess the Quality of Service (QoS) and Service Level Agreements (SLAs) proposed for the use of these platforms. This chapter covers both aspects through the reproduction of two articles: [1] and [2].

Peer Reviewed
Energy aware ultrascale systems
Oleksiak, Ariel; Lefèvre, Laurent; Alonso, Pedro et al

in Carretero, J.; Jeannot, E.; Zomaya, A.Y. (Eds.) Ultrascale Computing Systems (2019)

Energy consumption is one of the main limiting factors for the design of ultrascale infrastructures. Multi-level hardware and software optimizations must be designed and explored in order to reduce the energy consumption of this large-scale equipment. This chapter addresses the issue of the energy efficiency of ultrascale systems in relation to other quality metrics. The goal of this chapter is to explore the design of metrics, analyses, frameworks and tools for taking energy awareness and energy efficiency to the next stage. Significant emphasis is placed on the idea of “energy complexity,” reflecting the synergies between energy efficiency and quality of service, resilience and performance, by studying computation power, communication/data-sharing power, data access power, algorithm energy consumption, etc.

Peer Reviewed
A Full-Cost Model for Estimating the Energy Consumption of Computing Infrastructures
Orgerie, Anne-Cecile; Varrette, Sébastien UL

in Zomaya, A. Y; Carretero, J.; Jeannot, E. (Eds.) Ultrascale Computing Systems (2019)

Since its advent in the mid-2000s, the Cloud Computing (CC) paradigm has been increasingly advertised as a price-effective solution to many IT problems. This seems reasonable if we exclude the pure performance point of view, as many studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when subjected to a High Performance Computing (HPC) workload. When this is the case, traditional HPC and Ultrascale computing systems are required, and then comes the question of real cost-effectiveness, especially when comparing with instances offered by Cloud providers. In this section, inspired by the work proposed in [1], we propose a Total Cost of Ownership (TCO) analysis of a medium-size in-house academic HPC facility (in particular the one operated at the University of Luxembourg since 2007, or within the Grid’5000 project [2]), and compare it with the investment that would have been required to run the same platform (and the same workload) over a competitive Cloud IaaS offer.

Peer Reviewed
How Evolutionary Algorithms and Information Hiding deceive machines and humans for image recognition: A research program
Bernard, Nicolas UL; Leprévost, Franck UL

in Theeramunkong, Thanaruk; Bouvry, Pascal; Srichaikul, Piyawut (Eds.) Proceedings of the OLA'2019 International Conference on Optimization and Learning (Bangkok, Thailand, Jan 29-31, 2019) (2019)

Deep Neural Networks are used for a wide range of critical applications, notably image recognition. The ability to deceive their recognition capabilities is an active research domain, since successful deceptions may have disastrous consequences. Still, humans sometimes detect mistakes made by machines when they classify images. One can conceive a system able to solicit humans in case of doubt, namely when humans and machines may disagree. Using Information Hiding techniques, we describe a strategy to construct evolutionary algorithms able to fool both neural networks and humans for image recognition. Although this research is still exploratory, we already describe a concrete fitness function for a specific scenario. Additional scenarios and further research directions are provided.
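The concrete fitness function is specific to the paper's scenario and is not reproduced here; the toy (1+1) hill-climbing loop below only conveys the general shape of such an attack, with a placeholder classifier and a crude distortion penalty standing in for the human-perception term.

```python
import numpy as np

# Toy (1+1) evolutionary loop: mutate an image to lower a classifier's
# confidence in the true label while penalising visible distortion.
# 'classify' is a stand-in for a real model, not the paper's setup.
rng = np.random.default_rng(42)

def classify(image: np.ndarray) -> float:
    """Placeholder classifier: returns confidence in the true label."""
    return float(np.clip(1.0 - np.abs(image - 0.5).mean(), 0.0, 1.0))

def fitness(image, original):
    # Lower is better: low confidence fools the machine, low distortion
    # keeps the change inconspicuous to a human observer.
    distortion = np.abs(image - original).mean()
    return classify(image) + 0.5 * distortion

original = rng.random((8, 8))
candidate = original.copy()
for _ in range(2000):
    mutant = np.clip(candidate + rng.normal(0, 0.02, candidate.shape), 0, 1)
    if fitness(mutant, original) < fitness(candidate, original):
        candidate = mutant

print(f"confidence dropped from {classify(original):.2f} to "
      f"{classify(candidate):.2f}, mean distortion "
      f"{np.abs(candidate - original).mean():.3f}")
```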
