References of "Doctoral thesis"
Spatial adaptive settlement systems in archaeology. Modelling long-term settlement formation from spatial micro interactions
Sikk, Kaarel UL

Doctoral thesis (2023)


Despite a research history spanning more than a century, settlement patterns still hold promise to contribute to theories of large-scale processes in human history. They have mostly been treated as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of settlement evolution, gaps remain in this knowledge. Currently, no theoretical framework has been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system comprising spatial features, information flows, and decision-making population units (agents), with cross-scale feedback loops forming between the location choices of individuals and a space modified by their aggregated choices. The model aims both at new ways of interpreting archaeological locational data and at closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to conceptualising the general model based on existing literature and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations. The following chapters explore both empirical and simulated settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore the case study of Stone Age settlement in Estonia, comparing the residential location choice principles of different periods.
In the second chapter, the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems are compared spatially using inductive models. The results indicate a large difference in their perception of landscape suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance was compared and the difference discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter, the central place foraging model was embedded in the proposed model and resource depletion, as an environmental modification mechanism, was explored. The study excluded the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter. The purpose of the last chapter is to disentangle the complex relations between social and human-environment interactions. The study exposed the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects. The model indicates that social interactions between individuals lead to the formation of a group agency that is determined by the environment, even if individual cognitions consider the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, thereby providing a potential pathway to population aggregation.
These empirical and theoretical results demonstrate the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical findings, required the conceptual model to provide a framework of interpretation.
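As an illustration of the kind of cross-scale feedback loop the model describes (individual location choices modifying the space, and the modified space biasing later choices), a minimal agent-based sketch follows; the grid size, population, and feedback weight are hypothetical and not taken from the thesis:

```python
import random

random.seed(42)

SIZE = 20        # hypothetical grid dimension
AGENTS = 50      # hypothetical population of decision-making units
STEPS = 100
FEEDBACK = 0.5   # weight of accumulated past occupation (the feedback loop)

# Static environmental suitability of each cell (hypothetical values).
suitability = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
# Occupation history: the "space" as modified by aggregated individual choices.
occupation = [[0.0] * SIZE for _ in range(SIZE)]

def choose_location():
    """Pick a cell with probability proportional to perceived attractiveness."""
    cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
    weights = [suitability[x][y] + FEEDBACK * occupation[x][y] for x, y in cells]
    return random.choices(cells, weights=weights, k=1)[0]

for _ in range(STEPS):
    for _ in range(AGENTS):
        x, y = choose_location()
        occupation[x][y] += 1.0 / AGENTS  # aggregated choices modify the space

# The settlement pattern is the accumulated occupation surface.
peak = max((occupation[x][y], (x, y)) for x in range(SIZE) for y in range(SIZE))
print(f"most intensely occupied cell {peak[1]}: {peak[0]:.2f}")
```

With FEEDBACK set to 0 the pattern tracks environmental suitability alone; raising it lets aggregated past choices dominate, loosely analogous to the aggregation pathway discussed above.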

Brain development at single cell resolution in health and disease
Kyriakis, Dimitrios UL

Doctoral thesis (2022)


Using RNA sequencing, we can examine distinctions between different cell types and capture a moment in time of the dynamic activities taking place inside a cell. Researchers in fields like developmental biology have quickly embraced this technology as it has improved over the past few years, and many single-cell RNA sequencing datasets are now accessible. A surge in the development of computational analysis techniques has accompanied the invention of technologies for generating single-cell RNA sequencing data. In my thesis, I examine computational methods and tools for single-cell RNA sequencing data analysis in three distinct projects. In the fetal brain project, I tried to decipher the complexity of the human brain and its development, and the link between development and neuropsychiatric diseases during early fetal brain development. I provide a unique resource of fetal brain development across a number of functionally distinct brain regions at single-nucleus resolution. In total, I retrieved 50,937 single nuclei from four individual time points (early: gestational weeks 18 and 19; late: gestational weeks 23 and 24) and four distinct brain regions (cortical plate, hippocampus, thalamus, and striatum). In my dissertation, I also investigated the underlying mechanisms of Parkinson's disease (PD), the second-most prevalent neurodegenerative disorder, characterized by the loss of midbrain dopaminergic (mDA) neurons. I examined the disease process using single cells of mDA neurons derived from human induced pluripotent stem cells (hiPSCs) expressing the ILE368ASN mutation in the PINK1 gene, at four different maturation time points. Differential expression analysis resulted in a potential core network of PD development which linked known genetic risk factors of PD to mitochondrial and ubiquitination processes.
In the final part of my thesis, I perform an analysis of a dataset of brain biopsies from patients with intracerebral hemorrhage (ICH) stroke. In this project, I investigated the dynamic spectrum of polarization of the immune cells towards pro- and anti-inflammatory states. I also tried to identify markers that could potentially be used to predict outcomes for ICH patients. Overall, my thesis discusses a wide range of single-cell RNA sequencing tools and methods, as well as how to make sense of real datasets using already-developed tools. These discoveries may eventually lead to a more thorough understanding of Parkinson's disease and ICH stroke, but also of psychiatric diseases, and may facilitate the creation of novel treatments.

OPTOELECTRONIC PROPERTIES OF CU(IN,GA)SE2 SINGLE CRYSTALS WITH ALKALI POSTDEPOSITION TREATMENTS
Ramirez Sanchez, Omar UL

Doctoral thesis (2022)


With a record power conversion efficiency of 23.35% and a low carbon footprint, Cu(In,Ga)Se2 remains one of the most suitable solar energy materials to assist in the mitigation of the climate crisis we are currently facing. The progress seen in the last decade of Cu(In,Ga)Se2 advancement has been made possible by the development of postdeposition treatments (PDTs) with heavy alkali metals. PDTs are known to affect both surface and bulk properties of the absorber, resulting in an improvement of the solar cell parameters: open-circuit voltage, short-circuit current density, and fill factor. Even though the beneficial effects of PDTs are not questioned, the underlying mechanisms responsible for the improvement, mainly the one related to the open-circuit voltage, are still under discussion. Although such improvement has been suggested to arise from a suppression of bulk recombination, the complex interplay between alkali metals and grain boundaries has complicated the task of discerning what exactly in the bulk material benefits most from the PDTs. In this regard, this thesis aims at investigating the effects of PDTs on the bulk properties of Cu(In,Ga)Se2 single crystals, i.e., at studying the effects of alkali metals in the absence of grain boundaries. Most of the presented analyses are based on photoluminescence, since this technique gives access to information relevant for solar cells, such as the quasi-Fermi level splitting and the density of tail states, directly from the absorber layer and without the need for complete devices. This work is a cumulative thesis of three scientific publications obtained from the results of the different studies carried out. Each publication aims at answering important questions related to the intrinsic properties of Cu(In,Ga)Se2 and the effects of PDTs.
The first publication presents a thorough investigation of the effects of a single heavy alkali metal species on the optoelectronic properties of Cu(In,Ga)Se2. In the case of polycrystalline absorbers, the effects of potassium PDTs in the absence of sodium had previously been attributed to the passivation of grain boundaries and donor-like defects. The obtained results, however, suggest that potassium incorporated from a PDT can act as a dopant in the absence of grain boundaries and yield an improvement in quasi-Fermi level splitting of up to 30 meV in Cu-poor CuInSe2, where a type inversion from N-to-P is triggered upon potassium incorporation. This observation led to the second paper, where a closer look was taken at how the carrier concentration and electrical conductivity of alkali-free Cu-poor CuInSe2 are affected by the incorporation of gallium in the solid solution Cu(In,Ga)Se2. The results suggest that the N-type character of CuInSe2 can persist until the gallium content reaches the critical concentration of 15-19%, where the N-to-P transition occurs. A model based on the trends in formation energies of donor- and acceptor-like defects is presented to explain the experimental results. The conclusions drawn in this paper shed light on why CuGaSe2 cannot be doped N-type like CuInSe2. Since a decreased density of tail states resulting from reduced band bending at grain boundaries had previously been pointed out as the mechanism behind the improvement of the open-circuit voltage after postdeposition treatments, the third publication focuses on how compositional variations and alkali incorporation affect the density of tail states of Cu(In,Ga)Se2 single crystals. The results presented in this paper suggest that increasing the copper content and reducing the gallium content leads to a reduction of tail states.
Furthermore, it is observed that tail states in single crystals are affected by the addition of alkali metals in much the same way as in polycrystalline absorbers, which demonstrates that tail states arise from grain interior properties and that the role of grain boundaries is not as relevant as previously thought. Finally, an analysis of the voltage losses in high-efficiency polycrystalline and single-crystalline solar cells suggested that the doping effect caused by the alkalis affects the density of tail states through the reduction of electrostatic potential fluctuations, which are reduced due to a decrease in the degree of compensation. By taking the effect of doping on tail states into account, the entirety of the VOC losses in Cu(In,Ga)Se2 is described. The findings presented in this thesis explain the link between tail states and open-circuit voltage losses and demonstrate that the effects of alkali metals in Cu(In,Ga)Se2 go beyond grain boundary passivation. The results shed light on the understanding of tail states, VOC losses and the intrinsic properties of Cu(In,Ga)Se2, a fundamental step towards the development of more efficient devices.

Australian Indigenous Life Writing: Analysing Discourses with Word Embedding Modelling
Kamlovskaya, Ekaterina UL

Doctoral thesis (2022)


The genre of Australian Aboriginal autobiography is a literature of significant socio-political importance, with authors sharing a history different from the one previously asserted by the European settlers, which ignored or misrepresented Australia's First People. While there have been a number of studies looking at works belonging to this genre from various perspectives, Australian Indigenous life writing has never been approached from a digital humanities point of view, which, given the constant development of computer technologies and the growing availability of digital sources, offers humanities researchers many opportunities for exploring textual collections from various angles. With this research work I contribute to closing the above-mentioned research gap and discuss the results of an interdisciplinary research project within the scope of which I created a bibliography of published Australian Indigenous life writing works, designed and assembled a corpus, and created word embedding models of this corpus, which I then used to explore the discourses of identity, land, sport, and foodways, as well as gender biases present in the texts, in the context of postcolonial literary studies and Australian history. Studying these discourses is crucial for gaining a better understanding of contemporary Australian society as well as the nation's history. Word embedding modelling has recently been used in digital humanities as an exploratory technique to complement and guide traditional close reading approaches, which is justified by its potential to identify word use patterns in a collection of texts. In this dissertation, I provide a case study of how word embedding modelling can be used to investigate humanities research questions and reflect on the issues which researchers may face while working with such models, approaching various aspects of the research project from the perspectives of digital source and tool criticism.
I demonstrate how a word embedding model of the analysed corpus represents discourses through relationships between word vectors that reflect the historical, political, and cultural environment of the authors and some unique experiences and perspectives related to their racial and gender identities. I show how the narrators reconstruct the analysed discourses to achieve the main goals of Australian Indigenous life writing as a genre: reclaiming identity and rewriting history.
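As a sketch of the exploratory technique described above, the following builds tiny count-based embeddings from a toy corpus and queries nearest neighbours by cosine similarity. The thesis worked with embedding models trained on the actual life-writing corpus; the corpus, window size, and dimensionality here are purely illustrative:

```python
import numpy as np

# Toy corpus standing in for the life-writing texts (illustrative only).
corpus = [
    "country land story family identity".split(),
    "land country home family history".split(),
    "sport football community family".split(),
]

vocab = sorted({w for doc in corpus for w in doc})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2-word window.
C = np.zeros((len(vocab), len(vocab)))
for doc in corpus:
    for i, w in enumerate(doc):
        for j in range(max(0, i - 2), min(len(doc), i + 3)):
            if i != j:
                C[idx[w], idx[doc[j]]] += 1

# Low-dimensional embeddings via truncated SVD of the count matrix.
U, S, _ = np.linalg.svd(C)
emb = U[:, :3] * S[:3]

def most_similar(word):
    """Nearest neighbour of `word` by cosine similarity of embeddings."""
    v = emb[idx[word]]
    sims = emb @ v / (np.linalg.norm(emb, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[idx[word]] = -1  # exclude the word itself
    return vocab[int(np.argmax(sims))]

print("neighbour of 'land':", most_similar("land"))
```

Querying neighbours of seed terms such as "land" or "identity" is the kind of pattern-finding operation that can guide close reading of a large corpus.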

Microglia signatures in the hippocampus of Alzheimer's Disease and Dementia with Lewy bodies patients
Fixemer, Sonja UL

Doctoral thesis (2022)


Worldwide, more than 55 million people suffer from incurable age-related neurodegenerative diseases and associated dementia, including Alzheimer's Disease (AD) and Dementia with Lewy bodies (DLB). AD and DLB patients share memory impairment symptoms but present specific deterioration patterns of the hippocampus, a brain region essential for memory processes. Notably, the CA1 subregion is more vulnerable to atrophy in AD patients than in DLB patients. However, it remains unclear which factors contribute to this differential subregional vulnerability. On the neuropathological level, both AD and DLB patients frequently present an overlap of misfolded protein pathologies, with AD-typical pathologies including extracellular amyloid-β (Aβ) plaques and neurofibrillary tangles (NFTs) of hyperphosphorylated tau protein (pTau), and DLB-typical pathological inclusions of phosphorylated α-synuclein (pSyn). Recent genome-wide association studies (GWAS) have revealed many genetic AD risk factors that are directly linked to microglia and suggest that they play an active role in pathology. However, how microglia alterations are linked to local pathological environments, and which role microglia subpopulations play in the specific vulnerability patterns of the hippocampus in AD and DLB, remains poorly understood. This PhD thesis addressed two main aspects of microglia alterations in the post-mortem hippocampus of AD and DLB patients. The first study provided a detailed 3D characterization of microglia alterations at the individual cell level across the CA1, CA3 and DG/CA4 subfields, and their local association with concomitant pTau, Aβ and pSyn loads in AD and DLB. We show that the co-occurrence of these three types of misfolded proteins is frequent and follows specific subregional patterns in both diseases, but is more severe in AD than in DLB cases.
Our results suggest that high burdens of pTau and pSyn, associated with increased microglial alterations, could contribute to the CA1 vulnerability in AD. Our second study provided a morphological and molecular characterization of a type of microglia accumulation referred to as coffin-like microglia (CoM), using high- and super-resolution microscopy as well as digital spatial profiling. We showed that CoM were enriched in the pyramidal layer of CA1/CA2 and were not linked to Aβ plaques, but occasionally engulfed or contained NFTs or intraneuronal granular pSyn inclusions. Furthermore, CoM were not surrounded by hypertrophic reactive astrocytes, as plaque-associated microglia (PAM) are, but rather by dystrophic astrocytic processes. We found that the proteomic and transcriptomic signatures of CoM point towards cellular senescence and immune cell infiltration, while PAM signatures indicate oxido-reductase activity and lipid degradation. Our studies provide new insights into the complex signatures of human microglia in the hippocampus in age-related neurodegenerative diseases.

Classification and detection of Critical Transitions: from theory to data
Proverbio, Daniele UL

Doctoral thesis (2022)


From population collapses to cell-fate decisions, critical phenomena are abundant in complex real-world systems. Among the modelling theories that address them, the critical transitions framework has gained traction for its purpose of determining classes of critical mechanisms and identifying generic indicators ("early warning signals") to detect and anticipate them. This thesis contributes to this research field by elucidating its relevance within the systems biology landscape, by providing a systematic classification of leading mechanisms for critical transitions, and by assessing the theoretical and empirical performance of early warning signals. The thesis thus bridges general results concerning the critical transitions field – possibly applicable to multidisciplinary contexts – and specific applications in biology and epidemiology, towards the development of sound risk monitoring systems.
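The generic early warning signals mentioned above are classically rising variance and rising lag-1 autocorrelation, both caused by critical slowing down as a transition approaches. A minimal, hypothetical illustration uses an AR(1) surrogate whose recovery rate decays over time (all parameters illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) surrogate of a system drifting towards a transition: the recovery
# rate decays, so the lag-1 coefficient phi -> 1 ("critical slowing down").
n = 4000
phi = np.linspace(0.2, 0.97, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(0.0, 1.0)

def ac1(w):
    """Lag-1 autocorrelation of a window of the time series."""
    return np.corrcoef(w[:-1], w[1:])[0, 1]

early, late = x[:500], x[-500:]
print(f"variance: {early.var():.2f} -> {late.var():.2f}")
print(f"lag-1 autocorrelation: {ac1(early):.2f} -> {ac1(late):.2f}")
```

Both indicators increase in the late window, which is the signature such monitoring systems look for; assessing when they do so reliably (and when they fail) is the kind of performance question the thesis addresses.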

RESOURCE MANAGEMENT TECHNIQUES FOR FLEXIBLE BROADBAND SATELLITE COMMUNICATION SYSTEMS
Abdu, Tedros Salih UL

Doctoral thesis (2022)


The application of Satellite Communications (SatCom) has recently evolved from providing simple Direct-To-Home television (DTHTV) to enabling a range of broadband internet services. Typically, it offers services to the broadcast industry, the aircraft industry, the maritime sector, government agencies, and end-users. Furthermore, SatCom has a significant role in the era of 5G and beyond in terms of integrating satellite networks with terrestrial networks, offering backhaul services, and providing coverage for Internet of Things (IoT) applications. Moreover, thanks to the satellite's wide coverage area, it can provide services to remote areas where terrestrial networks are inaccessible or expensive to connect. Due to the wide range of satellite applications outlined above, the demand for satellite service from user terminals is rapidly increasing. Conventionally, satellites use multi-beam technology with uniform resource allocation to provide service to users/beams. In this case, the satellite's resources, such as power and bandwidth, are evenly distributed among the beams. However, this resource allocation method is inefficient since it does not consider the heterogeneous demands of each beam, which may result in a beam with low demand receiving too many resources while a beam with high demand receives too few. Consequently, some beam demands may not be satisfied. Additionally, satellite resources are limited due to spectrum regulations and onboard battery constraints, which require proper utilization. Therefore, the next generation of satellites must address the above main challenges of conventional satellites. To this end, this thesis proposes novel advanced resource management techniques to manage satellite resources efficiently while accommodating heterogeneous beam demands. In the above context, the second and third chapters of the thesis explore on-demand resource allocation methods with no precoding technique.
These methods aim to closely match the beam traffic demand by using the minimum transmit power and bandwidth while keeping interference among the beams tolerable. However, an advanced interference mitigation technique is required in high-interference scenarios. Thus, in the fourth chapter of the thesis, we propose a combination of resource allocation and interference management strategies to mitigate interference and meet high-demand requirements with less power and bandwidth consumption. In this context, the performance of the resource management method is investigated and compared for systems with full precoding (all beams are precoded), without precoding (no precoding is applied to any beam), and with partial precoding (some beams are precoded). Thanks to emerging technologies, the next generation of satellite communication systems will deploy onboard digital payloads; thus, advanced resource management techniques can be implemented. In this case, the digital payload can be configured to change the bandwidth, carrier frequency, and transmit power of the system in response to heterogeneous traffic demands. Typically, onboard digital payloads consist of payload processors, each operating with specific power and bandwidth to process a beam signal. There are, however, only a limited number of processors, thus requiring proper management. Furthermore, the processors consume more energy to process the signals, resulting in high power consumption. Therefore, payload management will be crucial for the future satellite generation. In this context, the fifth chapter of the thesis proposes a demand-aware onboard payload processor management method, which switches on processors according to the beam demand. In this case, for low demand, fewer processors are in use, while more processors become necessary as demand increases. Demand-aware resource allocation techniques may require optimization over a large number of variables.
Consequently, this may increase the computational time complexity of the system. Thus, the sixth chapter of the thesis explores methods of combining demand-aware resource allocation and deep learning (DL) to reduce the computational complexity of the system. In this case, a demand-aware algorithm enables bandwidth and power allocation, while DL speeds up the computation. Finally, the last chapter provides the main conclusions of the thesis, as well as future research directions.
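As a rough illustration of demand-aware allocation (a simplification, not the algorithms developed in the thesis), the sketch below splits a total bandwidth budget proportionally to beam demands and then inverts the Shannon capacity formula to obtain the minimum transmit power per beam; all system parameters are hypothetical:

```python
import math

# Hypothetical system parameters (not from the thesis).
B_TOT = 500e6      # total bandwidth [Hz]
N0 = 1e-20         # noise power spectral density [W/Hz]
demands = [100e6, 400e6, 50e6, 250e6]      # requested rates [bit/s]
gains = [1e-13, 2e-13, 0.5e-13, 1.5e-13]   # channel power gains per beam

# Demand-aware bandwidth split: proportional to each beam's demand.
total_d = sum(demands)
bw = [B_TOT * d / total_d for d in demands]

# Minimum transmit power for beam i to meet demand d_i over bandwidth b_i,
# inverting the Shannon capacity d = b * log2(1 + p*g/(N0*b)).
power = [(2 ** (d / b) - 1) * N0 * b / g for d, b, g in zip(demands, bw, gains)]

# Sanity check: achieved capacity per beam matches its demand.
achieved = [b * math.log2(1 + p * g / (N0 * b)) for b, p, g in zip(bw, power, gains)]
for i, (d, b, p) in enumerate(zip(demands, bw, power)):
    print(f"beam {i}: demand {d/1e6:.0f} Mbit/s, bw {b/1e6:.1f} MHz, power {p:.2f} W")
```

Note that a purely proportional split equalizes the spectral efficiency d/b across beams, so power differences come only from the channel gains; joint optimization of both resources, as pursued in the thesis, can do strictly better.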

Minimizing Supervision for Vision-Based Perception and Control in Autonomous Driving
Robinet, François UL

Doctoral thesis (2022)


The research presented in this dissertation focuses on reducing the need for supervision in two tasks related to autonomous driving: end-to-end steering and free space segmentation. For end-to-end steering, we devise a new regularization technique which relies on pixel-relevance heatmaps to force the steering model to focus on lane markings. This improves performance across a variety of offline metrics. In relation to this work, we publicly release the RoboBus dataset, which consists of extensive driving data recorded using a commercial bus on a cross-border public transport route on the Luxembourgish-French border. We also tackle pseudo-supervised free space segmentation from three different angles: (1) we propose a Stochastic Co-Teaching training scheme that explicitly attempts to filter out the noise in pseudo-labels, (2) we study the impact of self-training and of different data augmentation techniques, (3) we devise a novel pseudo-label generation method based on road plane distance estimation from approximate depth maps. Finally, we investigate semi-supervised free space estimation and find that combining our techniques with a restricted subset of labeled samples results in substantial improvements in IoU, Precision and Recall.
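The evaluation metrics named above can be computed directly from binary segmentation masks. A small self-contained sketch (toy masks, not the dissertation's data):

```python
import numpy as np

def seg_metrics(pred, gt):
    """IoU, precision and recall for binary free-space masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # free space correctly predicted
    fp = np.logical_and(pred, ~gt).sum()   # predicted free, actually not
    fn = np.logical_and(~pred, gt).sum()   # missed free space
    iou = tp / (tp + fp + fn)
    return iou, tp / (tp + fp), tp / (tp + fn)

# Toy 4x4 masks standing in for road / free-space segmentations.
gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1]])
pred = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
iou, prec, rec = seg_metrics(pred, gt)
print(f"IoU={iou:.2f} precision={prec:.2f} recall={rec:.2f}")
```

IoU penalizes both false positives and false negatives at once, which is why it is the headline metric for segmentation, while precision and recall separate the two error types.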

CIRCULAR ARCHITECTURE: MODELS AND STRATEGIES TO REUSE AND RECYCLE BUILDINGS
Ferreira Silva, Marielle UL

Doctoral thesis (2022)


How we design, construct and live in our houses, as well as how we go to work, can mitigate carbon dioxide (CO2) emissions and global climate change. Furthermore, the complex world we live in is in an ongoing transformation process. The housing shortage is worsening as the world population and cities keep growing. We must therefore consider all the other issues that come along with population growth, such as increased demand for built space, mobility, expansion of cities into green areas, use of resources, and materials scarcity. Various projects throughout history have used alternatives to solve the problem of social housing, such as increasing density in cities through housing complexes, fast and low-cost construction with prefabricated methods and materials, and modularisation systems. However, current architecture is not designed to meet users' future needs and reduce environmental impact. A proposal to change this situation would be to go back to the beginning of architecture's conception and to design it differently. In addition, there is nowadays an increasing focus on moving towards sustainable and circular living spaces based on shared, adaptable and modular built environments to improve residents' quality of life. For this reason, the main objective of this thesis is to study the potential of architecture that can be reconfigured spatially and temporally, and to produce alternative generic models to reuse and recycle architectural elements and spaces for functional flexibility through time. To approach the discussion, a documentary research methodology was applied to study modular, prefabricated and ecological architectural typologies addressing recyclability in buildings. The Atlas, with case studies and architectural design strategies, emerged from the analyses of projects from Durant to the 21st century.
Furthermore, this thesis is part of the research project Eco-Construction for Sustainable Development (ECON4SD), co-funded by the EU in partnership with the University of Luxembourg, and it presents three new generic building typologies. They are named according to their defining characteristics: Prototype 1, the slab typology, a building designed as a concrete shelf structure in which timber housing units can be plugged in and out; Prototype 2, the tower typology, a tower building with a flexible floor plan combining working and residential facilities with adjacent multi-purpose facilities; and Prototype 3, the block typology, a structure characterised by full disassembly. The three new typologies combine modularity, prefabrication, flexibility and disassembly strategies to address the increasing demand for multi-use, reusable and resource-efficient housing units. The prototypes continually adapt to the occupants' needs, as the infrastructure incorporates repetition, exposed structure, a central core, terraces, open floors, unfinished spaces, prefabrication, and combined activities, and offers reduced and varied housing unit sizes whose parts can be disassembled. They also densify the regions in which they are implemented. Moreover, the new circular typologies can offer more generous public and shared space for the occupants within the same building size as an ordinary building. The alternative design allows the reconversion of existing buildings or the reconstruction of the same buildings in other places, reducing waste and increasing their useful lifespan. Once the building has been adapted and reused as much as possible and its life cycle comes to an end, it can be disassembled, and the materials can be sorted into reusable or recyclable resources.
The results demonstrate that circular architecture is feasible and realistic, adapts through time, increases material reuse, avoids unnecessary demolition, reduces construction waste and CO2 emissions, and extends the useful life of buildings.

Data-driven patient-specific breast modeling: a simple, automatized, and robust computational pipeline
Mazier, Arnaud UL

Doctoral thesis (2022)


Background: Breast-conserving surgery is the most acceptable option for breast cancer removal from an invasive and psychological point of view. During the surgical procedure, image acquisition using Magnetic Resonance Imaging is performed in the prone configuration, while the surgery is performed in the supine stance. The considerable movement of the breast between the two poses causes the tumor to move, complicating the surgeon's task. To keep track of the lesion, the surgeon therefore employs ultrasound imaging to mark the tumor with a metallic harpoon or radioactive tags. This procedure, in addition to being invasive, is a supplemental source of uncertainty. Consequently, developing a numerical method to predict the tumor movement between the imaging and intra-operative configurations is of significant interest. Methods: In this work, a simulation pipeline allowing the prediction of patient-specific breast tumor movement was put forward, including personalized preoperative surgical drawings. Through image segmentation, a subject-specific finite element biomechanical model is obtained. By first computing an undeformed state of the breast (equivalent to nullified gravity), the estimated intra-operative configuration is then evaluated using our registration methods. Finally, the model is calibrated using a surface acquisition in the intra-operative stance to minimize the prediction error. Findings: The capability of our breast biomechanical model to reproduce real breast deformations was evaluated. To this end, the estimated geometry of the supine breast configuration was computed using a corotational elastic material formulation. The subject-specific mechanical properties of the breast and skin were assessed to obtain the best estimates of the prone configuration. The final result is a Mean Absolute Error of 4.00 mm for the mechanical parameters E_breast = 0.32 kPa and E_skin = 22.72 kPa.
The optimized mechanical parameters are congruent with the recent state of the art. The simulation (including finding the undeformed and prone configurations) takes less than 20 s. The Covariance Matrix Adaptation Evolution Strategy optimizer converges on average in 15 to 100 iterations, depending on the initial parameters, for a total time of between 5 and 30 min. To our knowledge, our model offers one of the best compromises between accuracy and speed. The model could be effortlessly enriched through our recent work to facilitate the use of complex material models by describing only the strain energy density function of the material. In a second study, we developed a second breast model aiming at mapping a generic model embedding breast-conserving surgical drawings to any patient. We demonstrated the clinical applications of such a model in a real-case scenario, offering a relevant education tool for inexperienced surgeons.
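The calibration loop described above can be illustrated in miniature. In the sketch below, the forward model, the observed landmarks, and the grid-search optimizer are all illustrative stand-ins (the thesis uses a finite element simulation and the CMA-ES optimizer); only the structure of the loop — simulate, compute the Mean Absolute Error, keep the best parameters — carries over.

```python
# Illustrative sketch: calibrate two elastic moduli by minimizing the Mean
# Absolute Error between simulated and observed surface points. The forward
# model below is a toy stand-in for the finite element solve, and a grid
# search stands in for the CMA-ES optimizer used in the thesis.

def forward_model(e_breast, e_skin):
    """Toy 'simulation': returns predicted landmark positions in mm."""
    # Hypothetical relationship; a real pipeline would run an FE solve here.
    return [10.0 / e_breast + 0.1 * e_skin, 5.0 / e_breast + 0.05 * e_skin]

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

observed = [33.5, 17.0]  # hypothetical intra-operative surface landmarks (mm)

best = None
for e_breast in [0.1 * k for k in range(1, 11)]:   # 0.1 .. 1.0 kPa
    for e_skin in [5.0 * k for k in range(1, 11)]:  # 5 .. 50 kPa
        err = mae(forward_model(e_breast, e_skin), observed)
        if best is None or err < best[0]:
            best = (err, e_breast, e_skin)

err, e_breast, e_skin = best
print(f"best MAE {err:.2f} mm at E_breast={e_breast:.2f} kPa, E_skin={e_skin:.2f} kPa")
```

In the real pipeline the inner evaluation is a full biomechanical simulation, which is why a fast forward model (under 20 s per run) matters for keeping the whole calibration within minutes.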

Decision Making in Supply Chains with Waste Considerations
Perez Becker, Nicole UL

Doctoral thesis (2022)


As global population and income levels have increased, so has the waste generated as a byproduct of our production and consumption processes. Approximately two billion tons of municipal solid waste are generated globally every year – that is, more than half a kilogram per person each day. This waste, which is generated at various stages of the supply chain, has negative environmental effects and often represents an inefficient use or allocation of limited resources. With the growing concern about waste, many governments are implementing regulations to reduce waste. Waste is often a consequence of the inventory decisions of different players in a supply chain. As such, these regulations aim to reduce waste by influencing inventory decisions. However, determining the inventory decisions of players in a supply chain is not trivial. Modern supply chains often consist of numerous players, who may each differ in their objectives and in the factors they consider when making decisions such as how much product to buy and when. While each player makes unilateral inventory decisions, these decisions may also affect the decisions of other players. This complexity makes it difficult to predict how a policy will affect profit and waste outcomes for individual players and the supply chain as a whole. This dissertation studies the inventory decisions of players in a supply chain when faced with policy interventions to reduce waste. In particular, the focus is on food supply chains, where food waste and packaging waste are the largest waste components. Chapter 2 studies a two-period inventory game between a seller (e.g., a wholesaler) and a buyer (e.g., a retailer) in a supply chain for a perishable food product with uncertain demand from a downstream market. The buyer can differ in whether he considers factors affecting future periods or the seller's supply availability in his per-period purchase decisions – that is, in his degree of strategic behavior.
The focus is on understanding how the buyer's degree of strategic behavior affects inventory outcomes. Chapter 3 builds on this understanding by investigating waste outcomes and how policies that penalize waste affect individual and supply chain profits and waste. Chapter 4 studies the setting of a restaurant that uses reusable containers instead of single-use ones to serve its delivery and take-away orders. With policy-makers discouraging the use of single-use containers through surcharges or bans, reusable containers have emerged as an alternative. Managing inventories of reusable containers is challenging for a restaurant, as both demand and returns of containers are uncertain and the restaurant faces various customer types. This chapter investigates how the proportion of each customer type affects the restaurant's inventory decisions and costs.
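The single-period building block behind inventory models of this kind is the classic newsvendor problem: order up to the critical fractile of the demand distribution, balancing the cost of unmet demand against the cost of unsold (wasted) units. A minimal sketch with hypothetical cost and demand parameters (not those of the dissertation):

```python
# Newsvendor sketch: the classic single-period inventory decision under
# uncertain demand. All parameters are illustrative.
import statistics

price, cost, salvage = 10.0, 6.0, 1.0  # sell, purchase, leftover value per unit
underage = price - cost                # profit lost per unit of unmet demand
overage = cost - salvage               # loss per unsold (wasted) unit
critical_ratio = underage / (underage + overage)  # optimal service level

# With demand ~ Normal(mu, sigma), the optimal order quantity is the
# demand quantile at the critical ratio.
mu, sigma = 100.0, 20.0
q_star = statistics.NormalDist(mu, sigma).inv_cdf(critical_ratio)
print(f"critical ratio {critical_ratio:.2f} -> order {q_star:.1f} units")
```

A waste penalty effectively raises the overage cost, lowering the critical ratio and hence the order quantity — which is the kind of lever the policies studied here pull on, though the dissertation's two-period game and strategic buyers go well beyond this single-period sketch.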

The role of neuromelanin in dopaminergic neuron demise and inflammation in idiopathic Parkinson's disease
Smajic, Semra UL

Doctoral thesis (2022)


For a very long time, the main focus in Parkinson's disease (PD) research was the loss of neuromelanin-containing dopaminergic neurons from the substantia nigra (SN) of the midbrain - the key pathological feature of the disease. However, the association between neuronal vulnerability and the presence of neuromelanin has not been a common subject of study. Recently, cells other than neurons have also gained attention as mediators of PD pathogenesis. There are indications that glial cells undergo disease-related changes; however, the exact mechanisms remain unknown. In this thesis, I aimed to explore the contribution of every cell type of the midbrain to PD using single-nuclei RNA sequencing. Additionally, the goal was to explore their association with PD risk gene variants. As we identified microgliosis as a major mechanism in PD, we further extended our research to microglia and sought to investigate the relation between microglia and neuromelanin. Thus, by means of immunohistochemical staining, imaging and laser-capture microdissection-based transcriptomics, we aimed to elucidate this association at the single-cell level. This work resulted in the first midbrain single-cell atlas from idiopathic PD subjects and age- and sex-matched controls. We revealed SN-specific microgliosis with GPNMB upregulation, which also seemed to be specific to the idiopathic form of the disease. We further observed an accumulation of (extraneuronal) neuromelanin particles in Parkinson's midbrain parenchyma, indicative of incomplete degradation. Moreover, we showed that GPNMB can be elevated in microglia in contact with neuromelanin. Taken together, we provide evidence of a GPNMB-related microglial state as a disease mechanism specific to idiopathic PD, and highlight neuromelanin as an important player in microglial disease pathology. Further investigations are needed to understand whether the modulation of neuromelanin levels could be relevant in the context of PD therapy.

A Software Vulnerabilities Odysseus: Analysis, Detection, and Mitigation
Riom, Timothée UL

Doctoral thesis (2022)


Programming has become central to the development of human activities, while not being immune to defects, or bugs. Developers have designed specific methods and sequences of tests that they implement to prevent these bugs from being deployed in releases. Nonetheless, not all cases can be thought through beforehand, and automation has limits the community attempts to overcome. As a consequence, not all bugs can be caught. These defects cause particular concern when they can be exploited to breach the program's security policy. They are then called vulnerabilities and provide specific actors with undesired access to the resources a program manages. This damages the trust in the program and in its developers, and may eventually impact the adoption of the program. Hence, devoting specific attention to vulnerabilities is a natural outcome. In this regard, this PhD work targets the following three challenges: (1) The research community references these vulnerabilities, categorises them, and reports and ranks their impact. As a result, analysts can learn from past vulnerabilities in specific programs and devise new ideas to counter them. Nonetheless, the quality of the resulting lessons and the usefulness of ensuing solutions depend on the quality and consistency of the information provided in the reports. (2) New methods to detect vulnerabilities can emerge from the teachings this monitoring provides. With responsible reporting, these detection methods can help harden the programs we rely on. Additionally, in a context of computer performance gains, machine learning algorithms are increasingly adopted, offering engaging promises. (3) Even if some of these promises can be fulfilled, not all are reachable today. Therefore, a complementary strategy needs to be adopted while vulnerabilities evade detection up to public releases. Instead of preventing their introduction, programs can be hardened to scale down their exploitability.
Increasing the complexity of exploitation or lowering the impact below specific thresholds makes the presence of vulnerabilities an affordable risk for the feature provided. The history of programming development encloses the experimentation and adoption of so-called defence mechanisms. Their goals and performance can be diverse, but their implementation in widely adopted programs and systems (such as the Android Open Source Project) acknowledges their pivotal position. To face these challenges, we provide the following contributions: • We provide a manual categorisation of the vulnerabilities of the widely adopted Android Open Source Project up to June 2020. Clarifying the adopted vulnerability analysis provides consistency in the resulting data set. It facilitates the explainability of the analyses and sets up the updatability of the resulting set of vulnerabilities. Based on this analysis, we study the evolution of AOSP's vulnerabilities. We explore the temporal evolution of the vulnerabilities affecting the system in terms of severity and vulnerability type, with a focus on memory corruption-related vulnerabilities. • We undertake the replication of a machine-learning-based detection algorithm that, besides being part of the state of the art and referenced by ensuing works, was not available. Named VCCFinder, this algorithm implements a Support-Vector Machine and bases its training on Vulnerability-Contributing Commits and related patches for C and C++ code. Unable to achieve performance analogous to the original article, we explore parameters and algorithms, and attempt to overcome the challenge posed by the over-population of unlabeled entries in the data set. We provide the community with our code and results as a replicable baseline for further improvement.
• We finally list the defence mechanisms that the Android Open Source Project incrementally implements, and we discuss how they sometimes answer comments the community addressed to the project's developers. We further verify the extent to which specific memory corruption defence mechanisms were implemented in the binaries of different versions of Android (from API level 10 to 28). We eventually confront the evolution of memory corruption-related vulnerabilities with the implementation timeline of the related defence mechanisms.
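As a toy illustration of the commit-classification setting (not VCCFinder's actual pipeline), the sketch below trains a simple perceptron — standing in for the Support-Vector Machine — on hypothetical commit features; both the features and the data are invented for illustration.

```python
# Toy sketch of commit classification: a perceptron stands in for the
# Support-Vector Machine of VCCFinder, and the features (normalized lines
# added, files touched, author's prior vulnerability count) are hypothetical.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

commits = [(0.9, 5, 2), (0.8, 4, 3), (0.1, 1, 0), (0.2, 2, 0)]
labels = [1, 1, -1, -1]  # +1 = vulnerability-contributing commit
w, b = train_perceptron(commits, labels)
print([predict(w, b, c) for c in commits])
```

The replication challenge mentioned above — a heavy over-population of unlabeled (presumed-benign) commits — is precisely what such linearly separable toy data hides: with realistic class imbalance, precision on the rare positive class becomes the hard part.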

TRANSFORMING DATA PREPROCESSING: A HOLISTIC, NORMALIZED AND DISTRIBUTED APPROACH
Tawakuli, Amal UL

Doctoral thesis (2022)


Substantial volumes of data are generated at the edge as a result of an exponential increase in the number of Internet of Things (IoT) applications. IoT data are generated at edge components and, in most cases, transmitted to central or cloud infrastructures via the network. Distributing data preprocessing to the edge, closer to the data sources, would address issues found in the data early in the pipeline. Distribution thus prevents error propagation, removes redundancies, minimizes privacy leakage and optimally summarizes the information contained in the data prior to transmission. This, in turn, prevents wasting valuable yet limited resources at the edge, which would otherwise be used for transmitting data that may contain anomalies and redundancies. New legal requirements such as the GDPR, as well as ethical responsibilities, make data preprocessing that addresses these emerging topics urgent, especially at the edge, before the data leave the premises of the data owners. This PhD dissertation is divided into two parts that focus on two main directions within data preprocessing. The first part focuses on structuring and normalizing the data preprocessing design phase for AI applications. This involved an extensive and comprehensive survey of data preprocessing techniques coupled with an empirical analysis. From the survey, we introduced a holistic and normalized definition and scope of data preprocessing. We also identified the means of generalizing data preprocessing by abstracting preprocessing techniques into categories and sub-categories. Our survey and empirical analysis highlighted dependencies and relationships between the different categories and sub-categories, which determine the order of execution within preprocessing pipelines. The identified categories, sub-categories and their dependencies were assembled into a novel data preprocessing design tool that serves as a template from which application- and dataset-specific preprocessing plans and pipelines are derived.
The design tool is agnostic to datasets and applications and is a crucial step towards normalizing, regulating and structuring the design of data preprocessing pipelines. The tool helps practitioners and researchers apply a modern take on data preprocessing that enhances the reproducibility of preprocessed datasets and addresses a broader spectrum of issues in the data. The second part of the dissertation focuses on leveraging edge computing within an IoT context to distribute data preprocessing at the edge. We empirically evaluated the feasibility of distributing data preprocessing techniques from different categories and assessed the impact of the distribution including on the consumption of different resources such as time, storage, bandwidth and energy. To perform the distribution, we proposed a collaborative edge-cloud framework dedicated to data preprocessing with two main mechanisms that achieve synchronization and coordination. The synchronization mechanism is an Over-The-Air (OTA) updating mechanism that remotely pushes updated preprocessing plans to the different edge components in response to changes in user requirements or the evolution of data characteristics. The coordination mechanism is a resilient and progressive execution mechanism that leverages the Directed Acyclic Graph (DAG) to represent the data preprocessing plans. Distributed preprocessing plans are shared between different cloud and edge components and are progressively executed while adhering to the topological order dictated by the DAG representation. To empirically test our proposed solutions, we developed a prototype, named DeltaWing, of our edge-cloud collaborative data preprocessing framework that consists of three stages; one central stage and two edge stages. A use-case was also designed based on a dataset obtained from Honda Research Institute US. Using DeltaWing and the use-case, we simulated an Automotive IoT application to evaluate our proposed solutions. 
Our empirical results highlight the effectiveness and positive impact of our framework in reducing the consumption of valuable resources (e.g., ≈ 57% reduction in bandwidth usage) at the edge while retaining information (prediction accuracy) and maintaining operational integrity. The two parts of the dissertation are interconnected yet can exist independently. Combined, their contributions constitute a generic toolset for the optimization of the data preprocessing phase.
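The progressive execution of a preprocessing plan in the topological order of its DAG, as in the coordination mechanism described above, can be sketched with Python's standard graphlib. The step names and dependencies below are illustrative, not the framework's actual plan.

```python
# Sketch: execute a preprocessing plan in the topological order of its DAG.
# The graph maps each step to the set of steps it depends on; static_order()
# yields an execution order that respects every dependency.
from graphlib import TopologicalSorter

plan = {
    "ingest":      set(),
    "clean":       {"ingest"},
    "deduplicate": {"ingest"},
    "normalize":   {"clean"},
    "compress":    {"normalize", "deduplicate"},
}

order = list(TopologicalSorter(plan).static_order())
print(order)
```

Representing the plan as a DAG also makes the Over-The-Air update mechanism natural: pushing a new plan is just shipping a new dependency mapping, and each component re-derives a valid execution order locally.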

Essays on the Economics of Wellbeing and Machine Learning
Gentile, Niccolo' UL

Doctoral thesis (2022)

Computational Tools for Evaluation and Programing of Deep Brain Stimulation
Baniasadi, Mehri UL

Doctoral thesis (2022)


Deep brain stimulation (DBS) is a surgical therapy to alleviate symptoms of numerous movement and psychiatric disorders by electrical stimulation of specific neural tissues via implanted electrodes. Precise electrode implantation is important to target the right brain area. After the surgery, DBS parameters, including stimulation amplitude, frequency, pulse width, and the selection of the electrode's active contacts, are adjusted during programming sessions. Programming sessions are normally done by trial and error; thus, they can be long and tiring. The main goal of the thesis is to make the post-operative experience, particularly the programming session, easier and faster by using visual aids to create a virtual reconstruction of the patient's case. This enables in silico testing of different scenarios before applying them to the patient. A quick and easy-to-use deep learning-based tool for deep brain structure segmentation is developed with 89 ± 3% accuracy (DBSegment). It is much easier to implement than widely used registration-based methods, as it has fewer dependencies and requires no parameter tuning. It is therefore much more practical, and it segments 40 times faster than the registration-based method. This method is combined with an electrode localization method to reconstruct patients' cases. Additionally, we developed a tool that simulates DBS-induced electric field distributions in less than a second (FastField). This is 1000 times faster than standard methods based on finite elements, with nearly the same performance (92%). The speed of the electric field simulation is particularly important for DBS parameter initialization, which we initialize by solving an optimization problem (OptimDBS). A grid search method confirms that our novel approach converges to the global minimum. Finally, all the developed methods are tested on clinical data to ensure their applicability.
In conclusion, this thesis develops various novel user-friendly tools enabling efficient and accurate DBS reconstruction and parameter initialization. The methods are by far the quickest among open-source tools. They are easy to use and publicly available: FastField within the LeadDBS toolbox, and DBSegment as a Python pip package and a Docker image. We hope they can improve the DBS post-operative experience, maximize the therapy's efficacy, and advance DBS research.
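As background for why such fast field estimates are feasible: in a homogeneous, isotropic medium, the potential around a point-like stimulating contact has the closed form V = I / (4πσr). The sketch below uses this textbook approximation with typical literature values for conductivity and current; it is not necessarily FastField's actual formulation.

```python
# Simplified point-source model of the potential around a stimulating
# contact in a homogeneous, isotropic medium: V = I / (4*pi*sigma*r).
# Textbook approximation, not necessarily FastField's formulation.
from math import pi, sqrt

def potential(i_amp, sigma, contact, point):
    """Potential (V) at `point` from a point current source at `contact` (m)."""
    r = sqrt(sum((a - b) ** 2 for a, b in zip(contact, point)))
    return i_amp / (4 * pi * sigma * r)

sigma = 0.2    # conductivity of brain tissue, S/m (typical literature value)
i_amp = 3e-3   # stimulation current, 3 mA
contact = (0.0, 0.0, 0.0)
for r_mm in (1, 2, 4):
    v = potential(i_amp, sigma, contact, (r_mm * 1e-3, 0.0, 0.0))
    print(f"r = {r_mm} mm -> V = {v:.3f} V")
```

Because such closed-form evaluations cost microseconds instead of a finite element solve, they can sit inside an optimization loop over stimulation parameters, which is the use case described above.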

FATIGUE AND BREAKDOWN STUDIES OF SOLUTION DEPOSITED OXIDE FERROELECTRIC THIN FILMS
Aruchamy, Naveen UL

Doctoral thesis (2022)


Ferroelectric materials are ubiquitous in applications and offer advantages for microelectromechanical systems (MEMS) in their thin film form. However, novel applications require ferroelectric films to be deposited on various substrates, which requires effective integration and knowledge of the material response when selecting a substrate for film deposition. As substrate-induced stress can alter the ferroelectric properties of the films, knowing how stress changes the ferroelectric response under different actuation conditions is essential. Furthermore, the stress-dependent behavior raises the question of understanding the reliability and degradation mechanisms under cyclic electric loading. Therefore, the fatigue and breakdown characteristics of ferroelectric thin films become all the more relevant. Lead zirconate titanate (PZT) thin films are popular among ferroelectric materials; however, tremendous effort is being made to find a lead-free alternative to PZT. Ferroelectric thin films can be deposited using different processing techniques. In this work, the chemical solution deposition route is adopted for depositing PZT thin films on transparent and non-transparent substrates. A correlation between the substrate-induced ferroelectric properties and the processing conditions with different electrode configurations is established. Finite element modeling is used to understand the influence of the design parameters of the co-planar interdigitated electrodes for fabricating fully transparent PZT stacks. In-plane and out-of-plane ferroelectric properties of PZT thin films, in interdigitated electrode (IDE) and metal-insulator-metal (MIM) geometries respectively, on different substrates are compared to establish the connection between the stress-induced effect and the actuation mode. It is shown that the out-of-plane polarization is high under in-plane compressive stress but reduced by nearly four times under in-plane tensile stress.
In contrast, the in-plane polarization shows an unexpectedly weak stress dependence. The fatigue behavior of differently stressed PZT thin films with IDE structures is reported for the first time in this study. The results are compared to the fatigue behavior of the same films in MIM geometry. PZT films in MIM geometry, irrespective of the stress state, show a notable decrease in switchable polarization during fatigue cycling. In contrast, the films actuated with IDEs have much better fatigue resistance. The primary fatigue mechanism is identified as domain wall pinning by charged defects. The observed differences in fatigue behavior between the MIM and IDE geometries are linked to the orientation of the electric field with respect to the columnar grain structure of the films. Hafnium oxide, an emerging and widely researched lead-free alternative to PZT for non-volatile ferroelectric memory applications, is also explored in this work, and the breakdown properties of chemical solution-deposited ferroelectric hafnium oxide thin films are studied. The structure-property relationship for stabilizing the ferroelectric phase in solution-deposited hafnium oxide thin films is established. Furthermore, the effects of processing conditions on the ferroelectric switching behavior and breakdown characteristics are demonstrated and correlated with possible mechanisms.

Model-based Specification and Analysis of Natural Language Requirements in the Financial Domain
Veizaga Campero, Alvaro Mario UL

Doctoral thesis (2022)


Software requirements form an important part of the software development process. In many software projects conducted by companies in the financial sector, analysts specify software requirements using a combination of models and natural language (NL). Neither the models nor the NL requirements alone provide a complete picture of the information in the software system, and NL is highly prone to quality issues, such as vagueness, ambiguity, and incompleteness. Poorly written requirements are difficult to communicate and reduce the opportunity to process requirements automatically, particularly to automate tedious and error-prone tasks such as deriving acceptance criteria (AC). AC are conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. AC are derived by developers and testers from requirement models. To obtain precise AC, it is necessary to reconcile the information content of the NL requirements and the requirement models. In collaboration with an industrial partner from the financial domain, we first systematically developed and evaluated a controlled natural language (CNL) named Rimay to help analysts write functional requirements. We then proposed an approach that detects common syntactic and semantic errors in NL requirements. Our approach suggests Rimay patterns to fix the errors and convert NL requirements into Rimay requirements. Based on our results, we propose a semiautomated approach that reconciles the content of the NL requirements with that of the requirement models. Our approach helps modelers enrich their models with information extracted from NL requirements. Finally, an existing test-specification derivation technique was applied to the enriched model to generate AC. The first contribution of this dissertation is a qualitative methodology that can be used to systematically define a CNL for specifying functional requirements.
This methodology was used to create Rimay, a CNL grammar, to specify functional requirements. This CNL was derived after an extensive qualitative analysis of a large number of industrial requirements and by following a systematic process using lexical resources. An empirical evaluation of our CNL (Rimay) in a realistic setting through an industrial case study demonstrated that 88% of the requirements used in our empirical evaluation were successfully rephrased using Rimay. The second contribution of this dissertation is an automated approach that detects syntactic and semantic errors in unstructured NL requirements. We refer to these errors as smells. To this end, we first proposed a set of 10 common smells found in the NL requirements of financial applications. We then derived a set of 10 Rimay patterns as a suggestion to fix the smells. Finally, we developed an automatic approach that analyzes the syntax and semantics of NL requirements to detect any present smells and then suggests a Rimay pattern to fix the smell. We evaluated our approach using an industrial case study that obtained promising results for detecting smells in NL requirements (precision 88%) and for suggesting Rimay patterns (precision 89%). The last contribution of this dissertation was prompted by the observation that a reconciliation of the information content in the NL requirements and the associated models is necessary to obtain precise AC. To achieve this, we define a set of 13 information extraction rules that automatically extract AC-related information from NL requirements written in Rimay. Next, we propose a systematic method that generates recommendations for model enrichment based on the information extracted from the 13 extraction rules. Using a real case study from the financial domain, we evaluated the usefulness of the AC-related model enrichments recommended by our approach. 
The domain experts found that 89% of the recommended enrichments were relevant to AC but absent from the original model (a precision of 89%).
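Pattern-based smell detection of the kind described above can be sketched with regular expressions. The smell list and the matching rules below are invented for illustration; they are not Rimay's actual smells or patterns.

```python
# Toy sketch of pattern-based requirement "smell" detection: flag vague
# terms and passive voice via regular expressions. The smell catalogue and
# rules are illustrative, not the thesis's actual 10 smells.
import re

VAGUE = re.compile(r"\b(as appropriate|if possible|user[- ]friendly|etc\.?)\b", re.I)
PASSIVE = re.compile(r"\b(is|are|was|were|be|been)\s+\w+ed\b", re.I)

def detect_smells(req):
    smells = []
    if VAGUE.search(req):
        smells.append("vague term")
    if PASSIVE.search(req):
        smells.append("passive voice (actor unclear)")
    return smells

print(detect_smells("The report is generated if possible."))
print(detect_smells("The system shall generate the report within 2 seconds."))
```

A real CNL-based checker goes further than surface patterns — it parses the sentence against the grammar and can therefore propose a concrete rewrite pattern, not just a flag.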

Modelling astrocytic metabolism in actual cell morphologies
Farina, Sofia UL

Doctoral thesis (2022)


The human brain is the most structurally and biochemically complex organ, and its broad spectrum of diverse functions is accompanied by a high energy demand. To meet this demand, brain cells of the central nervous system are organised in a complex and balanced ecosystem, and perturbation of brain energy metabolism is known to be associated with neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease. Among all the cells composing this ecosystem, astrocytes contribute metabolically by producing the primary energy substrate of life, ATP, and lactate, which can be exported to neurons to support their metabolism. Astrocytes have a star-shaped morphology, allowing them to connect on one side with blood vessels to take up glucose and on the other side with neurons to provide lactate. Astrocytes may also exhibit metabolic dysfunctions and modify their morphology in response to diseases. A mechanistic understanding of the morphology-dysfunction relation is still elusive. This thesis developed and applied a mechanistic multiscale modelling approach to investigate astrocytic metabolism in physiological morphologies from healthy and diseased human subjects. The complexity of cellular systems is a significant obstacle to investigating cellular behaviour. Systems biology tackles biological unknowns by combining computational and biological investigations. In order to address the elusive connection between metabolism and morphology in astrocytes, we developed a computational model of central energy metabolism in realistic morphologies. The underlying processes are described by a reaction-diffusion system that can represent cells more realistically by considering their actual three-dimensional shape, in contrast to classical ordinary differential equation models in which cells are assumed to be point-like, i.e. to have no spatial extent.
Thus, the computational model we developed integrates high-resolution microscopy images of astrocytes from human post-mortem brain samples and simulates glucose metabolism in different physiological astrocytic human morphologies associated with AD and healthy conditions. The first part of the thesis is dedicated to presenting a numerical approach that accommodates complex morphologies. We investigate the classical finite element method (FEM) and the cut finite element method (CutFEM) for simplified metabolic models in complex geometries. Establishing our image-driven numerical method leads to the second part of this thesis, where we investigate the crucial role played by the locations of reaction sites. We demonstrate that spatial organisation and chemical diffusivity play a pivotal role in the system output. Based on these new findings, we subsequently use microscopy images of healthy and Alzheimer's diseased human astrocytes to build simulations and investigate cell metabolism. In the last part of the thesis, we consider another critical process for astrocytic functionality: calcium signalling. The energy produced by metabolism is partially used for calcium exchange between cell compartments, and calcium can in turn drive the activity of mitochondria, a main ATP-generating entity. Thus, the active cross-talk between glucose metabolism and calcium signalling can significantly impact the metabolic functionality of cells and requires deeper investigation. For this purpose, we extend our established metabolic model with a calcium signalling module and investigate the coupled system in two-dimensional geometries. Overall, the investigations showed the importance of spatially organised metabolic modelling and paved the way for a new direction of image-driven, meshless modelling of metabolism. Moreover, we show that complex morphologies play a crucial role in metabolic robustness and how astrocytes' morphological changes under AD conditions lead to impaired energy metabolism.
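The contrast between a well-mixed ODE model and a spatial reaction-diffusion model can be sketched in one dimension: a substrate supplied at one boundary (the vessel side) diffuses along the domain while being consumed, producing a spatial gradient that a point-like model cannot represent. The parameters and geometry below are illustrative, not those of the thesis.

```python
# Minimal 1-D reaction-diffusion sketch: a substrate diffuses along a line
# of grid cells while being consumed at a first-order rate. An explicit
# finite-difference scheme; all parameters are illustrative.

D = 0.1          # diffusion coefficient
k = 0.05         # consumption (reaction) rate
dx, dt = 1.0, 0.1  # grid spacing and time step (D*dt/dx^2 = 0.01, stable)
n, steps = 20, 2000

u = [0.0] * n
for _ in range(steps):
    u[0] = 1.0                   # fixed supply at the left boundary (vessel side)
    new = u[:]
    for i in range(1, n - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        new[i] = u[i] + dt * (D * lap - k * u[i])
    new[-1] = new[-2]            # zero-flux boundary at the far end
    u = new
print(f"concentration near source {u[1]:.3f}, far end {u[-1]:.6f}")
```

A well-mixed ODE model would report a single concentration for the whole domain; the spatial model instead shows the exponential-like decay away from the supply, which is exactly the kind of morphology-dependent effect the thesis argues matters in star-shaped astrocytes.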

CREDIT CARDS AND CASHLESS PAYMENT: BANK COMMUNICATION POLICIES IN FRANCE, GERMANY AND LUXEMBOURG (1968-2015)
Vetter, Florian UL

Doctoral thesis (2022)


“Pecunia non olet”. Ironically, this Latin dictum strongly relates to the 20th and 21st centuries if one considers how banks constantly dematerialised money and changed the way society deals with deposits. By implementing quite radical changes to the concept of money, banks became an accelerating element for social and technological innovation. Our research project, within the field of computerisation and digitalisation, concentrates on banking activities and services from a European perspective. Banks’ communication regarding credit cards and cashless payments is at the heart of this research. The study intertwines several case studies in selected European countries (Luxembourg, Germany, France). In particular, it focuses on the following bank services: automated teller machines, bankcards (especially MasterCard and Eurocard) and home banking since the emergence of Minitel, Vidéotex, or Btx. The comparative and diachronic perspective of this study, starting from the 1960s onwards, aims at shedding light on a history which has often only been seen from an insider’s perspective. It should be noted that our focus is primarily the communication strategies of banks and their related advertising campaigns for credit cards and cashless payments. This is achieved by focusing on the strategy of the banks and their economic, technical, digital, but also societal approaches. The research topic relates to contemporary history and the history of digitalisation and innovation. In this context, press, audio-visual materials, banking reports, advertising, oral history, as well as web archives serve as primary sources. Moreover, bank archives in Luxembourg, France and Germany are used to complete the study corpus. All in all, the research results help us to understand the highly complex world of banking services from an unusual research angle.
The research topic therefore changes the current scientific standard of banking history by including the perspective of various actors of the European payment market as well as their perception of banking innovations over the years (1968–2015), and by analysing a European transnational corpus. Furthermore, by analysing the history of the Eurocard and its relation to MasterCard in a long-term perspective, we offer a novel approach. It helps to enrich the field of banking history, which is slowly changing and introducing different research angles, thanks to pioneering research by Bernardo Bátiz-Lazo, Sabine Effosse, David Sparks Evans, Richard Schmalensee, Lana Schwartz, Sebastian Gießmann and others. In this respect, this PhD research aims to add a milestone to historical research on banking innovation and retail banking, a field which is still in its early stages but is moving fast, driven forward in particular by the pioneers mentioned above.

Metamaterial Design and elaborative approach for efficient selective solar absorber
Khanna, Nikhar UL

Doctoral thesis (2022)

The thesis is focused on developing spectral selective coatings (SSC) composed of multilayer cermets and a periodic array of resonating omega structures, turning them into metamaterials while showing high thermal stability up to 1000 °C. The developed SSC is intended for concentrated solar power (CSP) applications, with the aim of achieving the highest possible absorbance in the visible region of the spectrum and the highest possible reflectance in the infrared region. The thesis covers the numerical design, the synthesis and the optical characterization of the SSC, of approximately 500 nm thickness. A bottom-up approach was adopted for the preparation of a stack with alternate layers, consisting of a distribution of Titanium Nitride (TiN) nanoparticles with a layer of Aluminum Nitride (AlN) on top. The TiN nanoparticles, laid on a Silicon substrate by a wet chemical method, are coated with a conformal layer of AlN via Plasma-enhanced Atomic Layer Deposition (PE-ALD). Control of the morphology at the nanoscale is fundamental for tuning the optical behaviour of the material. For this reason, two composites were prepared: one starting from a TiN dispersion made with dry TiN powder and deionized water, and the other from a ready-made TiN dispersion. Nano-structured metamaterial-based absorbers have many benefits over conventional absorbers, such as miniaturisation, adaptability and frequency tuning. Addressing the current challenges of producing a new metamaterial-based absorber with an optimal nanostructure design, and of synthesising it within current nano-technological limits, we were able to turn the cermets into a metamaterial. A periodic array of metallic omega structures was patterned on top of both composites I and II using the e-beam lithography technique.
Parameters such as the size of the TiN nanoparticles, the thickness of the AlN thin film and the dimensions of the omega structure were all determined by numerical simulations performed using the Wave Optics module in COMSOL Multiphysics. The work clearly compares the two kinds of composites using scanning electron microscopy, X-ray photoelectron spectroscopy (XPS) and electrical conductivity measurements. The improvement in the optical performance of the SSC after the inclusion of metallic omega structures in the uppermost layer of the two composites has been thoroughly investigated with respect to boosting light absorption. In addition, the optical performance of the two prepared composites and of the metamaterial is used as a means of validating the computational model.

Scale law on materials efficiency of electrocaloric materials
Nouchokgwe Kamgue, Youri Dilan UL

Doctoral thesis (2022)

Caloric materials are suggested as energy-efficient refrigerants for future cooling devices. They could replace the greenhouse gases used for decades in our air conditioners, fridges, and heat pumps. Among the four types of caloric materials (electrocaloric, barocaloric, elastocaloric, and magnetocaloric), electrocaloric materials are particularly promising, as applying large electric fields is much simpler and cheaper than applying the other driving fields. Research in recent years has focused on finding electrocaloric materials with high thermal responses; however, the energy efficiency crucial for a future replacement of vapor-compression technology has been overlooked, and the intrinsic efficiency of electrocaloric materials has barely been studied. The present dissertation studies the efficiency of EC materials, defined as the materials efficiency: the ratio of the reversible electrocaloric heat to the reversible electrical work required to drive this heat. We study the materials efficiency of the benchmark lead scandium tantalate in different shapes (bulk ceramics and multilayer capacitors), and a comparison to other caloric materials is presented. Our work gives more insight into the figure of merit of materials efficiency, with a view to further improving the efficiency of our devices.
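The ratio defined above can be sketched in a few lines. The sketch below takes the reversible electrocaloric heat per unit volume as the volumetric heat capacity times the adiabatic temperature change, and approximates the reversible electrical work per unit volume by E·ΔP (a first-order estimate of the integral of E dP). All numbers are invented for illustration and are not measured values for lead scandium tantalate; the thesis's actual definition and data should be consulted for real figures.

```python
def materials_efficiency(delta_t_ad, vol_heat_capacity, e_field, delta_p):
    """Illustrative materials efficiency: reversible electrocaloric heat
    per unit volume, Q = C_v * dT_ad, divided by the reversible electrical
    work per unit volume, W ~ E * dP. All quantities volumetric, SI units."""
    heat = vol_heat_capacity * delta_t_ad    # J/m^3
    work = e_field * delta_p                 # J/m^3
    return heat / work

# hypothetical numbers, chosen only to exercise the formula
eff = materials_efficiency(delta_t_ad=2.0,            # K
                           vol_heat_capacity=2.7e6,   # J/(m^3 K)
                           e_field=5e6,               # V/m
                           delta_p=0.2)               # C/m^2
```

With these made-up inputs the ratio comes out above one, which is unremarkable in itself: the point of the figure of merit is to compare the same material in different shapes (bulk ceramic vs. multilayer capacitor) and against other caloric families on a common footing.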

Mechanisms of Micropollutant Elimination in Vertical Flow Constructed Wetlands
Brunhoferova, Hana UL

Doctoral thesis (2022)

One of the biggest global challenges is the enormous growth of the population. With the growing population comes a corresponding rise in the production and release of anthropogenic compounds, which, due to insufficient wastewater treatment, become pollutants, more precisely micropollutants (MPs). The advanced wastewater treatment technologies presented in this dissertation are solutions applied for the targeted elimination of MPs. Ozonation, adsorption on activated carbon, and their combination are among the most used advanced wastewater treatment technologies in Europe; however, they are suited to the effluents of larger wastewater treatment plants. Therefore, an attempt has been made to test Constructed Wetlands (CWs) as an advanced treatment technology for the small-to-medium-sized WWTPs typical of rural areas in the catchment of the river Sûre, the geographical border between Luxembourg and Germany. The efficiency of CWs for the removal of 27 selected compounds has been tested at different scales (laboratory to pilot) in the Interreg Greater Region project EmiSûre 2017-2021 (Développement de stratégies visant à réduire l'introduction de micropolluants dans les cours d'eau de la zone transfrontalière germano-luxembourgeoise). The results of the project confirmed the high ability of CWs to remove MPs from municipal effluents. Given this evidence, the quantification of the main mechanisms contributing to the elimination of MPs within CWs was established as the main target of the present PhD research. The main mechanisms have been identified as adsorption on the soil of the wetland, phytoremediation by the wetland macrophytes and bioremediation by the wetland microorganisms. The thesis is cumulative; its core consists of the following four publications:
• Publication [I] describes the usage of CWs as a post-treatment step for municipal effluents.
• Publication [II] assesses the role of adsorption of the targeted MPs on the substrates used within the studied CWs and presents a characterization of the wetland substrates.
• Publication [III] describes the role of the wetland macrophytes in the phytoremediation of the targeted MPs within the studied CWs. Furthermore, it compares the different macrophyte types in varying vegetation stadia.
• Publication [IV] outlines the role of the wetland microbes in the bioremediation of the targeted MPs within the studied CWs. Moreover, the wetland microbes known to be able to digest MPs or to contribute to their elimination are identified and quantified.
Results suggest adsorption as the leading removal mechanism (average removal >80% for 18 of 27 compounds), followed by bioremediation (average removal >40% for 18 of 27 compounds) and phytoremediation (average removal <20% for 17 of 27 compounds). The research described contributes to extending knowledge about CWs applied for the elimination of MPs from water. Some of the outcomes (deepened knowledge about how soil influences adsorption, recommendations for adjusting operational parameters, etc.) could be used as tools for enhancing the wetland’s treatment efficiency. The research is concluded by recommendations for further investigations of the individual mechanisms (e.g. application of artificial aeration or circulation of the reaction matrix could enhance bioremediation).

A cosmopolitan international law: the authority of regional inter-governmental organisations to establish international criminal accountability mechanisms
Owiso, Owiso UL

Doctoral thesis (2022)

The overall aim of this thesis is to investigate the potential role of regional inter-governmental organisations (RIGOs) in international criminal accountability, specifically through the establishment of criminal accountability mechanisms, and to make a case for RIGOs’ active involvement. The thesis proceeds from the assumption that international criminal justice is a cosmopolitan project which demands that a tenable conception of state sovereignty guarantee humanity’s fundamental values, specifically human dignity. Since cosmopolitanism emphasises the equality and unity of the human family, guaranteeing the dignity and humanity of the human family is a common interest of humanity rather than a parochial endeavour. Accountability for international crimes is one way through which human dignity can be validated and reaffirmed where such dignity has been grossly and systematically assaulted. Therefore, while accountability for international crimes is primarily the obligation of individual sovereign states, this responsibility ultimately rests, residually, with humanity as a whole, exercisable through collective action. As such, the thesis advances the argument that states, as collective representations of humanity, have a responsibility to assist in ensuring accountability for international crimes where an individual state is either genuinely unable or unwilling to do so by itself. The thesis therefore addresses the question whether RIGOs, as collective representations of states and their peoples, can establish international criminal accountability mechanisms. Relying on cosmopolitanism as a theoretical underpinning, the thesis examines the exercise by RIGOs of what can be considered elements of sovereign authority in pursuit of the cosmopolitan objective of accountability for international crimes.
In so doing, the thesis interrogates whether there is a basis in international law for such engagement, and examines how such engagement can practically be undertaken, using two case studies: the European Union and the Kosovo Specialist Chambers and Specialist Prosecutor’s Office, and the African Union and the (proposed) Hybrid Court for South Sudan. The thesis concludes that general international law does not preclude RIGOs from exercising the elements of sovereign authority necessary for the establishment of international criminal accountability mechanisms, and that specific legal authority to engage in this regard can then be determined by reference to the doctrine of attributed/conferred powers and the doctrine of implied powers in interpreting the legal instruments of RIGOs. Based on this conclusion, the thesis makes a normative case for an active role for RIGOs in the establishment of international criminal accountability mechanisms, and provides a practical step-by-step guide to possible legal approaches for the establishment of such mechanisms by RIGOs, as well as guidance on possible design models for these mechanisms.

Multi-objective Robust Machine Learning For Critical Systems With Scarce Data
Ghamizi, Salah UL

Doctoral thesis (2022)

With the heavy reliance on Information Technologies in every aspect of our daily lives, Machine Learning (ML) models have become a cornerstone of these technologies’ rapid growth and pervasiveness; in particular, of the most critical and fundamental technologies that handle our economic systems, transportation, health, and even privacy. However, while these systems are becoming more effective, their complexity inherently decreases our ability to understand, test, and assess their dependability and trustworthiness. This problem becomes even more challenging in a multi-objective setting: when the ML model is required to learn multiple tasks together, behave under constrained inputs, or fulfill contradicting concomitant objectives. Our dissertation focuses on robust ML under limited training data, i.e., use cases where it is costly to collect additional training data and/or label it. We study this topic through the prism of three real use cases: fraud detection, pandemic forecasting, and chest X-ray diagnosis. Each use case covers one of the challenges of robust ML with limited data: (1) robustness to imperceptible perturbations, or (2) robustness to confounding variables. We provide a study of the challenges for each case and propose novel techniques to achieve robust learning. As the first contribution of this dissertation, in collaboration with BGL BNP Paribas, we demonstrate that their overdraft and fraud detection systems are prima facie robust to adversarial attacks because of the complexity of their feature engineering and domain constraints. However, we show that gray-box attacks that take domain knowledge into account can easily break their defense. We propose CoEva2 adversarial fine-tuning, a new defense mechanism based on multi-objective evolutionary algorithms, to augment the training data and mitigate the system’s vulnerabilities.
Next, we investigate how domain knowledge can protect against adversarial attacks through multi-task learning. We show that adding domain constraints in the form of additional tasks can significantly improve the robustness of models to adversarial attacks, particularly for the robot navigation use case. We propose a new set of adaptive attacks and demonstrate that adversarial training combined with such attacks can improve robustness. While the raw data available in the BGL and robot navigation use cases is vast, it is heavily cleaned, feature-engineered, and annotated by domain experts (which is expensive), so the resulting training data is scarce. In contrast, raw data itself is scarce when dealing with an outbreak, and designing robust ML systems to predict, forecast, and recommend mitigation policies is challenging, particularly for small countries like Luxembourg. Contrary to common techniques that forecast new cases from previous time-series data, we propose a novel surrogate-based optimization as an integrated loop: it combines a neural-network prediction of the infection rate based on mobility attributes with a model-based simulation that predicts the cases and deaths. Our approach has been used by the Luxembourg government’s task force and was recognized with a best paper award at KDD2020. Our following work focuses on the challenges that confounding factors pose to the robustness and generalization of chest X-ray (CXR) classification. We first investigate the robustness and generalization of multi-task models, then demonstrate that multi-task learning, leveraging the confounding variables, can significantly improve the generalization and robustness of CXR classification models. Our results suggest that task augmentation with additional knowledge (like extraneous variables) outperforms state-of-the-art data augmentation techniques in improving test and robust performances.
Overall, this dissertation provides insights into the importance of domain knowledge for the robustness and generalization of models. It shows that instead of building data-hungry ML models, particularly for critical systems, a better understanding of the system as a whole and its domain constraints yields improved robustness and generalization performance. This dissertation also proposes theorems, algorithms, and frameworks to effectively assess and improve the robustness of ML systems for real-world cases and applications.
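The imperceptible-perturbation threat discussed above can be made concrete with the textbook Fast Gradient Sign Method on a logistic-regression score. This is generic FGSM, not the thesis's CoEva2 defense or its constraint-aware gray-box attacks; all data below is random and purely illustrative.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a logistic-regression score: move x
    by eps along the sign of the loss gradient, increasing the
    cross-entropy loss for the true label y. Textbook sketch only."""
    z = float(w @ x + b)
    p = 1.0 / (1.0 + np.exp(-z))        # predicted P(y = 1 | x)
    grad_x = (p - y) * w                # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0          # a toy linear classifier
x, y = rng.normal(size=5), 1.0          # a point with true label 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
```

Because each feature moves by at most eps, the perturbation stays small, yet the classifier's score for the true class strictly drops; defenses like adversarial fine-tuning retrain the model on exactly such perturbed points.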

WCET and Priority Assignment Analysis of Real-Time Systems using Search and Machine Learning
Lee, Jaekwon UL

Doctoral thesis (2022)

Real-time systems have become indispensable for human life as they are used in numerous industries, such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether systems meet their time constraints, engineers perform schedulability analysis from early stages and throughout development. However, obtaining precise results from schedulability analysis is challenging, because it requires estimating the worst-case execution times (WCETs) and assigning optimal priorities to tasks. Estimating WCET is an important activity at early design stages of real-time systems. Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. In practice, however, engineers often cannot provide a precise point estimate of a WCET; they prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize the safety margins to their deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is hard, because their evaluation relies on uncertain WCET values and complex engineering constraints must be accounted for. This dissertation proposes three approaches to estimate WCETs and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first suggest an automatic approach to infer safe WCET ranges with a probabilistic guarantee, based on the worst-case scheduling scenarios.
We then introduce an extended approach to account for weakly hard real-time systems, using an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that they can estimate probabilistic safe WCET ranges efficiently and accurately, so that the deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach deals with multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm. Evaluation on synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours.
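The schedulability analysis that such approaches build on can be illustrated with classic response-time analysis for fixed-priority preemptive scheduling. This is a textbook building block, not the thesis's method: the thesis searches over uncertain WCETs and priority orders, whereas the sketch below assumes fixed WCETs, implicit deadlines (deadline = period), and a given priority order. The task set is invented.

```python
def response_times(tasks):
    """Response-time analysis for fixed-priority preemptive scheduling.
    tasks: list of (wcet, period) pairs, highest priority first, with
    deadlines equal to periods. Returns each task's worst-case response
    time, or None where the fixed-point iteration exceeds the deadline."""
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # interference from all higher-priority tasks (ceil division)
            interference = sum(-(-r // t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:
                results.append(None)      # task can miss its deadline
                break
            if r_next == r:
                results.append(r)         # fixed point reached
                break
            r = r_next
    return results

# three invented tasks (WCET, period), highest priority first
rts = response_times([(1, 4), (2, 6), (3, 12)])
```

For this task set the analysis converges to response times 1, 3 and 10, all within their periods, so every task meets its deadline; the safety margin that the thesis's priority-assignment search maximizes is precisely the gap between each response time and its deadline.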

Artificial Intelligence-enabled Automation For Ambiguity Handling And Question Answering In Natural-language Requirements
Ezzini, Saad UL

Doctoral thesis (2022)

Requirements Engineering (RE) quality control is a crucial step for a project’s success. Natural Language (NL) is by far the most commonly used means for capturing requirement specifications. Despite facilitating communication, NL is prone to quality defects, one of the most notable of which is ambiguity. Ambiguous requirements can lead to misunderstandings and eventually result in a system that is different from what is intended, wasting time, money, and effort in the process. This dissertation tackles selected quality issues in NL requirements:
• Using Domain-specific Corpora for Improved Handling of Ambiguity in Requirements: Syntactic ambiguity types occurring in coordination and prepositional-phrase attachment structures are prevalent in requirements (in our document collection, as we discuss in Chapter 3, 21% and 26% of the requirements are subject to coordination and prepositional-phrase attachment ambiguity analysis, respectively). We devise an automated solution based on heuristics and patterns for improved handling of coordination and prepositional-phrase attachment ambiguity in requirements. As a prerequisite for this research, we further develop a more broadly applicable corpus generator that creates a domain-specific knowledge resource by crawling Wikipedia.
• Automated Handling of Anaphoric Ambiguity in Requirements: A Multi-solution Study: Anaphoric ambiguity is another prevalent ambiguity type in requirements; estimates from the RE literature suggest that nearly 20% of industrial requirements contain anaphora [1, 2]. We conducted a multi-solution study for anaphoric ambiguity handling. Our study investigates six alternative solutions based on three different technologies: (i) off-the-shelf natural language processing (NLP), (ii) recent NLP methods utilizing language models, and (iii) machine learning (ML).
• AI-based Question Answering Assistant for Analyzing NL Requirements: Understanding NL requirements requires domain knowledge that is not necessarily shared by all the involved stakeholders. We develop an automated question-answering assistant that supports requirements engineers during requirements inspections and quality assurance. Our solution uses advanced information retrieval techniques and machine reading comprehension models to answer questions from the same requirements specification document and/or an external domain-specific knowledge resource.
All the research components in this dissertation are tool-supported. Our tools are released with open-source licenses to encourage replication and reuse.
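To make the notion of anaphoric ambiguity concrete, here is a deliberately crude, purely illustrative heuristic: it flags a requirement when a pronoun appears alongside two or more candidate referents. The thesis's actual solutions rely on NLP parsers, language models, and ML, not on regexes like this; the example requirement and all patterns are invented.

```python
import re

def flag_anaphora(requirement):
    """Rough heuristic flag for potential anaphoric ambiguity: a pronoun
    together with at least two distinct article-introduced noun phrases
    that could serve as its antecedent. Illustration only; real anaphora
    handling needs syntactic and semantic analysis."""
    pronouns = re.findall(r"\b(it|its|they|them|this|these)\b",
                          requirement, flags=re.IGNORECASE)
    nouns = re.findall(r"\b(?:the|a|an)\s+(\w+)", requirement,
                       flags=re.IGNORECASE)
    return bool(pronouns) and len(set(n.lower() for n in nouns)) >= 2

req = ("The controller sends a message to the gateway before it "
       "enters sleep mode.")
flag = flag_anaphora(req)   # 'it' could refer to the controller or the gateway
```

In the example, "it" may refer to the controller or to the gateway, exactly the kind of referential uncertainty that, per the estimates cited above, affects nearly a fifth of industrial requirements.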

A Holistic Methodology to Deploy Industry 4.0 in Manufacturing Enterprises
Kolla, Sri Sudha Vijay Keshav UL

Doctoral thesis (2022)

In the last decade, the manufacturing industry has seen a shift in the way products are produced due to the integration of digital technologies into existing manufacturing systems. This transformation is often referred to as Industry 4.0 (I4.0), which promises to deliver cost efficiency, mass customization, operational agility, traceability, and service orientation. To realize the potential of I4.0, integrating physical and digital elements using advanced technologies is a prerequisite. Large manufacturing companies have been embracing the I4.0 transformation swiftly. However, Small and Medium-sized Enterprises (SMEs) face challenges in terms of the skills and capital required for a smooth digital transformation. The goal of this thesis is to understand the features of a typical manufacturing SME and map them onto existing (e.g. lean) and I4.0 manufacturing systems. The mapping is then used to develop a Self-Assessment Tool (SAT) to measure the maturity of a manufacturing entity. The SAT developed in this research has a critical SME focus; however, its scope is not limited to SMEs, and it can also be used for large companies. The analysis of the maturity of manufacturing companies revealed that their managerial dimensions are more mature than their technical dimensions. Therefore, this thesis attempts to fill the gap in the technical dimensions, especially Augmented Reality (AR) and the Industrial Internet of Things (IIoT), through laboratory experiments and industrial validation. A holistic method is proposed to introduce I4.0 technologies in manufacturing enterprises based on maturity assessment, observations, a technical roadmap, and applications. The method proposed in this research includes the SAT, which measures the maturity of a manufacturing company in five categorical domains (dimensions): Strategy, Process and Value Stream, Organization, Methods and Tools, and Personnel.
Furthermore, these dimensions are broken down into 36 modules, which help manufacturing companies measure their maturity level in terms of lean and I4.0. The SAT was tested in 100 manufacturing enterprises in the Grande Région, including a pilot study (n=20) and a maturity assessment (n=63). The observations from the assessment were then used to set up the technological roadmap for the research. AR and IIoT, the two technologies associated with the least mature modules, are explored in depth in this thesis. A holistic method is incomplete without industrial validation; therefore, the above-mentioned technologies were applied in two manufacturing companies for further validation of the laboratory results. These applications include 1) the application of AR for maintenance and quality inspection in the tire manufacturing industry, and 2) the application of retrofitting technology for IIoT on a production machine in an SME. With the validated assessment model and the industrial applications, this thesis presents an overall holistic approach to introducing I4.0 technologies in manufacturing enterprises. This is accomplished by identifying the status of the company using a maturity assessment and deriving the I4.0 roadmap for high-potential modules. The skill gap in the addressed technologies is compensated for by designing and testing prototypes in the laboratory before applying them in industry.
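The rolling-up of module ratings into dimension scores can be sketched as follows. The dimension names come from the abstract above, but the module names, the module-to-dimension mapping, and the ratings are all invented; the real SAT has 36 modules and its own aggregation scheme.

```python
def maturity_scores(module_scores):
    """Aggregate per-module maturity ratings (e.g. on a 1-5 scale) into
    per-dimension averages, mirroring the idea of a self-assessment tool
    that rolls modules up into dimensions. Mapping and ratings invented."""
    totals = {}
    for (dimension, _module), score in module_scores.items():
        totals.setdefault(dimension, []).append(score)
    return {dim: sum(s) / len(s) for dim, s in totals.items()}

# hypothetical ratings for a handful of made-up modules
scores = maturity_scores({
    ("Strategy", "digital vision"): 3,
    ("Strategy", "roadmap"): 2,
    ("Methods and Tools", "AR support"): 1,
    ("Methods and Tools", "IIoT connectivity"): 2,
    ("Personnel", "training"): 4,
})
```

In this toy assessment the technical dimension (Methods and Tools) scores lowest, which is exactly the pattern the thesis reports across its surveyed enterprises and the reason AR and IIoT are pursued further.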

TOPICS IN COMPUTATIONAL NUMBER THEORY AND CRYPTANALYSIS - On Simultaneous Chinese Remaindering, Primes, the MiNTRU Assumption, and Functional Encryption
Barthel, Jim Jean-Pierre UL

Doctoral thesis (2022)

This thesis reports on four independent projects that lie in the intersection of mathematics, computer science, and cryptology: Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks to find all integer solutions to a given system of congruences where each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart where for each modulus the single remainder is replaced by a non-empty set of remainders. The solutions of a Simultaneous Chinese Remainder Problem instance are completely defined by a set of minimal positive solutions, called primitive solutions, which are upper bounded by the lowest common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that if the existence of solutions is guaranteed, then deciding whether a solution of a particular size exists is also NP-complete. Despite these discouraging results, we studied methods to find the minimal solution to Simultaneous Chinese Remainder Problem instances and we discovered some interesting statistical properties. A Conjecture On Primes In Arithmetic Progressions And Geometric Intervals: Dirichlet’s theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a, there are infinitely many primes in the arithmetic progression a + nq (n ∈ N), however, it does not indicate where those primes can be found. Linnik’s theorem predicts that the first such prime p0 can be found in the interval [0;q^L] where L denotes an absolute and explicitly computable constant. 
Although only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik's theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t;q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we verify it computationally for all sufficiently small moduli q, and we investigate its relation to other mathematical results, such as Carmichael's totient function conjecture.

On The (M)iNTRU Assumption Over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption claiming that first adding a random low-norm error vector to a known gadget vector and then multiplying the result by a secret vector is sufficient to obfuscate the secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Although these assumptions are strongly reminiscent of the well-known learning-with-errors (LWE) assumption, their hardness has not yet been studied in full detail. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Elementary lattice reduction thereby allows us to distinguish a random challenge vector from a synthetically constructed one.

A Conditional Attack Against Functional Encryption Schemes: Functional encryption emerged as an ambitious cryptographic paradigm supporting function evaluations over encrypted data that reveal the result in the clear. The result consists either of a valid output or of a special error symbol.
We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting rests on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows functions to be evaluated over encrypted messages, the adversary is restricted to evaluations that yield the same output. To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluations lead to the common error symbol while their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when they are restricted to bounded-norm evaluations.
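The contrast between the classical and the simultaneous Chinese Remainder Problem described above can be made concrete with a brute-force sketch (a hypothetical helper, exponential in the worst case exactly as the abstract predicts):

```python
from math import lcm

def primitive_solutions(system):
    """system: list of (modulus, set_of_admissible_remainders).
    Return the primitive solutions of a Simultaneous Chinese Remainder
    instance, i.e. all x with 0 <= x < lcm(moduli) such that
    x mod m lies in R for every pair (m, R). Brute force over one period."""
    L = lcm(*(m for m, _ in system))
    return [x for x in range(L) if all(x % m in r for m, r in system)]

# Classical case (one remainder per modulus): at most one primitive solution.
print(primitive_solutions([(3, {2}), (5, {3}), (7, {2})]))   # → [23]

# Simultaneous case: remainder sets multiply, here 2 × 2 = 4 primitive
# solutions below lcm(3, 5) = 15.
print(primitive_solutions([(3, {0, 1}), (5, {2, 4})]))       # → [4, 7, 9, 12]
```

The second call illustrates why the number of primitive solutions, and hence the running time of any general-purpose solver, can grow exponentially with the number of moduli.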

Secure, privacy-preserving and practical collaborative Genome-Wide Association Studies
Pascoal, Túlio UL

Doctoral thesis (2022)

Understanding the interplay between genomics and human health is a crucial step for the advancement and development of our society. The Genome-Wide Association Study (GWAS) is one of the most popular methods for discovering correlations between genomic variations and a particular phenotype (i.e., an observable trait such as a disease). Leveraging genome data from multiple institutions worldwide is nowadays essential to produce more powerful findings by operating GWAS at a larger scale. However, this raises several security and privacy risks, not only in the computation of such statistics but also in the public release of GWAS results. To that end, several solutions in the literature have adopted cryptographic approaches to allow secure and privacy-preserving processing of genome data for federated analysis. However, conducting federated GWAS in a secure and privacy-preserving manner is not enough, since the public releases of GWAS results may still be vulnerable to known genomic privacy attacks, such as recovery and membership attacks. The present thesis explores solutions to enable end-to-end privacy-preserving federated GWAS, in line with data privacy regulations such as the GDPR, that secure the public release of the results of Genome-Wide Association Studies (GWASes) which are dynamically updated as new genomes become available, which may overlap in their participating genomes and in the genome locations considered, which withstand internal threats such as colluding members of the federation, and which are computed in a distributed manner without shipping actual genome data. In pursuit of these goals, this work makes the contributions described below. First, the thesis proposes DyPS, a Trusted Execution Environment (TEE)-based framework that reconciles efficient and secure genome data outsourcing with privacy-preserving data processing inside TEE enclaves to assess and create private releases of dynamic GWAS.
In particular, DyPS establishes the conditions for creating safe dynamic releases, certifying that the solution space which an external probabilistic polynomial-time (p.p.t.) adversary, or a group of colluders (up to all but one of the parties), would need to search when launching recovery attacks on observed GWAS statistics is large enough. In addition, DyPS executes an exhaustive verification algorithm together with a likelihood-ratio test to measure the probability of identifying individuals in studies, thereby also protecting individuals against membership inference attacks. Only the safe genome data (i.e., genomes and SNPs) that DyPS selects is used for the computation and release of GWAS results, while the remaining (unsafe) data is kept secluded and protected inside the enclave until it can eventually be used. Our results show that if dynamic releases are improperly evaluated, up to 8% of genomes could be exposed to genomic privacy attacks. Moreover, the experiments show that DyPS's TEE-based architecture can accommodate the computational resources demanded by our algorithms and delivers practical running times for larger-scale GWAS. Secondly, the thesis offers I-GWAS, which identifies the new conditions for safe releases in the presence of overlapping data among multiple GWASes (e.g., the same individuals participating in several studies). Indeed, we show that adversaries can leverage information from overlapping data to make both recovery and membership attacks feasible again (even if the releases are produced following the conditions for safe single-GWAS releases). Our experiments show that up to 28.6% of the genetic variants of participants could be inferred during recovery attacks, and 92.3% of these variants would enable membership attacks by adversaries observing overlapping studies; both are withheld by I-GWAS.
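The membership risk that the likelihood-ratio test guards against can be illustrated with a toy, Homer/Sankararaman-style statistic on synthetic SNP data. Everything below (the Bernoulli allele model, the pool size, the smoothing) is an illustrative assumption, not the thesis's actual procedure:

```python
import math
import random

random.seed(0)
N_SNPS, POOL_SIZE = 1000, 20

# Reference population allele frequencies and a synthetic GWAS pool.
ref = [random.uniform(0.1, 0.9) for _ in range(N_SNPS)]
pool = [[1 if random.random() < f else 0 for f in ref]
        for _ in range(POOL_SIZE)]
# "Released" statistics: smoothed pool allele frequencies.
released = [(sum(g[i] for g in pool) + 0.5) / (POOL_SIZE + 1)
            for i in range(N_SNPS)]

def lr_statistic(genome):
    """Log-likelihood ratio of 'genome is in the pool' versus 'genome is a
    draw from the reference population'. Large values suggest membership."""
    return sum(x * math.log(p / q) + (1 - x) * math.log((1 - p) / (1 - q))
               for x, p, q in zip(genome, released, ref))

member = lr_statistic(pool[0])
outsider = lr_statistic([1 if random.random() < f else 0 for f in ref])
# With enough SNPs, the member's statistic separates from the outsider's,
# which is why releases must be withheld when this test becomes powerful.
```

A privacy-preserving release mechanism such as the one sketched in the abstract would refuse to publish `released` whenever this separation is statistically significant.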
Lastly, the thesis presents GenDPR, which extends our protocols so that the privacy-verification algorithms can be conducted distributively among the federation members, without requiring genome data to be outsourced across boundaries. Further, GenDPR copes with collusion among participants while selecting the genome data that can be used to create safe releases. Additionally, GenDPR produces the same privacy guarantees as centralized architectures, i.e., it correctly identifies and selects the same data in need of protection as centralized approaches do. Finally, the thesis presents a homogenized framework comprising DyPS, I-GWAS and GenDPR simultaneously, offering a usable approach for conducting practical GWAS. The chosen protection method is statistical in nature: it ensures that the theoretical complexity of attacks remains high, and it withholds releases of statistics that would expose participants to membership inference risks, as determined by likelihood-ratio tests, even as adversaries gain additional information over time. The thesis also relates these findings to other techniques that can be leveraged to protect releases, such as Differential Privacy. The proposed solutions leverage Intel SGX as the Trusted Execution Environment to perform selected critical operations in a performant manner; however, the work translates equally well to other trusted execution environments and to other schemes, such as Homomorphic Encryption.

Interleukin-6 signalling and long non-coding RNAs in liver cancer
Minoungou, Wendkouni Nadège UL

Doctoral thesis (2022)

Hepatocellular carcinoma (HCC), the main form of primary liver cancer, is the second leading cause of cancer-related deaths worldwide after lung cancer. Multiple aetiologies have been associated with the development of HCC, which arises in most cases in the context of a chronically inflamed liver. HCC is in fact an inflammation-driven cancer, with the TNF and IL6 families of cytokines playing key roles in maintaining a chronic inflammatory state and promoting hepatocarcinogenesis. IL6 signals mainly through the JAK1/STAT3 signal transduction pathway and is known to play key roles in liver physiology and disease. In the interest of identifying novel players and downstream effectors of the IL6/JAK1/STAT3 signalling pathway that may contribute to IL6 signal transduction in liver-derived cells, we investigated the expression of long non-coding RNAs (lncRNAs) in response to treatment with the designer cytokine Hyper-IL6. Indeed, lncRNAs have recently emerged as a key layer of biological regulation and have been shown to be differentially expressed in cancer, including HCC. Upon analysis of time-series transcriptomics data, we identified hundreds of lncRNAs that are differentially expressed in HepG2, HuH7, and Hep3B hepatoma cells upon cytokine stimulation, 26 of which are common to the three cell lines tested. qPCR validation experiments were performed for several lncRNAs, such as the liver-specific lncRNA linc-ELL2. By functionally characterising identified clusters of IL6-regulated coding and non-coding genes in hepatoma cells, we propose, based on a guilt-by-association hypothesis, novel functions for previously poorly characterised lncRNAs and pseudogenes such as AL391422.4 or TUBA5P. Several lncRNA genes appear to be co-regulated with a protein-coding gene localised in their vicinity. For example, Hyper-IL6 increases the mRNA and protein levels of XBP1, a well-known regulator of the unfolded protein response.
At the same time, the expression of the lncRNA AF086143, which is expressed from the same gene locus in a bidirectional manner, increases. Targeted as well as genome-wide analyses of lncRNA/mRNA gene pairs indicate a possible cis-regulatory role of lncRNAs with regard to their antisense and bidirectional protein-coding counterparts. Taken together, these results provide a comprehensive characterisation of the lncRNA and pseudogene repertoire of IL6-regulated genes in hepatoma cells. Our results emphasise lncRNAs as crucial components of the gene regulatory networks affected by cytokine signalling pathways.

Magnetic Guinier Law and Uniaxial Polarization Analysis in Small Angle Neutron Scattering
Malyeyev, Artem UL

Doctoral thesis (2022)

The present PhD thesis is devoted to developing the use of the magnetic small-angle neutron scattering (SANS) technique for analyzing the magnetic microstructures of magnetic materials. The emphasis is on three aspects: (i) the analytical development of the magnetic Guinier law; (ii) the application of the magnetic Guinier law and of the generalized Guinier-Porod model to the analysis of experimental neutron data on various magnets, such as a Nd-Fe-B nanocomposite, nanocrystalline cobalt, and Mn-Bi rare-earth-free permanent magnets; and (iii) the development of the theory of uniaxial neutron polarization analysis and its experimental testing on a soft magnetic nanocrystalline alloy. The conventional "nonmagnetic" Guinier law represents the low-q approximation of the small-angle scattering curve from an assembly of particles. It was derived for nonmagnetic particle-matrix-type systems and is routinely employed for the estimation of particle sizes in, e.g., soft-matter physics, biology, colloidal chemistry, and materials science. Here, the Guinier law is extended to magnetic SANS through the introduction of the magnetic Guinier radius, which depends on the applied magnetic field, on the magnetic interactions (exchange constant, saturation magnetization), and on the magnetic anisotropy-field radius. The latter quantity characterizes the size of the regions over which the magnetic anisotropy field is coherently aligned in the same direction. In contrast to the conventional Guinier law, the magnetic version can be applied to fully dense random-anisotropy-type ferromagnets. The range of applicability is discussed and the validity of the approach is experimentally demonstrated on a Nd-Fe-B-based ternary permanent magnet and on a nanocrystalline cobalt sample. Rare-earth-free permanent magnets in general, and Mn-Bi-based ones in particular, have received much attention lately due to their application potential in electronic devices and electric motors.
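For reference, the conventional (nonmagnetic) Guinier law mentioned above approximates the low-q scattering intensity as the standard textbook result (the field-dependent magnetic generalization derived in the thesis is not reproduced here):

```latex
I(q) \;\cong\; I(0)\,\exp\!\left(-\frac{q^{2} R_{G}^{2}}{3}\right),
\qquad q\,R_{G} \lesssim 1 ,
```

where R_G denotes the radius of gyration of the particle; the magnetic Guinier law replaces R_G by a field-dependent magnetic Guinier radius.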
Mn-Bi samples with three different alloy compositions were studied by means of unpolarized SANS and very small-angle neutron scattering (VSANS). It turns out that the magnetic scattering of the Mn-Bi samples is determined by long-wavelength transversal magnetization fluctuations. The neutron data are analyzed in terms of the generalized Guinier-Porod model and the distance distribution function. The results for the so-called dimensionality parameter obtained from the Guinier-Porod model indicate that the magnetic scattering of a Mn$_{45}$Bi$_{55}$ specimen originates from slightly shape-anisotropic structures, and the same conclusion is drawn from the distance distribution function analysis. Finally, based on Brown's static equations of micromagnetics and the related theory of magnetic SANS, the uniaxial polarization of the neutron beam scattered by a bulk magnetic material is computed. The theoretical expressions are tested against experimental data on a soft magnetic nanocrystalline alloy, and both qualitative and quantitative correspondence is discussed. The rigorous analysis of the polarization of the scattered neutron beam establishes the framework for emerging polarized real-space techniques, such as spin-echo small-angle neutron scattering (SESANS), spin-echo modulated small-angle neutron scattering (SEMSANS), and polarized neutron dark-field contrast imaging (DFI), and opens up a new avenue for magnetic neutron data analysis on nanoscale systems.

Docteur
Iglesias González, Alba UL

Doctoral thesis (2022)

The last century has been characterized by the increasing presence of synthetic chemicals in human surroundings and, as a consequence, the increasing exposure of individuals to a wide variety of chemical substances on a regular basis. The Lancet Commission on Pollution and Health estimated that since synthetic chemicals became available for common use at the end of the 1940s, more than 140,000 new chemicals have been produced, including five thousand used globally in massive volumes. In parallel, awareness of the adverse effects of pollutant mixtures, possibly more severe than those of single-chemical exposures, has drawn attention to the need for multi-residue analytical methods that provide the most comprehensive information on the human chemical exposome. Human biomonitoring, consisting of the measurement of pollutants in biological matrices, provides information that integrates all possible sources of exposure and is specific to the subject the sample is collected from. For this purpose, hair appears to be a particularly promising matrix for assessing chemical exposure thanks to its multiple benefits. Hair enables the detection of both parent chemicals and metabolites, is suitable for investigating exposure to chemicals from different families, and allows the detection of persistent and non-persistent chemicals. Moreover, contrary to fluids such as urine and blood, which only give information on short-term exposure and show great variability in chemical concentration, hair is representative of wider time windows that can easily cover several months. Children represent the most vulnerable part of the population, and exposure to pollutants at a young age has been associated with severe health effects during childhood but also during adult life. Nevertheless, most epidemiological studies investigating exposure to pollutants are still conducted on adults, and data on children remain much more limited.
The present study, named "Biomonitoring of children exposure to pollutants based on hair analysis", investigated the relevance of hair analysis for assessing children's exposure to pollutants. In this study, 823 hair samples were collected from children and adolescents living in 9 different countries (Luxembourg, France, Spain, Uganda, Indonesia, Ecuador, Suriname, Paraguay and Uruguay), and 117 hair samples were also collected from French adults. All samples were analysed for the detection of 153 organic compounds (140 pesticides, 4 PCBs, 7 BDEs and 2 bisphenols). Moreover, the hair samples of French adults and children were also analysed for the detection of polycyclic aromatic hydrocarbons (PAHs) and their metabolites (n = 62), nicotine, cotinine and metals (n = 36). The results clearly demonstrated that children living in different geographical areas are simultaneously exposed to multiple chemicals from different chemical classes. Furthermore, the presence of persistent organic pollutants in all children, and not only in adults, suggests that exposure to these chemicals is still ongoing, although they were banned decades ago. In the sub-group of Luxembourgish children, information collected through questionnaires in parallel with hair sample collection allowed the identification of some possible determinants of exposure, such as diet (organic vs conventional), residence area (urban vs countryside), and the presence of pets at home. Moreover, the results showed higher concentration levels in younger children, and higher exposure of boys than girls to non-persistent pesticides, which could possibly be attributed to differences in metabolism, behaviour and gender-specific activities. Finally, the study also highlighted a high level of similarity in the chemical exposome between children from the same family compared to the rest of the population.
The present study strongly supports the use of hair analysis for assessing exposure to chemical pollutants and demonstrates the relevance of multi-residue methods for investigating the exposome.

Modeling and Control of Laser Wire Additive Manufacturing
Mbodj, Natago Guilé UL

Doctoral thesis (2022)

Metal Additive Manufacturing (MAM) offers many advantages, such as fast product manufacturing, nearly zero material waste, prototyping of complex large parts, and the automation of the manufacturing process in the aerospace, automotive and other sectors. In MAM, several parameters influence the product creation steps, making MAM challenging. In this thesis, we model and control the deposition process for a type of MAM in which a laser beam melts a metallic wire to create the metal parts, called the Laser Wire Additive Manufacturing (LWAM) process. First, a novel parametric modeling approach is created. The goal of this approach is to use parametric product design features to simulate and print 3D metallic objects for LWAM. The proposed method includes pattern and robot toolpath creation while considering several process requirements of LWAM, such as the deposition sequences and the robot system. This technique aims to develop adaptive robot toolpaths for a precise deposition process with nearly zero error in product creation. Second, a layer geometry (width and height) prediction model is proposed to improve deposition accuracy. A machine learning regression algorithm is applied to several experimental datasets to predict the bead geometry across layers. Furthermore, a neural network-based approach is used to study the influence of different deposition parameters, namely laser power, wire-feed rate and travel speed, on the bead geometry. The experimental results show that the model has an error rate of roughly 2-4%. Third, a physics-based model of the bead geometry, including known process parameters and material properties, is created.
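The regression step described above, predicting bead geometry from process parameters, can be caricatured with a toy ordinary-least-squares fit. The linear form, the coefficients and the synthetic data below are invented for this sketch; the thesis uses machine-learning and neural-network models on real experimental data:

```python
# Toy fit of layer height from laser power P, wire-feed rate F, travel speed S.

def solve(A, b):
    """Solve the square linear system A x = b by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_height(samples):
    """samples: list of ((P, F, S), height). Fit height ~ a*P + b*F + c*S + d
    via the normal equations (X^T X) w = X^T y."""
    X = [[p, f, s, 1.0] for (p, f, s), _ in samples]
    y = [h for _, h in samples]
    n = len(X)
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(4)]
           for i in range(4)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(4)]
    return solve(XtX, Xty)

# Synthetic "ground truth": height grows with power and feed, drops with speed.
def truth(P, F, S):
    return 0.002 * P + 0.15 * F - 0.08 * S + 0.3

data = [((P, F, S), truth(P, F, S))
        for P in (1500.0, 2000.0, 2500.0)
        for F in (2.0, 3.0, 4.0)
        for S in (5.0, 7.5, 10.0)]
a, b, c, d = fit_height(data)   # recovers about (0.002, 0.15, -0.08, 0.3)
```

On noiseless linear data the fit recovers the generating coefficients essentially exactly; real bead-geometry data is noisy and nonlinear, which is why the thesis resorts to neural networks.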
The model is the first to include critical process parameters, material properties and the thermal history in describing the relationship between the layer height and the different process inputs (i.e., the power, the standoff distance, the temperature, the wire-feed rate and the travel speed). The numerical results show good agreement between the model and the experimental measurements. Finally, a Model Predictive Controller (MPC) is designed to keep the layer height trajectory constant, considering the constraints and operating ranges of the process inputs. The model simulation results show acceptable tracking of the reference height.

Modelling complex systems in the context of the COVID-19 pandemics
Kemp, Francoise UL

Doctoral thesis (2022)

Systems biology is an interdisciplinary approach that investigates complex biological systems at different levels, combining experimental and modelling approaches to understand the mechanisms underlying health and disease. Complex systems, including biological systems, are shaped by a plethora of interactions and dynamic processes, often with the aim of ensuring robustness of emergent system properties. The need for interdisciplinary approaches became very evident in the recent COVID-19 pandemic, which has spread around the globe since the end of 2019. This pandemic came with a bundle of urgent open epidemiological questions, including the infection and transmission mechanisms of the virus, its pathogenicity, and its relation to clinical symptoms. During the pandemic, mathematical modelling became an essential tool for integrating biological and healthcare data into mechanistic frameworks for projections of future developments and the assessment of different mitigation strategies. In this regard, systems biology, with its interdisciplinary approach, was a widely applied framework to support society in the COVID-19 crisis. In my thesis, I applied different mathematical modelling approaches to identify the mechanisms underlying the complex dynamics of the COVID-19 pandemic, with a specific focus on the situation in Luxembourg. For this purpose, I analysed the COVID-19 pandemic at its different phases and from various perspectives, investigating mitigation strategies, consequences for the healthcare and economic systems, and pandemic preparedness in terms of early-warning signals for the re-emergence of new COVID-19 outbreaks, using extended and adapted epidemiological Susceptible-Exposed-Infectious-Recovered (SEIR) models.
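The basic SEIR skeleton that such extended models build on can be sketched with a forward-Euler integration (parameter values are illustrative, not those fitted for Luxembourg):

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the standard SEIR model on population
    fractions: dS = -beta*S*I, dE = beta*S*I - sigma*E,
    dI = sigma*E - gamma*I, dR = gamma*I."""
    new_inf = beta * s * i
    return (s - new_inf * dt,
            e + (new_inf - sigma * e) * dt,
            i + (sigma * e - gamma * i) * dt,
            r + gamma * i * dt)

# Illustrative parameters: R0 = beta/gamma = 2.5, 5-day latency,
# 7-day infectious period; 0.1% of the population initially infectious.
beta, sigma, gamma = 2.5 / 7.0, 1.0 / 5.0, 1.0 / 7.0
s, e, i, r = 0.999, 0.0, 0.001, 0.0
dt, days = 0.1, 180
for _ in range(int(days / dt)):
    s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
# After the outbreak runs its course, r approaches the classic
# final-size fraction (roughly 0.89 for R0 = 2.5).
```

Extensions of the kind used in the thesis add further compartments (e.g., hospitalisation) and time-varying transmission rates to capture mitigation measures.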

Deciphering the role of colorectal cancer-associated bacteria in the fibroblast-tumor cell interaction
Karta, Jessica UL

Doctoral thesis (2022)

Dysbiosis is an imbalance in the gut microbiome that is often associated with inflammation and cancer. Several microbial species, such as Fusobacterium nucleatum, have been suggested to be involved in colorectal cancer (CRC). To date, most studies have focused on the interaction between CRC-associated bacteria and tumor cells. However, the tumor microenvironment (TME) is composed of various cell types, among which cancer-associated fibroblasts (CAFs) are one of the most important players. The interaction between CRC-associated bacteria and CAFs, and especially the impact of their cross-talk on tumor cells, remains largely unknown. In this regard, this thesis investigated the interaction between a well-described and accepted CRC-associated bacterium, Fusobacterium nucleatum, and CAFs, and their subsequent effects on tumor progression in CRC. Our findings show that F. nucleatum binds to CAFs and induces phenotypic changes. F. nucleatum prompts CAFs to secrete several pro-inflammatory cytokines and membrane-associated proteases. Upon exposure to F. nucleatum, CAFs also undergo metabolic rewiring, with higher mitochondrial ROS and lactate secretion. Importantly, F. nucleatum-treated CAFs increase the migration ability of tumor cells in vitro through secreted cytokines, among which CXCL1. Furthermore, the co-injection of F. nucleatum-treated CAFs with tumor cells in vivo leads to faster tumor growth compared to the co-injection of untreated CAFs with tumor cells. Taken together, our results show that CAFs are an important player in the gut microbiome-CRC axis. Targeting the CAF-microbiome crosstalk might represent a novel therapeutic strategy for CRC.

Exploring the Institutionalisation of Science Diplomacy: A Comparison of German and Swiss Science and Innovation Centres
Epping, Elisabeth UL

Doctoral thesis (2022)

This thesis investigates and explains the development and institutionalisation of Science and Innovation Centres (SICs) as distinct instruments of science diplomacy. SICs are a unique and underexplored instrument in the science diplomacy toolbox, and they are increasingly being adopted by highly innovative countries. This research responds to a growing interest in the field. Science diplomacy is commonly understood as a distinct governmental approach that mobilises science for wider foreign policy goals, such as improving international relations. However, science diplomacy discourse is characterised by a weak empirical basis and driven by normative perspectives. This research responds to these shortcomings and aims to lift the smokescreen of science diplomacy by providing insight into its governance while also establishing a distinctly actor-centred perspective. To this end, two distinct SICs, Germany's Deutsche Wissenschafts- und Innovationshäuser (DWIH) and Switzerland's Swissnex, are closely analysed in an original comparative and longitudinal study. While SICs are just one instrument in the governmental toolbox for promoting international collaboration and competition, they are distinct due to their holistic set-up and their role as a nucleus for the wider research and innovation system they represent. Moreover, SICs appear to have the potential to create significant impact despite their limited financial resources. This thesis takes a historical development perspective to outline how these two SICs were designed, as well as their gradual development and institutionalisation. The thesis further probes why actors participate in SICs by unpacking their differing rationales, developing a distinctly actor-centred perspective on science diplomacy.
This study was designed in an inductive and exploratory way to account for the novelty of the topic; the research findings are based on the analysis of 41 interviews and a substantial collection of documents. The study finds evidence that SICs developed as a response to wider societal trends, although these trends differed between the two case studies. Moreover, the development of SICs has been characterised by timing, contingency and critical junctures. SICs are inextricably connected to their national contexts and mirror distinct system characteristics, such as governance arrangements or the degree of actor involvement. These aspects also explain the exact shape that SICs take. Furthermore, this study finds evidence of an appropriation of SICs by key actors in line with their organisational interests. In the case of the DWIH, this affected and even limited its (potential) design and ways of operating. However, the analysis of SICs' appropriation also revealed a distinct sense of collectivity, which developed among actors in the national research and innovation ecosystem thanks to this joint instrument. The research findings reaffirm that science diplomacy is clearly driven by national interests, while further highlighting that the notion of science diplomacy and its governance (actors, rationales and instruments) can only be fully understood by analysing the national context.

Limit theorems with Malliavin calculus and Stein's method
Garino, Valentin UL

Doctoral thesis (2022)

We use recent tools from stochastic analysis (such as Stein's method and Malliavin calculus) to study the asymptotic behaviour of some functionals of a Gaussian field.

The Multi-Level System of Space Mining: Regulatory Aspects and Enforcement Options
Salmeri, Antonino UL

Doctoral thesis (2022)

Few contest that space mining holds the potential to revolutionize the space sector. The utilization of space resources can reduce the costs of deep space exploration and kick off an entirely new economy in our solar system. However, whether such a revolution happens for good or for worse also depends on the enactment of appropriate regulation. Under the right framework, space mining will be able to deliver on its promise of a new era of prosperous and sustainable space exploration. But with the wrong rules (or a lack thereof), unbalanced space resource activities could destabilize the space community on a truly unprecedented scale. With companies planning mining operations on the Moon already during this decade, the regulation of space resource activities has thus become one of the most pressing and crucial topics to be addressed by the global space community. In this context, this thesis provides a first-of-its-kind, comprehensive and innovative analysis of the regulatory and enforcement options currently shaping the multi-level governance of space mining. In addition, the thesis suggests a series of correctives that can improve the system and ensure the peaceful, rational, safe and sustainable conduct of space mining. Structurally, the thesis moves from the general to the particular and is divided into three chapters. Chapter 1 discusses the relationship between space law and international law to contextualize the specific assessment of space mining. Chapter 2 analyses the current regulatory framework applicable to space mining, considering both the international and national levels. Finally, Chapter 3 identifies potential enforcement options, assesses them in terms of effectiveness and legitimacy, and proposes some pragmatic correctives to reinforce the governance system.

Electrocaloric coolers and pyroelectric energy harvesters based on multilayer capacitors of Pb(Sc0.5Ta0.5)O3
Torelló Massana, Àlvar UL

Doctoral thesis (2022)

The following work investigates the development of heat pumps that exploit electrocaloric effects in Pb(Sc,Ta)O3 (PST) multilayer capacitors (MLCs). The electrocaloric effect refers to reversible thermal changes in a material upon application (and removal) of an electric field. Electrocaloric cooling is interesting because 1) it has the potential to be more efficient than competing technologies, such as vapour-compression systems, and 2) it does not require the use of greenhouse gases, which is crucial in order to slow down global warming and mitigate the effects of climate change. Continuous progress in the field of electrocalorics has promoted the creation of several electrocaloric-based heat pump prototypes. Despite the different designs and working principles utilized, these prototypes have struggled to maintain temperature variations as large as 10 K, discouraging their industrial development. In this work, bespoke PST-MLCs exhibiting large electrocaloric effects near room temperature were embodied in a novel heat pump with the aim of surpassing the 10 K barrier. The experimental design of the heat pump was based on the outcome of a numerical model. After implementing some of the modifications suggested by the latter, consistent temperature spans of 13 K at 30 °C were reported, with cooling powers of 12 W/kg. Additional simulations predicted temperature spans as large as 50 K and cooling powers on the order of 1000 W/kg, if a new set of plausible modifications were put in place. Similarly, these same PST-MLC samples were implemented in pyroelectric harvesters, revisiting Olsen's pioneering work from 1980. The harvested energies were found to be as large as 11.2 J, with energy densities reaching up to 4.4 J/cm3 of active material, when undergoing temperature oscillations of 100 K under applied electric fields of 140-200 kV/cm. These figures are, respectively, two and four times larger than the best values reported in the literature.
The results obtained in this dissertation are beyond the state of the art and show that 1) electrocaloric heat pumps can indeed achieve temperature spans larger than 10 K, and 2) pyroelectric harvesters can generate electrical energy in the joule range. Moreover, numerical models indicated that there is still room for improvement, especially when it comes to the power of these devices. This should encourage the development of these kinds of electrocaloric- and pyroelectric-based applications in the near future.
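As a quick consistency check on the harvesting figures quoted above, the reported total energy and energy density together imply the volume of active material involved (a rough sketch; the volume is inferred here, not stated in the abstract):

```python
# Back-of-the-envelope check on the pyroelectric harvesting figures.
# The active-material volume is inferred from the two reported numbers;
# it is not stated explicitly in the abstract.

harvested_energy_J = 11.2         # total harvested energy, J
energy_density_J_per_cm3 = 4.4    # energy density of active material, J/cm^3

# Volume of active PST-MLC material implied by the two figures
active_volume_cm3 = harvested_energy_J / energy_density_J_per_cm3

print(f"Implied active volume: {active_volume_cm3:.2f} cm^3")  # about 2.55 cm^3
```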

Reductions of algebraic numbers and Artin's conjecture on primitive roots
Sgobba, Pietro UL

Doctoral thesis (2022)

MULTI-STAGE PROCESS FOR A HIGHER FLEXIBILITY OF BIOGAS PLANTS WITH (CO-) FERMENTATION OF WASTE – OPTIMISATION AND MODELLING
Sobon-Mühlenbrock, Elena Katarzyna UL

Doctoral thesis (2022)

The European Union has been striving to become the first climate-neutral continent by 2050. This implies an intensified transition towards sustainability. The most widely used renewable energy sources are the sun and wind, which are intermittent. Thus, large fluctuating shares in the energy network are expected within the next years. Consequently, periods may occur in which energy demand and energy supply do not match, leading to destabilization of the electricity grid. Therefore, an urgency to overcome the intermittency arises. One feasible option is to use a third renewable energy source, biomass, which can be produced in a demand-oriented manner. Hence, a flexible biogas plant running in a two-stage mode, where the first stage would serve as a storage for liquid intermediates, could be a viable option to create demand-driven and need-oriented electricity. Since vast amounts of food waste are thrown away each year (in 2015 it amounted to 88 million tonnes within the EU-28, accounting for ca. 93 TWh of energy), this substrate could be energetically recovered in the above-described process. This is a promising concept, which is, however, not widely applied, as it faces many technical and economic challenges. Additionally, food waste is inhomogeneous, and its composition depends on the country and the collection season. The motivation of this work was to contribute to a broader understanding of the two-stage anaerobic digestion process using food waste as a major substrate. At first, an innovative substitute for heterogeneous food waste was introduced and examined at two different loadings and temperature modes. It was demonstrated that the Model Kitchen Waste (MKW) was comparable to real Kitchen Waste (KW), in mesophilic and thermophilic mode, for an organic loading in accordance with the guideline VDI 4630 (2016). For an "extreme" loading in mesophilic mode, the MKW generated similar biogas, methane, and volatile fatty acid (VFA) patterns as well.
Furthermore, another two MKW versions were developed, covering a variety of different organic wastes and allowing the impact of fat content on biogas production to be analyzed. Afterwards, a semi-continuous one-stage experiment of 122 days was conducted. It was followed by an extensive semi-continuous two-stage study with a runtime of almost 1.5 years. Different loadings and hydraulic retention times were investigated in order to optimize this challenging process. Additionally, the impact of co-digestion of a lignocellulose substrate was analyzed. It was concluded that the two-stage mode led to higher biogas and methane yields than the one-stage mode. However, the former posed challenges related to stability and process maintenance. Additionally, it was found that co-digestion of food waste and maize silage results in a methane yield atypical for the acidic stage. Apart from the experiments, the Anaerobic Digestion Model No. 1 (ADM1), originally developed for wastewater, was modified to suit the anaerobic digestion of food waste of different fat contents, in batch and semi-continuous mode with one and two stages. The goodness of fit was assessed by the Normalized Root Mean Square Error (NRMSE) and the coefficient of efficiency (CE). For the batch mode, two temperature modes could be properly simulated at loadings conforming and not conforming to the VDI 4630 (2016). For each mode, two different sets of parameters were introduced, namely for substrates of low fat content and for substrates of middle/high fat content (ArSo LF and ArSo MF, with LF standing for low fat and MF for middle fat). The models could be further validated in another experiment, also using co-digestion of lignocellulose substances. Further, the parameters estimated for the batch mode were applied to the semi-continuous experiment.
This proved successful; however, due to high amounts of butyrate (HBu) and valerate (HVa), the model underwent calibration so that it could better predict the acids (the model developed for the one-stage semi-continuous experiment was called ArSo M LF*). This could be validated on another semi-continuous reactor running in one-stage mode. Finally, the acidic stage of the two-stage mode was analyzed. The model applied for the one-stage mode fitted the data of the two-stage mode as far as the VFA are concerned. Nevertheless, due to the large amount of acids, it was adjusted and called ArSo M LF**.
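The goodness-of-fit statistics named in this abstract can be written out explicitly. The sketch below is illustrative only: the abstract does not spell out which normalization the NRMSE uses (normalization by the observed range is assumed here, normalization by the mean is another common convention), and the CE is taken to be the Nash-Sutcliffe coefficient of efficiency.

```python
import math

def nrmse(observed, simulated):
    """Root mean square error normalized by the observed range.
    (The normalization convention is an assumption; the thesis may
    normalize by the mean instead.)"""
    n = len(observed)
    mse = sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n
    return math.sqrt(mse) / (max(observed) - min(observed))

def coefficient_of_efficiency(observed, simulated):
    """Nash-Sutcliffe coefficient of efficiency: 1.0 is a perfect fit;
    0.0 means the model predicts no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Toy example with made-up observed vs. simulated yield values
observed = [1.0, 2.0, 3.0, 4.0]
simulated = [1.5, 2.5, 3.5, 4.5]
print(nrmse(observed, simulated))                      # ~0.167
print(coefficient_of_efficiency(observed, simulated))  # 0.8
```

Lower NRMSE and CE closer to 1.0 both indicate a better match between model output and measurements.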

Essays on Market Microstructure and Financial Markets Stability
Levin, Vladimir UL

Doctoral thesis (2022)

The present doctoral thesis consists of three main chapters, which can be considered independently. Each of the three chapters raises a research question, reviews the related literature, proposes a method for the analysis, and, finally, reports results and conclusions. Chapter 1 is entitled Dark Trading and Financial Markets Stability and is based on a working paper co-authored with Prof. Dr. Jorge Goncalves and Prof. Dr. Roman Kraussl. This paper examines how the implementation of a new dark order -- the Midpoint Extended Life Order (M-ELO) on Nasdaq -- impacts financial markets stability in terms of occurrences of mini-flash crashes in individual securities. We use high-frequency order book data and apply panel regression analysis to estimate the effect of dark order trading activity on market stability and liquidity provision. The results suggest a predominance of a speed-bump effect of M-ELO rather than a darkness effect. We find that the introduction of M-ELO increases market stability by reducing the average number of mini-flash crashes, but its impact on market quality is mixed. Chapter 2 is entitled Dark Pools and Price Discovery in Limit Order Markets and is single-authored. This paper examines how the introduction of a dark pool impacts price discovery, market quality, and the aggregate welfare of traders. I use a four-period model where rational and risk-neutral agents choose the order type and the venue, and I obtain the equilibrium numerically. The comparative statics on the order submission probability suggest a U-shaped order migration to the dark pool. The overall effect of dark trading on market quality and aggregate welfare is found to be positive but limited in size, and to depend on market conditions. I find mixed results for the process of price discovery. Depending on the immediacy needs of traders, price discovery may change due to the presence of the dark venue.
Chapter 3 is entitled Machine Learning and Market Microstructure Predictability and is also single-authored. This paper illustrates the application of machine learning to market microstructure research. I outline the most insightful microstructure measures that possess the highest predictive power and are useful for out-of-sample predictions of market features such as liquidity volatility and general market stability. By comparing the models' performance during normal times versus crisis times, I come to the conclusion that financial markets remain efficient during both periods. Additionally, I find that high-frequency traders' activity cannot accurately forecast either of these market features.

Non-Orthogonal Multiple Access for Next-Generation Satellite Systems: Flexibility Exploitation and Resource Optimization
Wang, Anyue UL

Doctoral thesis (2022)

In conventional satellite communication systems, onboard resource management follows pre-designed approaches with limited flexibility. On the one hand, this can simplify the satellite payload design. On the other hand, such limited flexibility hardly fits the scenario of irregular traffic and dynamic demands in practice. As a consequence, the efficiency of resource utilization can deteriorate, as evidenced by mismatches between offered capacity and requested traffic in practical operations. To overcome this common issue, exploiting multi-dimensional flexibilities and developing advanced resource management approaches are of importance for next-generation high-throughput satellites (HTS). Non-orthogonal multiple access (NOMA), one of the promising new radio techniques for future mobile communication systems, has proved its advantages in terrestrial communication systems. Towards future satellite systems, NOMA has received considerable attention because it can enhance power-domain flexibility in resource management and achieve higher spectral efficiency than orthogonal multiple access (OMA). From ground to space, terrestrial NOMA schemes may not be directly applicable due to distinctive features of satellite systems, e.g., channel characteristics and limited onboard capabilities. To investigate the potential synergies of NOMA in satellite systems, this dissertation enriches this line of studies. We aim to resolve the following questions: 1) How can resource management be optimized in NOMA-enabled satellite systems, and how much performance gain can NOMA bring compared to conventional schemes? 2) For complicated resource management, how can the decision-making procedure be accelerated to achieve a good tradeoff between complexity reduction and performance improvement? 3) What are the mutual impacts among multiple domains of resource optimization, and how can the underlying synergies of NOMA be boosted by exploiting flexibilities in other domains?
The main contributions of the dissertation are organized in the following four chapters: First, we design an optimization framework to enable efficient resource allocation in general NOMA-enabled multi-beam satellite systems. We investigate joint optimization of power allocation, decoding orders, and terminal-timeslot assignment to improve the max-min fairness of the offered-capacity-to-requested-traffic ratio (OCTR). To solve the mixed-integer non-convex programming (MINCP) problem, we develop an optimal fast-convergence algorithmic framework and a heuristic scheme, which outperform conventional OMA in matching capacity to demand. Second, to accelerate the decision-making procedure in resource optimization, we attempt to solve optimization problems for satellite-NOMA from a machine-learning perspective and reveal the pros and cons of learning and optimization techniques. For complicated resource optimization problems in satellite-NOMA, we introduce deep neural networks (DNN) to accelerate decision making and design learning-assisted optimization schemes to jointly optimize power allocation and terminal-timeslot assignment. The proposed learning-optimization schemes achieve a good trade-off between complexity and performance. Third, from a time-domain perspective, beam hopping (BH) is promising to mitigate the capacity-demand mismatches and inter-beam interference by selectively and sequentially illuminating suited beams over timeslots. Motivated by this, we investigate the synergy and mutual influence of NOMA and BH for satellite systems to jointly exploit power- and time-domain flexibilities. We jointly optimize power allocation, beam scheduling, and terminal-timeslot assignment to minimize the capacity-demand gap. The global optimal solution may not be achieved due to the NP-hardness of the problem. We develop a bounding scheme to tightly gauge the global optimum and propose a suboptimal algorithm to enable efficient resource assignment. 
Numerical results demonstrate the synergy of combining NOMA and BH, and their individual performance gains compared to the benchmarks. Fourth, in the spatial domain, adaptive beam patterns can adjust the beam coverage to serve irregular traffic demand and alleviate co-channel interference, motivating us to investigate joint resource optimization for satellite systems with flexibilities in the power and spatial domains. We formulate a joint optimization problem of power allocation, beam pattern selection, and terminal association, which takes the form of an MINCP. To tackle the integer variables and non-convexity, we design an algorithmic framework and a low-complexity scheme based on it. Numerical results show the advantages of jointly optimizing NOMA and beam pattern selection compared to conventional schemes. The dissertation concludes with the main findings and insights for future work.
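The max-min OCTR objective that recurs throughout these chapters is simple to state in code. The sketch below is illustrative only (the function names and the toy numbers are mine, not the thesis's): the optimizer's goal is to push up the smallest offered-capacity-to-requested-traffic ratio among terminals, so the bottleneck terminal drives the allocation.

```python
def octr(offered_capacity, requested_traffic):
    """Offered-capacity-to-requested-traffic ratio (OCTR) per terminal."""
    return [c / d for c, d in zip(offered_capacity, requested_traffic)]

def max_min_objective(offered_capacity, requested_traffic):
    """Max-min fairness objective: the worst (smallest) OCTR.
    An allocator maximizing this value improves the bottleneck
    terminal before anything else."""
    return min(octr(offered_capacity, requested_traffic))

# Toy example: three terminals, offered capacity vs. demand (Mbps)
offered = [30.0, 80.0, 50.0]
requested = [60.0, 80.0, 40.0]
print(max_min_objective(offered, requested))  # 0.5 (first terminal is the bottleneck)
```

The actual joint optimization over power, decoding orders, and terminal-timeslot assignment is of course far harder than evaluating this objective, since the offered capacities themselves depend on the allocation decisions.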

Moral Decision-Making in Video Games
Holl, Elisabeth UL

Doctoral thesis (2022)

The present dissertation focuses on moral decision-making in single-player video games. The thesis comprises four manuscripts: a theoretical book chapter (Melzer & Holl, 2021), a qualitative focus group study (Holl et al., 2020), a quantitative case study on the video game Detroit: Become Human (Holl & Melzer, 2021), and results from a large experimental laboratory study (Holl et al., 2022). With more than 2.6 billion players worldwide (Entertainment Software Association, 2018), gaming has become increasingly present in society. In addition to this growing interest, technological advances allow for more complex narratives and deeper character design. Thus, meaningful and morally laden storylines have become increasingly popular in recent years, both in popular AAA titles (e.g., Detroit: Become Human, The Last of Us 2) and smaller indie titles (e.g., Papers, Please, Undertale). At the same time, scholars have suggested that not only hedonic but also eudaimonic experiences are an essential part of (gaming) entertainment (Daneels, Bowman, et al., 2021; Oliver et al., 2015; Wirth et al., 2012). This dissertation explores in greater detail one aspect of eudaimonic gameplay, namely single-player games that feature meaningful moral decision-making. Prior research on morality and gaming has relied on a variety of theoretical concepts, such as moral disengagement (Bandura, 1990; Klimmt et al., 2008) or moral foundations and intuitions (Haidt, 2001; Haidt & Joseph, 2007; Tamborini, 2013). Thus, the first task of the dissertation was to establish a previously missing model of moral processing in video games that unifies existing theories (cf. chapter 5.13; Melzer & Holl, 2021). Furthermore, the model proposes factors (e.g., moral disengagement cues, limited cognitive capacities/time pressure) promoting or hampering moral engagement while playing, thus fostering moral versus strategic processing.
The model not only integrates relevant theoretical publications but was also designed using data collected in focus groups with frequent gamers (Holl et al., 2020). These qualitative results showed that moral gameplay is no longer a niche. Furthermore, players expressed that they deliberately chose between hedonic and eudaimonic gaming depending on their mood and motivation. Lastly, players mentioned several factors influencing their emotional and moral engagement while playing (e.g., identification, framing). To test parts of the proposed theoretical model, the game Detroit: Become Human, which has been praised for its emotional storytelling and meaningful choices (Pallavicini et al., 2020), was investigated in a case study (Holl & Melzer, 2021). Extensive coding of large-scale online data revealed that 73% of in-game decisions in Detroit: Become Human were morally relevant, with a high prevalence of situations relating to harm/care- and authority-based morality. Overall, players preferred to choose moral options over immoral options. This tendency to act "good" was even more pronounced under time pressure and when non-human characters were involved. Furthermore, behavioral variations were found depending on which character was played. To test the findings of the case study in greater detail and to gather individual data in an experimental setup, Holl et al. (2022) conducted a laboratory study. A total of 101 participants played several chapters of Detroit: Become Human featuring up to 13 moral decisions, after being randomly assigned to one of three conditions (i.e., playing a morally vs. immorally framed character vs. no framing/control). As expected, players again preferred morally sound actions. Contrary to expectations, character framing did not affect decision-making or physiological responses (i.e., heart rate variability). However, time pressure again increased the likelihood of moral decision-making.
Unfortunately, the anticipated effects of personality traits (i.e., trait moral disengagement, empathy) were inconclusive regarding both the outcome of decision-making and participants' perceived guilt after playing. In summary, the work of this dissertation further underlines the relevance of eudaimonic entertainment. Studying moral decision-making in games may provide insights for moral decision-making in general. Additionally, the presented results have the potential to defuse the heated debate over violent gaming. Novel insights are gained using a mixed-methods approach combining qualitative data with quantitative data from a large-scale case study of worldwide user behavior and an experimental setup.

mmWave Cognitive Radar: Adaptive Waveform Design and Implementation
Raei, Ehsan UL

Doctoral thesis (2022)

Grain boundaries and potassium post-deposition treatments in chalcopyrite solar cells
Martin Lanzoni, Evandro UL

Doctoral thesis (2022)

Over the last years, alkali post-deposition treatments (PDT) have been credited as the main driver for the continuous improvements in the power conversion efficiency (PCE) of Cu(In,Ga)Se2 (CIGSe) solar cells. All the alkali elements, from sodium to cesium, have shown beneficial optoelectronic effects, with many reports linking the improvements to grain boundary (GB) passivation. The most common process for alkali incorporation into the CIGSe absorber is based on the thermal evaporation of alkali fluorides in a selenium atmosphere. Besides the demonstrated improvements in performance, disentangling the individual contributions of the PDTs on the GBs, surface, and bulk is very challenging because of the many concurrent chemical reactions and diffusion processes. This thesis aims to investigate how pure metallic potassium interacts with CIGSe epitaxially grown on GaAs (100) and on multi-crystalline GaAs. Surface-sensitive Kelvin probe force microscopy (KPFM) and X-ray photoelectron spectroscopy (XPS) measurements are used to analyze, in situ, changes in workfunction and composition before and after each deposition step. Inert-gas transfer systems and ultrahigh vacuum (UHV) are used to preserve the pristine surface properties of the CIGSe. An in-depth understanding of how different KPFM operation modes and environments influence the measured workfunction is discussed in detail in this thesis. It is shown that AM-KPFM, the most common KPFM operation mode, leads to misinterpretations of the measured workfunction at GBs on rough samples. Frequency-modulation KPFM (FM-KPFM), on the other hand, turns out to be the most suitable KPFM mode to investigate GB band bending. Pure metallic potassium evaporation on CIGSe epitaxially grown on GaAs (100) leads to diffusion of K from the surface down to the CIGSe/GaAs interface even in the absence of GBs.
Evaporation of metallic K is performed using a metallic dispenser, in which the evaporation rate can be controlled to deposit a few monolayers of K. The deposition is done in UHV, and an annealing step is used to diffuse K from the surface into the bulk. Pure metallic potassium is also evaporated on CIGSe epitaxially grown on a multicrystalline GaAs substrate, where well-defined GBs are present. Negligible workfunction changes at the GBs were observed. XPS shows a strong Cu depletion after K deposition followed by annealing. Interestingly, the amount of K on the absorber surface after the K deposition and subsequent annealing is almost equal to the amount of Cu that diffused into the bulk, suggesting a 1:1 exchange mechanism and no KInSe2 secondary phase.

The European Approach to Open Science and Research Data
Paseri, Ludovica UL

Doctoral thesis (2022)

This dissertation proposes an analysis of the governance of European scientific research, focusing on the emergence of the Open Science paradigm. The paradigm of Open Science indicates a new way of doing science, oriented towards openness in every phase of the scientific research process and able to take full advantage of digital Information and Communication Technologies (ICTs). The emergence of this paradigm is relatively recent, but in the last couple of years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm, with several interventions and policies on this matter. Consider, among many, the project of the European Open Science Cloud (EOSC), a federated and trusted environment for access to and sharing of research data and services for the benefit of European researchers; or the establishment of the new research funding programme, i.e., the Horizon Europe programme, laid down in EU Regulation 2021/695, which links research funding to the adoption of the Open Science tenets. This dissertation examines the European approach to Open Science, providing a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, as well as addressing the major legal challenges that the implementation of this new paradigm is generating. To this aim, the study first investigates the notion of Open Science, in order to understand what specifically falls under the umbrella of this broad term: a definition is proposed that takes into account all its dimensions, together with an analysis of the human and fundamental rights framework in which Open Science is grounded. After that, the inquiry addresses the legal challenges related to the openness of research data, in light of the European legislative framework on Open Data.
This also requires drawing attention to the European data protection framework, analysing the impact of the General Data Protection Regulation (GDPR) on the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring the digital infrastructures that are increasingly an integral part of the scientific research process. In particular, the focus is on a specific type of computational infrastructure, namely the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed both from the European perspective, investigating the EuroHPC project, and from the local perspective, through the case study of the HPC facility of the University of Luxembourg, namely the ULHPC. This dissertation intends to underline the relevance of a legal coordination approach, between all actors and phases of the scientific research process, in order to develop and implement the Open Science paradigm while adhering to the underlying human and fundamental rights.

Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques
Solanke, Abiodun Abdullahi UL

Doctoral thesis (2022)

Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we propose conceptualizing the term "Digital Forensics AI" (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we strengthen the case for the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.

MALDI-TOF-Enabled Subtyping and Antimicrobial Screening of the Food- and Waterborne Pathogen Campylobacter
Feucherolles, Maureen UL

Doctoral thesis (2022)

For decades, antimicrobial resistance has been considered as a global long-lasting challenge. If no action is taken, antimicrobial resistance-related diseases could give a rise up to 10 million deaths each year by 2050 and 24 million people might end into extreme poverty. The ever-increasing spread and cross-transmission of drug-resistant foodborne pathogens such as Campylobacter spp. between reservoirs, such as human, animal and environment are of concern. Indeed, because of the over-exposition and overuse of antibiotics in food-producing animals, the latter could carry multidrug resistant Campylobacter that could be transmitted to humans via food sources or from direct animal contacts. One of the solutions to tackle antimicrobial resistances is the development of rapid diagnostics tests to swiftly detect resistances in routine laboratories. By detecting earlier AMR, adapted antibiotherapy might be administrated promptly shifting from empirical to evidence-based practices, conserving effectiveness of antimicrobials. The already implemented cost- and time-efficient MALDI-TOF MS in routine laboratories for the identification of microorganisms based on expressed protein profiles was successfully applied for bacterial typing and detection of specific AMR peak in a research context. In the line of developing rapid tests for diagnostics, MALDI-TOF MS appeared to be an ideal candidate for a powerful and promising “One fits-all” diagnostics tool. Therefore, the present study aimed to get more insights on the ability of MALDI-TOF MS-protein based signal to reflect the AMR and genetic diversity of Campylobacter spp. The groundwork of this research consisted into the phenotypic and genotypic characterization of a One-Health Campylobacter collection. Then, isolates were submitted to protein extraction for MALDI-TOF MS analysis. 
Firstly, mass spectra were investigated to screen AMR to different classes of antibiotics and to retrieve putative biomarkers related to already known AMR mechanisms. The second part evaluated the ability of MALDI-TOF MS to cluster mass spectra according to the genetic relatedness of isolates and compared it to reference genomic-based methods. MALDI-TOF MS protein profiles combined with machine learning displayed promising results for the prediction of Campylobacter susceptibility and resistance to ciprofloxacin and tetracycline. Additionally, MALDI-TOF MS C. jejuni protein clusters were highly concordant with conventional DNA-based typing methods, such as MLST and cgMLST, when a similarity cut-off of 94% was applied. A similar discriminatory power between 2-20 kDa expressed protein profiles and cgMLST profiles was underlined as well. Finally, putative biomarkers linked either to known or unknown AMR mechanisms or to the genetic population structure of Campylobacter were identified. Overall, a single spectrum based on bacterial expressed proteins could be used for species identification and AMR screening, and potentially as a complete pre-screening for daily surveillance, including genetic diversity and source attribution after further analysis.
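The general idea of predicting a resistance phenotype from spectral peak profiles can be sketched with a toy nearest-centroid classifier. The spectra, bin positions and intensities below are made-up placeholders, not Campylobacter data, and the thesis's actual machine-learning pipeline is more sophisticated:

```python
import math

# Toy binned spectra: m/z bin -> relative intensity (hypothetical values).
# A putative resistance-associated peak sits in bin 6.
train = [
    ({2: 0.9, 4: 0.3, 6: 0.8}, "resistant"),
    ({2: 0.8, 4: 0.4, 6: 0.7}, "resistant"),
    ({2: 0.9, 4: 0.3, 6: 0.0}, "susceptible"),
    ({2: 0.7, 4: 0.5, 6: 0.1}, "susceptible"),
]

def centroid(spectra):
    """Mean intensity per m/z bin over a list of sparse spectra."""
    bins = {b for s in spectra for b in s}
    return {b: sum(s.get(b, 0.0) for s in spectra) / len(spectra) for b in bins}

def distance(a, b):
    """Euclidean distance between two sparse spectra."""
    bins = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in bins))

centroids = {
    label: centroid([s for s, l in train if l == label])
    for label in {l for _, l in train}
}

def predict(spectrum):
    """Label a spectrum by its nearest class centroid."""
    return min(centroids, key=lambda label: distance(spectrum, centroids[label]))

print(predict({2: 0.85, 4: 0.35, 6: 0.75}))
```

The presence or absence of the discriminating peak (bin 6) dominates the distance, which mirrors how a single resistance-associated biomarker peak can drive classification.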

The Interpretation of UN Security Council Resolutions
di Gianfrancesco, Laura UL

Doctoral thesis (2022)

Methods and tools for analysis and management of risks and regulatory compliance in the healthcare sector: the Hospital at Home – HaH
Amantea, Ilaria Angela UL

Doctoral thesis (2022)

Changing or creating an organization means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural risks and legal risks. The former are related to errors that may occur during processes, while the latter are related to the compliance of processes with regulations. Managing these risks therefore implies proposing changes to the processes that lead to the desired result: an optimized process. In order to manage a company and optimize it in the best possible way, the organizational aspect, risk management and legal compliance should not only each be taken into account, but be analyzed simultaneously, with the aim of finding the right balance that satisfies them all. This is exactly the aim of this thesis: to provide methods and tools to balance these three characteristics, using ICT support to enable this type of optimization. This work is not intended to be a computer science or law thesis but an interdisciplinary one. Most of the work done so far is vertical, confined to a specific domain. The particularity and aim of this thesis is not so much to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects that are normally analyzed separately yet have an impact on, and influence, each other. Carrying out this kind of interdisciplinary analysis required the knowledge bases of both areas and the collaboration of experts in the various fields. Although the methodology described is generic and can be applied to all sectors, a particular use case was chosen to show its application. The case study considered is a new type of healthcare service that allows patients with acute disease to be hospitalized at home. This provided the possibility to perform experiments using a real hospital database.

Hybrid Artificial Intelligence to extract patterns and rules from argumentative and legal texts
Liga, Davide UL

Doctoral thesis (2022)

This thesis is composed of a selection of studies realized between 2019 and 2022, whose aim is to find working methodologies of Artificial Intelligence (AI) and Machine Learning for the detection and classification of patterns and rules in argumentative and legal texts. We define our approach as “hybrid”, since different methods have been employed combining symbolic AI (which involves “top-down” structured knowledge) and sub-symbolic AI (which involves “bottom-up” data-driven knowledge). The first group of these works was dedicated to the classification of argumentative patterns. Following the Waltonian model of argument (according to which arguments are composed of a set of premises and a conclusion) and the theory of Argumentation Schemes, this group of studies focused on the detection of argumentative evidence of support and opposition. More precisely, the aim of these first works was to show that argumentative patterns of opposition and support can be classified at fine-grained levels and without resorting to highly engineered features. To show this, we first employed methodologies based on Tree Kernel classifiers and TF-IDF. In these experiments, we explored different combinations of Tree Kernel calculation and different data structures (i.e., different tree structures). Some of these combinations employ a hybrid approach where the calculation of similarity among trees is influenced not only by the tree structures but also by a semantic layer (e.g., those using “smoothed” trees and “compositional” trees). After the encouraging results of this first phase, we explored a new methodology that was deeply changing the NLP landscape in exactly those years, fostered and promoted by actors like Google: Transfer Learning and the use of language models. These methodologies markedly improved our previous results and provided us with stronger NLP tools.
Using Transfer Learning, we were also able to perform a Sequence Labelling task for the recognition of the exact span of argumentative components (i.e., claims and premises), which is crucial to connect the sphere of natural language to the sphere of logic. The last part of this work was dedicated to showing how to use Transfer Learning for the detection of rules and deontic modalities. In this case, we explored a hybrid approach that combines structured knowledge coming from two LegalXML formats (i.e., Akoma Ntoso and LegalRuleML) with sub-symbolic knowledge coming from pre-trained (and then fine-tuned) neural architectures.
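The TF-IDF side of the early experiments can be illustrated with a minimal similarity-based labeller for support vs. opposition. The four training sentences are invented toy examples, and this sketch deliberately omits the Tree Kernel component (which operates over parse trees, not bags of words):

```python
import math
from collections import Counter

# Tiny labelled corpus of argumentative sentences (hypothetical toy examples).
docs = [
    ("this view is supported by strong evidence", "support"),
    ("the data clearly supports the claim", "support"),
    ("however this conclusion is contradicted by the facts", "opposition"),
    ("we reject the premise because it contradicts observation", "opposition"),
]

N = len(docs)
# Document frequency computed over the training corpus only.
DF = Counter(w for text, _ in docs for w in set(text.split()))

def vectorize(text):
    """Sparse TF-IDF vector with a smoothed inverse document frequency."""
    tf = Counter(text.split())
    return {w: tf[w] * math.log((1 + N) / (1 + DF[w])) for w in tf}

def cosine(a, b):
    dot = sum(a.get(w, 0.0) * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

VECS = [vectorize(text) for text, _ in docs]

def classify(sentence):
    """Assign the label of the most TF-IDF-similar training sentence."""
    q = vectorize(sentence)
    best = max(range(N), key=lambda i: cosine(q, VECS[i]))
    return docs[best][1]

print(classify("the claim is contradicted by recent observation"))
```

Rare, discriminative words (e.g. "contradicted") receive high IDF weight, so even a one-nearest-neighbour rule separates the two argumentative polarities on this toy data.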

Environmental performance assessment of an innovative modular construction concept composed of a permanent structure and flexible modular units
Rakotonjanahary, Tahiana Roland Michaël UL

Doctoral thesis (2022)

To face the challenges of global warming, the building sector is currently undergoing a noticeable revolution. Buildings are tending to consume less energy, use more renewable energy sources, be built with eco-friendly materials, and generate less waste during their construction and end-of-life stages. Yet they could be more resilient, i.e. capable of quickly responding to the housing demand, which may fluctuate in time and in space. Innovative concepts therefore need to be developed to allow buildings to expand and/or shrink. Modular buildings could be a solution combining these criteria, since they offer a faster construction process, provide better construction quality, reduce construction waste and are potentially flexible. Frames of modular units can be made of metal, timber, concrete or mixed materials, but lightweight structures do not always allow erecting high-rise buildings and generally present a higher risk of overheating and/or overcooling. To reconcile these pros and cons, a building typology called Slab was designed by a group of architects jointly with the team of the Eco-Construction for Sustainable Development (ECON4SD) research project. The Slab building is an innovative modular building concept based on plug-in architecture, composed of a permanent concrete structure into which relocatable timber modular units slot. With respect to flexibility, the Slab building was designed to adapt to any orientation and location in Luxembourg. This doctoral thesis mainly deals with the environmental performance assessment of the Slab building, but also involves the development of an energy concept for it. In this regard, the minimum required wall thicknesses of the Slab building’s modules were determined in compliance with the Luxembourg standard, although the current regulation does not yet cover flexible buildings.
In this process, two module variants were designed: the first fulfils the passive house requirements, which match the AAA energy class requirements, and the second complies with the current building code requirements, also known as the requirements for building permit application, which in principle correspond to low-energy house requirements. Calculations showed that a 40 cm wall thickness is sufficient to fulfil both requirements. The environmental performance assessment focused on the appraisal of the specific CO2 footprint, which considers on the one hand the operational energy and on the other hand the building materials. The operational energy of the modules was determined by carrying out energy balance calculations with the LuxEeB-Tool software, considering worst-case and best-case scenarios. Besides, a method was developed to estimate the space heating demand and CO2 emissions of module aggregations, which can have different configurations over time. The method proposed in this thesis was established for the Slab building but could potentially be applicable to other flexible buildings. A comparative study of the CO2 footprint considering the embodied and operational energy showed that there is no environmental benefit in having the modules comply with the passive house requirements in the worst-case scenario (window facing north and high wind exposure). A thermal comfort assessment was also done by performing dynamic thermal simulations (DTS) with the TRNSYS software, to check the necessity of active cooling. Simulations showed that with adequate solar shading and natural ventilation reinforced by window opening, the summertime overheating risk could be avoided under the normal residential use scenario for both module variants. Finally, the LCA of the Slab building consisted, on the one hand, of optimizing its life cycle and, on the other hand, of comparing its specific CO2 footprint with benchmarks.
The LCA, based on a 100-year lifetime, concluded that the total specific CO2 footprint of the Slab building for a low module occupancy rate is lower than that of the Slab building bis, a building designed on the basis of the Slab building. The latter would be built according to conventional construction methods and thereby does not provide the same level of flexibility as the Slab building. However, for a high module occupancy rate, the Slab building does not perform environmentally better than the Slab building bis. Some solutions could be proposed to further reduce the specific CO2 footprint of the Slab building, but these would impact the architectural aspect or even the functionalities of the Slab building.
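The trade-off discussed above, embodied emissions of thicker envelopes versus savings in operational energy, can be sketched with a back-of-the-envelope specific footprint calculation. All figures below are made-up placeholders, not values from the thesis:

```python
# Illustrative specific CO2 footprint over a building's reference lifetime:
# embodied emissions amortised per year, plus annual operational emissions.

def specific_co2_footprint(embodied_kg_per_m2, annual_operational_kg_per_m2, lifetime_years):
    """Total specific footprint in kg CO2-eq per m2 per year."""
    return embodied_kg_per_m2 / lifetime_years + annual_operational_kg_per_m2

# Passive-house variant: more insulation (higher embodied), lower heating demand.
passive = specific_co2_footprint(480.0, 4.0, 100)
# Low-energy variant: lighter envelope, higher heating demand.
low_energy = specific_co2_footprint(400.0, 6.5, 100)
print(passive, low_energy)
```

With these placeholder numbers the passive variant wins, but in an unfavourable scenario (north-facing, high wind exposure) a larger operational term on both sides can erase the advantage, which is the kind of comparison the thesis reports.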

Economics of Migration, Inequalities, and Culture
Maleeva, Victoria UL

Doctoral thesis (2022)

The present doctoral thesis consists of three chapters of self-contained works about the economics of migration, inequalities, and culture. In the first chapter, I introduce the thesis outline and discuss each chapter's research questions. The second chapter explores the effects of mass migration on individual attitudes towards migrants. Using several data sources on the mass migration of Ukrainians to Poland between 2014 and 2016, this chapter focuses on how a massive exogenous increase in the stock of migrant residents and migrant co-workers affects the perception of migrants. Using both an IV methodology and a difference-in-differences analysis, I test two hypotheses, labor market competition and contact theory, and find some evidence favoring the second. First, the difference-in-differences analysis shows that Poles become more welcoming to migrants in regions with more job opportunities for migrants. Second, I find that an increase in the size of the migrant group affects attitudes towards migrants positively within a group of natives with similar demographic and job-skill characteristics. The third chapter explores how poverty can be explained by marital status and gender, using the RLMS-HSE household survey. Employing longitudinal data from the Russian National Survey (RLMS-HSE) from 2004 to 2019, this research shows that divorced women exhibit lower poverty levels than divorced men. The result remains qualitatively invariant when considering a theoretical probability of divorce for married couples that takes into account the age of the partners, labor force participation, and education. A higher probability of divorce positively impacts only men's poverty level. Investigating an inter-related dynamic model of poverty and labor market participation, we find that divorced women work more than divorced men, which is why divorce hits husbands harder than wives.
In the fourth chapter of the thesis, we study the effect of past exposure to communist indoctrination at an early age (9-14 years) on a set of attitudes crucial in the communist ideology aiming to create the “new communist man/woman”. We focus on the indoctrination received by children during their pioneering years. School pupils automatically became pioneers when they reached 3rd or 4th grade. The purpose of the pioneer years was to educate Soviet children to be loyal to the ideals of communism and the Party. We use a regression discontinuity design exploiting the discontinuity in the exposure to pioneering years due to the fall of the USSR in 1991, implying a strong association that hints at causality. We find robust evidence that having been a pioneer has long-lasting effects on interpersonal trust, life satisfaction, fertility, income, and perception of one's own economic rank. Overall, these results suggest that past pioneers show a higher level of optimism than non-pioneers. Finally, we look for gender differences, because various forms of emulation campaigns were used to promote the desired virtues of the new communist woman. However, we find no evidence of an effect of exposure to communism on women. The indoctrination seems to have had more substantial effects on men.
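The difference-in-differences logic used in the migration chapter reduces to a comparison of before/after changes across treated and control groups. The sketch below uses invented attitude scores and group means only; the thesis works with survey microdata and an IV strategy on top of this:

```python
# Minimal difference-in-differences estimator on made-up attitude scores.

def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (treated post - pre) - (control post - pre), on group means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Attitude-toward-migrants scores in high- vs low-exposure regions, before/after 2014.
effect = did(
    treat_pre=[3.0, 3.2, 2.8],  treat_post=[3.6, 3.8, 3.4],
    ctrl_pre=[3.1, 2.9, 3.0],   ctrl_post=[3.2, 3.0, 3.1],
)
print(round(effect, 2))
```

Subtracting the control group's change nets out common time trends, isolating the exposure effect under the parallel-trends assumption.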

Is universal healthcare truly universal? Socioeconomic and migrant inequalities in healthcare
Paccoud, Ivana UL

Doctoral thesis (2022)

Through the principle of Universal Health Coverage, many governments across Europe and beyond seek to ensure that all people have equal access to good-quality healthcare services without facing a financial burden. Despite this, studies have highlighted persistent migrant and socio-economic inequalities in the use of healthcare services and personal health records. Understanding the complex mechanisms that produce and maintain social inequalities in the effective use of healthcare services is thus an important step towards advancing equity in healthcare. This thesis draws on Bourdieu's forms of capital (cultural, social, economic, and symbolic) to conceptualise and empirically test social inequalities related to healthcare. In doing so, it investigates the factors contributing to socioeconomic and migrant inequalities in the use, navigation and optimisation of healthcare services as well as personal health records. The three studies that make up this thesis empirically test these ideas through statistical modelling of population-based datasets as well as through the analysis of two cross-sectional surveys in Luxembourg and the Greater Region. The first study draws on the fifth wave of the Survey of Health, Ageing and Retirement in Europe (SHARE). It used cluster analysis and regression models to explain how the unequal distribution of material and non-material capitals acquired in childhood shapes health practices, leading to different levels of healthcare utilisation in later life. The results suggest that, although related, both material and non-material capitals independently contribute to health practices associated with the use of healthcare services. The second study used data from a cross-sectional survey to investigate inequalities in the navigation and optimisation of healthcare services, taking into consideration the interplay between perceived racial discrimination and socioeconomic position.
It revealed disparities between individuals born in Eastern Europe and the Global South and those born in Luxembourg, which were explained by the experience of racial discrimination. It also found that the impact of discrimination on both health service navigation and optimisation was reduced after accounting for social capital. The last study used data from a cross-sectional survey developed as part of a collaborative project (INTERREG-APPS) to examine the socioeconomic and behavioural determinants of the intention to use personal health records in the Greater Region of Luxembourg (Baumann et al., 2020). This study found that people’s desire for, and actual access to, personal electronic health records is determined by different socioeconomic factors, while educational inequalities in the intention to regularly use personal health records were explained by behavioural factors. Taken together, the findings presented in this thesis show the value of mobilising Bourdieu’s theoretical framework to understand the mechanisms through which social inequalities in healthcare develop. In addition, they show the importance of considering racial discrimination when examining migrant and racial/ethnic differences in health.

First-principles investigation of ferroelectricity and related properties of HfO2
Dutta, Sangita UL

Doctoral thesis (2022)

Nonvolatile memories are in increasing demand as the world moves toward information digitization, and ferroelectric materials offer a promising alternative for them. Since the existing perovskite materials have various flaws, including incompatibility with complementary metal-oxide-semiconductor (CMOS) processes in memory applications, the discovery of new optimized ferroelectric thin films was necessary. In 2011, the disclosure of ferroelectricity in hafnia (HfO$_2$) reignited interest in ferroelectric memory devices, because this material integrates well with CMOS technology. Although ferroelectricity in HfO$_2$ was reported a decade ago, researchers are still enthralled by this material's properties as well as its possible applications. The ferroelectricity in HfO$_{2}$ has been attributed to the orthorhombic phase with space group $Pca2_1$, believed to be a metastable phase of the system. Many experimental and theoretical research groups have joined the effort to understand the root causes of the stability of this ferroelectric phase of HfO$_{2}$ by considering the role of surface energy effects, chemical dopants, local strain and oxygen vacancies. However, the understanding has not been conclusive. In the first part of this work, we present our first-principles results, predicting a situation where the ferroelectric phase becomes the thermodynamic ground state in the presence of ordered dopants forming layers. While the main focus has been on understanding and optimizing the ferroelectricity in HfO$_{2}$, the electro-mechanical response of the system has garnered comparatively less attention. The recent discovery of the negative longitudinal piezoelectric effect in HfO$_2$ has challenged our thinking about piezoelectricity, which was molded by what we know about ferroelectric perovskites. In this work, we discuss the atomistic underpinnings of the negative longitudinal piezoelectric effect in HfO$_{2}$.
We also discuss the behavior of the longitudinal piezoelectric coefficient ($e_{33}$) under the application of epitaxial strain, where we find that $e_{33}$ changes sign even though the polarization does not switch. Aside from a basic understanding of the piezoelectric characteristics of HfO$_2$, the application aspect is also worth considering: the piezoelectric properties of the material can be tuned to meet the needs of applications. In this work, we describe our findings on how the piezoelectric characteristics of the material change as a function of isovalent dopants.
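For orientation, the quantity at stake can be stated compactly; the following is the standard definition of the longitudinal piezoelectric coefficient in common notation, not a formula taken from the thesis itself:

```latex
% Longitudinal piezoelectric coefficient: change of the polarization along the
% polar axis (P_3) with strain along the same axis (\eta_3).
e_{33} = \frac{\partial P_{3}}{\partial \eta_{3}}
% In conventional ferroelectric perovskites e_{33} > 0: P_3 grows under tensile
% strain \eta_3 > 0. A negative e_{33}, as reported for HfO2, means P_3
% decreases under the same tensile strain.
```

The sign change of $e_{33}$ under epitaxial strain reported in the thesis is thus a change in the slope of $P_3$ versus $\eta_3$, not a reversal of the polarization itself.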

Fractal dimension and point-wise properties of trajectories of fractional processes
Daw, Lara UL

Doctoral thesis (2022)

The topics of this thesis lie at the interface of probability theory with dimensional and harmonic analysis, accentuating the geometric properties of random paths of Gaussian and non-Gaussian stochastic processes. This line of research has been growing rapidly in past years, yielding clear local and global properties of random paths associated with various stochastic processes such as Brownian and fractional Brownian motion. In this thesis, we start by studying the level sets associated with fractional Brownian motion using the macroscopic Hausdorff dimension. Then, as a preliminary step, we establish some technical points regarding the distribution of the Rosenblatt process for the purpose of studying various geometric properties of its random paths. First, we obtain results concerning the Hausdorff (both classical and macroscopic), packing and intermediate dimensions, and the logarithmic and pixel densities of the image, level and sojourn time sets associated with sample paths of the Rosenblatt process. Second, we study the pointwise regularity of the generalized Rosenblatt process and prove the existence of three kinds of local behavior: slow, ordinary and rapid points. In the last chapter, we illustrate several methods to estimate the macroscopic Hausdorff dimension, which played a key role in our results. In particular, we develop potential-theoretic methods. Relying on these, we show that the macroscopic Hausdorff dimension of the projection of a set E ⊂ R^2 onto almost every straight line passing through the origin in R^2 depends only on E, that is, it is almost surely independent of the choice of line.
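For readers unfamiliar with the notion, one common formulation of the macroscopic Hausdorff dimension (after Barlow and Taylor) is sketched below; the notation is generic and not taken from the thesis, so details may differ from the conventions used there:

```latex
% Dyadic shells partitioning R^d at large scales:
S_0 = [-1,1)^d, \qquad
S_n = [-2^{n},2^{n})^d \setminus [-2^{n-1},2^{n-1})^d \quad (n \ge 1).
% Within each shell, cover E by cubes Q(x_i,r_i) of side r_i \ge 1 centred in S_n:
\nu_{\rho}^{n}(E) = \inf\Big\{ \sum_i \Big(\tfrac{r_i}{2^{n}}\Big)^{\rho} :
    E \cap S_n \subset \bigcup_i Q(x_i,r_i),\ x_i \in S_n,\ r_i \ge 1 \Big\}.
% The macroscopic Hausdorff dimension is the critical summability exponent:
\mathrm{Dim}_{\mathrm{H}}\, E = \inf\Big\{ \rho > 0 :
    \sum_{n \ge 0} \nu_{\rho}^{n}(E) < \infty \Big\}.
```

Because the covering cubes have side at least one, this dimension ignores local (microscopic) structure and measures how the set fills space at infinity, which is why it is the right tool for the level sets and projections studied here.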

Next Generation Mutation Testing: Continuous, Predictive, and ML-enabled
Ma, Wei UL

Doctoral thesis (2022)

Software has become an essential part of human life; it substantially improves production and enriches our lives. However, flaws in software can lead to tragedies, e.g. the failure of the Mariner 1 spacecraft in 1962. Modern software systems are very different from earlier ones, and the issue gets even more severe as the complexity of software systems grows and Artificial Intelligence (AI) models are integrated into software (e.g., the Tesla Deaths report). Testing such modern software systems is challenging. Due to new requirements, software systems evolve and change frequently, and AI models suffer from non-determinism. The non-determinism of AI models is related to many factors, e.g., optimization algorithms, numerical problems, the labelling threshold, data of the same object collected under different conditions, or changing the backend libraries. We have witnessed many new testing techniques emerge to guarantee the trustworthiness of modern software systems. Coverage-based testing was one early technique to test Deep Learning (DL) systems by analyzing neuron values statistically, e.g., Neuron Coverage (NC). In recent years, Mutation Testing has drawn much attention. Coverage-based testing metrics can be misleading and easily fooled by generating tests that satisfy coverage requirements merely by executing code lines: a test suite with one hundred percent coverage may detect no flaw in the software. Mutation Testing, in contrast, is a robust approach to approximating the quality of a test suite. Mutation Testing is a technique based on detecting artificial defects from many crafted code perturbations (i.e., mutants) to assess and improve the quality of a test suite. The behaviour of a mutant is likely to be located on the border between correctness and non-correctness, since the code perturbation is usually tiny.
Through mutation testing, the border behaviour of the subject under test can be explored well, which leads to high software quality. It has been generalized to test software systems integrated with DL systems, e.g., image classification systems and autonomous driving systems. However, the application of Mutation Testing encounters some obstacles. One main challenge is that Mutation Testing is resource-intensive; its large resource consumption makes it ill-suited to modern software development, where the code evolves every day. This dissertation studies how to apply Mutation Testing to modern software systems, exploring and exploiting the usages and innovations of Mutation Testing in the presence of AI algorithms. AI algorithms can improve Mutation Testing for modern software systems, and at the same time, Mutation Testing can effectively test modern software integrated with DL models. First, this dissertation adapts Mutation Testing to modern software development, i.e., Continuous Integration. Most software development teams currently employ Continuous Integration (CI) as the pipeline where changes happen frequently. Adopting Mutation Testing in Continuous Integration is problematic because of its expensive cost. At the same time, traditional Mutation Testing is not a good test metric for code changes, as it is designed for the software as a whole. We adapt Mutation Testing to test these program changes by proposing commit-relevant mutants. This type of mutant affects the changed program behaviours and represents commit-relevant test requirements. We use benchmarks in C and Java to validate our proposal. The experimental results indicate that commit-relevant mutants can effectively enhance code change testing. Second, based on the aforementioned work, we introduce MuDelta, an AI approach that identifies commit-relevant mutants, i.e., mutants that interact with the code change.
MuDelta uses manually designed features that require expert knowledge; it leverages a combined scheme of static code characteristics as data features. Our evaluation results indicate that commit-based mutation testing is suitable and promising for evolving software systems. Third, this dissertation proposes a new approach, GraphCode2Vec, to learn general software code representations. Recent works utilize natural language models to embed code into vector representations. Code embedding is a keystone in the application of machine learning to several Software Engineering (SE) tasks; its goal is to extract universal features automatically. GraphCode2Vec considers program syntax and semantics simultaneously by combining code analysis and Graph Neural Networks (GNN). We evaluate our approach on the mutation testing task and three other tasks (method name prediction, solution classification, and overfitted patch classification). GraphCode2Vec is better than or comparable to the state-of-the-art code embedding models. We also perform an ablation study and probing analysis to give insights into GraphCode2Vec. Finally, this dissertation studies Mutation Testing to select test data for deep learning systems. Since deep learning systems play an essential role in different fields, the safety of DL systems takes centre stage. Such DL systems are very different from traditional software systems, and existing testing techniques cannot guarantee the reliability of deep learning systems. It is well known that DL systems usually require extensive data for learning, so selecting data for training and testing DL systems is significant: a good dataset can help DL models perform well. There are several metrics to guide choosing data to test DL systems. We compare a set of test selection metrics for DL systems. Our results show that uncertainty-based metrics are competent in identifying misclassified data.
These metrics also improve classification accuracy faster when retraining DL systems. In summary, this dissertation shows the usage of Mutation Testing in the artificial intelligence era. The first, second and third contributions are on Mutation Testing helping modern software test in CI. The fourth contribution is a study on selecting training and testing data for DL systems. Mutation Testing is an excellent technique for testing modern software systems. At the same time, AI algorithms can alleviate the main challenges of Mutation Testing in practice by reducing the resource cost. [less ▲]
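As a concrete illustration of the kind of mutant discussed in this abstract, the sketch below shows a hand-written relational-operator mutant in Python together with a test input that kills it. This is an illustrative toy, not the dissertation's tooling (which targets C and Java benchmarks); the function names are invented for the example.

```python
# Illustrative relational-operator mutant (hypothetical example,
# not taken from the dissertation's C/Java benchmarks).

def max_index(xs):
    """Original program: index of the first largest element."""
    best = 0
    for i in range(1, len(xs)):
        if xs[i] > xs[best]:
            best = i
    return best

def max_index_mutant(xs):
    """Mutant: the operator > has been mutated to >=."""
    best = 0
    for i in range(1, len(xs)):
        if xs[i] >= xs[best]:  # <- single mutated token
            best = i
    return best

# An input with duplicate maxima exposes the mutant: the original
# returns the first maximum, the mutant returns the last one.
killing_input = [3, 7, 7, 1]
assert max_index(killing_input) == 1
assert max_index_mutant(killing_input) == 2  # behaviours differ: mutant killed
```

A commit-relevant mutation approach, as described above, would additionally restrict attention to mutants whose behaviour interacts with a code change rather than all mutants of the whole program.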

Lengths and intersections of curves on surfaces
Vo, Thi Hanh UL

Doctoral thesis (2022)

INTERROGATING INTRA-TUMORAL HETEROGENEITY AND TREATMENT RESISTANCE IN GLIOBLASTOMA PATIENT-DERIVED XENOGRAFT MODELS USING SINGLE-CELL RNA SEQUENCING
Yabo, Yahaya Abubakar UL

Doctoral thesis (2022)

Despite available treatment options, glioblastoma (GBM) has one of the poorest prognoses, resists treatment, and recurs aggressively in the majority of cases. Intra-tumoral heterogeneity and phenotypic plasticity are major factors contributing to treatment resistance and underlie tumor escape in GBM. Several therapeutic agents showing promising effects against GBM at the preclinical level have failed to translate into effective therapies for GBM patients. This is partly attributed to the inadequacy of preclinical models to fully recapitulate the complex biology of human GBM. This project aimed to characterize the transcriptomic heterogeneity and understand the dynamic GBM ecosystem in patient-derived xenograft (PDOX) models at the single-cell level. To achieve this aim, I established cell purification and cryopreservation protocols that enable the generation of high-quality single-cell RNA-seq data from PDOX models, including longitudinal and treated PDOXs. Different computational strategies were used to interrogate the transcriptomic features as well as the interactions between GBM cells and the surrounding microenvironment. This work critically analyzed and discussed key components contributing to intra-tumoral heterogeneity and phenotypic plasticity within the GBM ecosystem and their potential contributions to treatment resistance. Here, we provide evidence that PDOX models retain the histopathologic and transcriptomic features of the parental human GBMs. PDOX models were further shown to recapitulate the major tumor microenvironment (TME) components identified in human GBMs. Cells within the GBM ecosystem displayed GBM-specific transcriptomic features, indicating active TME crosstalk in PDOX models. Tumor-associated microglia/macrophages (TAMs) were heterogeneous and displayed the most prominent transcriptomic adaptations following crosstalk with GBM cells.

The myeloid cells in PDOXs and human GBM displayed a microglia-derived TAM signature. Notably, GBM-educated microglia displayed immunologic features of migration, phagocytosis, and antigen presentation, indicating a functional role of microglia in the GBM TME. Taking advantage of a cohort of longitudinal and treated PDOX models, I demonstrated the utility of PDOX models in elucidating longitudinal changes in GBM. We show that temozolomide treatment leads to transcriptomic adaptation not only of the GBM tumor cells but also of adjacent TME components. Overall, this work further highlights the importance and clinical relevance of PDOX models for testing novel therapeutics, including immunotherapies targeting TME components in GBM.

Machine Learning-Based Efficient Resource Scheduling for Future Wireless Communication Networks
Yuan, Yaxiong UL

Doctoral thesis (2022)

The next-generation mobile communication system, e.g., the 6G communication system, is envisioned to support unprecedented performance requirements such as exponentially increasing data requests, heterogeneous service demands, and massive connectivity. When these challenging tasks meet the scarcity of wireless resources, efficient resource management becomes crucial. Conventionally, optimization algorithms, either optimal or suboptimal, are the main approaches for solving resource allocation problems. However, the efficiency of these iterative optimization algorithms can degrade significantly when the problems become large or difficult, e.g., non-convex or combinatorial optimization problems. Over the past few years, machine learning (ML) has emerged as a new approach in the toolbox and is widely investigated to accelerate the decision-making process. Since applying ML-based approaches to complex resource management problems is still at an early stage of study, many open issues and challenges must be solved on the way to maturity and practical application. The motivation and objective of this dissertation lie in investigating and providing answers to the following research questions: 1) How can the shortcomings of the extensively adopted end-to-end learning be overcome in resource management problems, and which types of features are suited to be learned if supervised learning is applied? 2) What are the limitations and benefits when widely used deep reinforcement learning (DRL) approaches are applied to constrained and combinatorial optimization problems in wireless networks, and are there tailored solutions to overcome the inherent drawbacks? 3) How can ML-based approaches be enabled to adapt in a timely manner to dynamic and complex wireless environments? 4) How can the performance gains be enlarged when the paradigm shifts from centralized to distributed learning? The main contributions are organized into the following four research works.
Firstly, from a supervised-learning perspective, we address common issues, e.g., unsatisfactory prediction performance and resultant infeasible solutions, that arise when end-to-end learning approaches are applied to resource scheduling problems. Based on an analysis of optimal results, we design suited-to-learn features for a class of resource scheduling problems and develop combined learning-and-optimization approaches to enable time-efficient and energy-efficient resource scheduling in multi-antenna systems. The original optimization problems are mixed-integer programming problems with high-dimensional decision vectors, whose optimal solution requires exponential complexity due to the inherent difficulty of the problems. Towards an efficient and competitive solution, we apply a fully-connected deep neural network (DNN) and a convolutional neural network (CNN) to learn the designed features. The predicted information effectively reduces the large search space and accelerates the optimization process. Compared to conventional optimization and pure ML algorithms, the proposed method achieves a good trade-off between solution quality and complexity. Secondly, we address typical issues when DRL is adopted to deal with combinatorial and non-convex scheduling problems. The original problem is to provide energy-saving solutions via resource scheduling in energy-constrained networks. An optimal algorithm and a golden-section-search-based suboptimal approach are developed to serve as offline benchmarks. For online operation, we propose an actor-critic-based deep stochastic online scheduling (AC-DSOS) algorithm. Compared to supervised learning, DRL is suitable for dynamic environments and capable of making decisions based on the current state without an offline training phase. However, for this specific constrained scheduling problem, conventional DRL may not be able to handle two major issues: an exponentially increasing action space and infeasible actions.
The proposed AC-DSOS is developed to overcome these drawbacks. In simulations, AC-DSOS provides feasible solutions and saves more energy than conventional DRL algorithms. Compared to the offline benchmarks, AC-DSOS reduces the computational time from the order of seconds to milliseconds. Thirdly, the dissertation turns to the performance of ML-based approaches in highly dynamic and complex environments. Most ML models are trained on collected data or observed environments and may not be able to respond in a timely manner to large variations of the environment, such as dramatically fluctuating channel states or bursty data demands. In this work, we develop ML-based approaches in a time-varying satellite-terrestrial network and address two practical issues. The first is how to efficiently schedule resources to serve the massive number of connected users, such that more data can be delivered and more users served. The second is how to make the algorithmic solution more resilient in adapting to time-varying wireless environments. We propose an enhanced meta-critic learning (EMCL) algorithm, combining a DRL model with a meta-learning technique, where meta-learning acquires meta-knowledge from different tasks and adapts quickly to new tasks. The results demonstrate EMCL's effectiveness and fast-response capability in over-loaded systems and in adapting to dynamic environments, compared to previous actor-critic and meta-learning methods. Fourthly, the dissertation focuses on reducing the energy consumption of federated learning (FL) in mobile edge computing. Power supply and computation capabilities are typically limited in edge devices, so energy becomes a critical issue in FL. We propose a joint sparsification and resource optimization scheme (JSRO) to jointly reduce computational and transmission energy.
In the first part of JSRO, we introduce sparsity and adopt sparse or binary neural networks (SNNs or BNNs) as the learning model to complete the local training tasks at the devices. Compared to a fully-connected DNN, the computational operations are significantly reduced, requiring less energy and fewer transmitted data to the central node. In the second part, we develop an efficient scheduling scheme that minimizes the overall transmission energy by optimizing wireless resources and learning parameters. Within JSRO, we develop an enhanced FL algorithm, non-smoothness-and-constraints stochastic gradient descent, to handle the non-smoothness and constraints of SNNs and BNNs, with guarantees of convergence. Finally, we conclude the thesis with the main findings and insights on future research directions.
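The abstract above cites golden section search as one of its offline benchmarks. As a small illustration of that technique only, the sketch below minimises a generic unimodal one-dimensional function by shrinking a bracket at the golden ratio; the objective is an invented stand-in, not the dissertation's actual scheduling problem.

```python
import math

# Golden section search: minimise a unimodal function f on [a, b]
# by keeping two interior probes at golden-ratio positions so that
# one function value can be reused at every iteration.

def golden_section_min(f, a, b, tol=1e-6):
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: a convex energy-like objective with its minimum at x = 2
x_star = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
assert abs(x_star - 2.0) < 1e-4
```

Each iteration shrinks the bracket by the constant factor 1/phi with a single new function evaluation, which is what makes the method a cheap benchmark for one-dimensional subproblems.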

INVESTIGATING NEUROINFLAMMATION IN SPORADIC AND LRRK2-ASSOCIATED PARKINSON'S DISEASE
Badanjak, Katja UL

Doctoral thesis (2022)

Inflammatory responses are evolutionarily conserved reactions to pathogens, injury, or any form of serious perturbation of the human organism. These mechanisms evolved together with us and, although capable of some adaptation, innate responses are gravely affected by prolonged human lifespan. Better sanitary measures, health systems, and food and medicine supply have extended human life expectancy to ~72 years. Aging is characterized by prolonged, chronic (often low-grade) inflammation. As tissue and cellular defense mechanisms become dysfunctional over time, this inflammation becomes detrimental and destructive to the human body. Aging is a major risk factor for Parkinson's disease (PD), a movement disorder characterized by the loss of dopaminergic neurons. Even though the disease is predominantly idiopathic, genetic cases are contributing to a better understanding of the underlying cellular and neuropathological mechanisms. In comparison to neuronal demise, the contribution of microglia (the immune cells of the brain) to PD is relatively understudied. While microglia were initially studied in PD patient-derived post-mortem tissue, novel in vitro technologies, such as induced pluripotent stem cells (iPSCs), now permit the generation of specific cell types of interest to study disease mechanisms. We derived microglia from iPSCs of patients and healthy or isogenic controls to explore (shared) pathological immune responses in LRRK2-PD and idiopathic PD. Our findings suggest a significant involvement of microglia in the pathogenesis of PD and highlight potential therapeutic targets for alleviating overactive immune responses.

Cognitive Pain Modulation in Young and Older Adults: Understanding the Role of Individual Differences in Frontal Functioning
Rischer, Katharina Miriam UL

Doctoral thesis (2022)

Cognitive pain modulation is integral to our quality of life and deeply interwoven with the success of pain treatments, but it is also characterized by large interindividual variation. Emerging evidence suggests that one of the driving factors behind this variation is individual differences in frontal functioning. Further evidence indicates that pain-related cognitions, and possibly also emotional distress, may influence the efficacy of pain modulation. The central aim of this project was to assess the role of individual differences in frontal functioning in cognitive pain modulation, with a specific focus on older adults. In this context, we also wanted to assess whether individual differences in frontal functions could explain conflicting previous results on age-related changes in the efficacy of cognitive pain modulation. In addition, we wanted to address the role of negative pain-related mindsets and emotional distress in the efficacy of cognitive pain modulation. We tested these research questions across four studies using two principal paradigms of cognitive pain modulation, namely distraction from pain and placebo analgesia. In Study I, we assessed the role of individual differences in executive functions, emotional distress, and pain-related cognitions in modulating heat pain thresholds in healthy young adults in virtual reality environments with different levels of cognitive load. We found that emotional distress and visuo-spatial short-term memory significantly predicted how participants responded to the low- versus high-load environment. In Study II, we investigated the role of different forms of cognitive inhibition abilities and negative pain-related cognitions in modulating the efficacy of distraction from (heat) pain by cognitive demand in healthy young adults.
We found a significant influence of better cognitive inhibition and selective attention abilities on the size of the distraction effect; however, this association was moderated by the participant’s level of pain catastrophizing, i.e., high pain catastrophizers showed an especially strong association. In Study III, we tested potential age-related differences in distraction from pain in a group of young and older adults while simultaneously acquiring functional brain images. We found no age-related changes at the behavioural level, but a slightly reduced neural distraction effect in older adults. The neural distraction effect size in older adults was furthermore significantly positively related to better cognitive inhibition abilities. In Study IV, we explored potential age-related differences in placebo analgesia in a group of young and older adults (who were partly re-recruited from Study III) while recording their brain activity with an electroencephalogram. Results revealed no age-related differences in the magnitude of the behavioural or electrophysiological placebo response, but older adults showed a neural signature of the placebo effect that was distinct from young participants. Regression analyses revealed that executive functions that showed an age-related decline (as established via group comparisons) were significant predictors of the behavioural placebo response. We furthermore found that better executive functions significantly moderated the association between age group and placebo response magnitude: older adults with better executive functions showed a larger placebo response than young adults whereas worse executive functions were associated with a smaller placebo response, possibly explaining why we found no significant difference at the group level. In summary, all studies provide converging evidence that differences in cognitive functions can significantly affect the efficacy of cognitive pain modulation. 
Although older adults showed a significant decline in most of the cognitive functions we assessed, we found no systematic reduction in the efficacy of cognitive pain modulation (except for a slight reduction in the neural distraction effect size). Closer inspection of the data revealed that older adults may have engaged compensatory mechanisms that enabled them to experience the same (or even greater) pain relief as younger adults. We furthermore found evidence for the notion that pain-related cognitions and emotional distress may affect how individuals respond to cognitive pain modulation, although this association was less systematic than for cognitive functions. Overall, the present thesis adds to the emerging body of evidence highlighting the importance of executive functions, as indicators of frontal functioning, in cognitive pain modulation.

Understanding and explaining cross-border mobility: a free will / predisposition approach
Nonnenmacher, Lucas UL

Doctoral thesis (2022)

This dissertation investigates the drivers of cross-border mobility from a multidisciplinary perspective. Both qualitative and quantitative methodologies are used to understand and explain why workers cross borders. The major contribution of this dissertation is to highlight new determinants of cross-border mobility, such as previous migration experience and health state, drivers that have so far been disregarded in the literature. Moreover, this dissertation validates workers' motivations as a relevant driver of cross-border mobility and provides a state of play of the situation of cross-border workers in Europe, with a specific focus on French cross-border workers. Firstly, this dissertation reviews the explanations of cross-border mobility in the existing literature. Secondly, it analyses the subjective drivers of cross-border mobility using a qualitative dataset of 30 interviews with French workers in Luxembourg collected between January 2018 and May 2019. Results highlight that cross-border workers motivate their decision to commute abroad with financial, professional and personal reasons, and that these motivations vary with their socioeconomic profile. Based on these empirical findings, a model of cross-border labour supply was designed. Thirdly, this dissertation assesses the association between migration capital and cross-border mobility using the French part of the European Labour Force Survey, the Enquête Emploi, between 2010 and 2018. Results indicate that migrants commute abroad more than non-migrants and are also more likely to do so. The children of migrants are more likely to commute abroad, suggesting that the capacity to deal with distance and borders can be transmitted across generations.

Migration capital is a relevant predictor of commuting behaviour: the higher the capital endowment, the higher the likelihood of commuting abroad. Additional findings can be mentioned. Internal migration does not increase the likelihood of commuting abroad, and acquired migration experience is more useful than inherited migration experience for engaging in cross-border mobility. Fourthly, this dissertation examines health disparities between cross-border and non-cross-border workers using the Enquête Emploi between 2013 and 2018. Results suggest a healthy cross-border phenomenon, the existence of major health disparities among cross-border workers, and the rejection of the spillover phenomenon for this specific population. Finally, this dissertation concludes that cross-border mobility is a complex phenomenon that remains only partially explained, probably because of the lack of harmonised datasets about cross-border workers within the EU. Further research on cross-border mobility is needed to better understand this population, especially in public health, where everything remains to be done.

Characterization of the surface properties of polycrystalline Cu(In,Ga)Se2 using a combination of scanning probe microscopy and X-ray photoelectron spectroscopy
Kameni Boumenou, Christian UL

Doctoral thesis (2022)

Polycrystalline Cu(In,Ga)Se2 (CIGSe) exhibits excellent properties for high-power-conversion-efficiency (PCE) thin-film solar cells. In recent years, photovoltaic cells made from CIGSe reached a PCE of 23.4%, surpassing that of multicrystalline silicon photovoltaic cells. Nevertheless, the changes in surface composition and electronic properties of the absorbers after various solution-based surface treatments are still under intensive investigation and are widely discussed in the literature. In this thesis, the front and rear surface properties, as well as the impact of post-deposition treatments (PDT), of CIGSe absorbers with different elemental compositions were analyzed by scanning tunneling microscopy and spectroscopy, Kelvin probe force microscopy, and X-ray photoelectron spectroscopy. I show that potassium cyanide (KCN) etching substantially reduces the Cu content at the surface of Cu-rich absorbers. This reduction of the Cu content is accompanied by the formation of a large number of defects at the surface. Scanning tunneling spectroscopy measurements showed that most of these defects could be passivated with Cd ions. A semiconducting surface and no changes in the density of states were measured across the grain boundaries. In addition to the defect passivation, an increase in surface band bending was observed, due to the substitution of Cu vacancies by Cd ions, which act as shallow donor defects. As for the front surface, the analyses carried out on the back surface of Cu-rich absorbers showed that a detrimental CuxSe secondary phase also formed at the interface between the MoSe2 layer and the CISe absorber after growth. This CuxSe secondary phase at the back contact was not present in Cu-poor absorbers. Regarding the alkali-metal post-treated absorbers, I show that the enlarged surface bandgap often reported on CIGSe absorbers after PDT is only present after H2O rinsing.

After ammonia (NH4OH) washing, which is always applied before buffer layer deposition, all the high-bandgap precipitates disappeared and an increased amount of an ordered vacancy compound was observed. The thesis thereby gives a comprehensive overview of CIGSe surfaces after various chemical and post-deposition treatments.

When does finance win? A set-theoretic analysis of the conditions of European financial interest groups' lobbying success on post-crisis bank capital requirements
Commain, Sébastien Romain Jean-Louis UL

Doctoral thesis (2022)

Acknowledging the failure of the existing regulatory framework after the global financial crisis of 2008, world leaders vowed to reform financial regulation to strengthen stability and restore trust. The reform of bank capital requirements was a major item on this agenda: the Group of Twenty (G20) entrusted the reform to the Basel Committee on Banking Supervision (BCBS), whose so-called "Basel framework" constitutes the global standard for the prudential regulation of banking activities. While scholars have highlighted the important concessions that were made to financial interests in this reform, a series of demanding new policy tools, strongly opposed by financial industry representatives, were also introduced into the new Basel III framework. This dissertation explores this empirical puzzle and seeks to identify under what conditions European financial interests' lobbying on the reform of capital requirements was successful, and whether these successes constitute cases of interest group influence. Defining influence as a situation where a proposed reform evolves during the decision-making process (policy shift) in the direction advocated by an actor (lobbying success), and where that evolution is caused by the actor's lobbying activity vis-à-vis the proposed reform (causal path), this dissertation treats influence as a multilevel concept, present if and only if all three of its components (policy shift, lobbying success and a causal path) are present. In other words, policy shift, lobbying success and causal path are the three individually necessary and jointly sufficient conditions for influence, which this study investigates in turn in the case of post-crisis bank capital requirements.
The presence or absence of a policy shift is assessed qualitatively by comparing, for twenty-nine policy issues contained in the Basel III framework, the initial BCBS reform proposals with the rules finally enacted at the international and European levels. The positions of financial and non-financial interest groups on each of these twenty-nine issues are then established, through a quantitative text analysis of the position papers submitted by interest groups to BCBS and European Commission consultations on Basel III and the CRD and CRR, to determine whether the identified policy shift on a given issue constitutes a case of lobbying success for the interest group. Finally, using fuzzy-set Qualitative Comparative Analysis (fsQCA) to compare in a systematic manner cases in which success is observed and cases where it is absent, I uncover the configurations of conditions sufficient to produce successful lobbying and those sufficient to produce the absence of success, configurations which I then interpret in terms of causal mechanisms. Strong collective action, in several forms, is found to underpin the causal mechanisms producing successful lobbying. The observed sufficient configurations of conditions suggest, however, that the causal mechanisms producing success also include key contextual factors that are beyond the control of financial interest groups; the absence of these enabling contextual factors is shown, conversely, to lead to the absence of success. This dissertation contributes to the existing academic literature in several ways. Empirically, first, it adds to the scholarship on bank capital requirements at the international and European level, using novel data to reassess, after the completion of the Basel III reform, the extent to which the final framework meets the initial ambitions.
Methodologically, second, this dissertation employs a range of new methods and techniques to take on the challenges of measuring lobbying success and identifying multiple pathways to influence, two fundamental issues for empirical studies of interest group influence. Theoretically, third, the combinatorial approach used here to explore conditions of lobbying success permits an examination of multiple conjunctural causation patterns in interest group influence.
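The multilevel concept of influence described in this abstract, three individually necessary and jointly sufficient conditions, maps directly onto the fuzzy-set logic underlying fsQCA, where conjunction is scored with the minimum membership. The toy sketch below uses invented membership scores for hypothetical issues, not the dissertation's data.

```python
# Toy fuzzy-set conjunction in the spirit of fsQCA (invented
# membership scores for hypothetical issues, not the author's data).

cases = {
    # issue: (policy_shift, lobbying_success, causal_path), each in [0, 1]
    "issue_A": (0.9, 0.8, 0.7),
    "issue_B": (0.6, 0.2, 0.9),
    "issue_C": (1.0, 0.9, 0.8),
}

def influence(memberships):
    """Three individually necessary, jointly sufficient conditions:
    the fuzzy intersection (logical AND) is the minimum membership."""
    return min(memberships)

scores = {issue: influence(m) for issue, m in cases.items()}
assert scores["issue_A"] == 0.7  # capped by the weakest condition
assert scores["issue_B"] == 0.2  # weak lobbying success blocks influence
```

The minimum rule captures why the absence of any single component (here, lobbying success for issue_B) caps the membership of the case in the outcome set.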

SYSTEMS METHODS FOR ANALYSIS OF HETEROGENEOUS GLIOBLASTOMA DATASETS TOWARDS ELUCIDATION OF INTER-TUMOURAL RESISTANCE PATHWAYS AND NEW THERAPEUTIC TARGETS
Tching Chi Yen, Romain Mana Hiao Woun UL

Doctoral thesis (2022)

This PhD thesis describes an endeavour to compile the literature on key molecular mechanisms of glioblastoma into a directed network following Disease Maps standards, analyse its topology, and compare the results with quantitative analyses of multi-omics datasets in order to investigate glioblastoma resistance mechanisms. The work also included the implementation of data management good practices and procedures.

Investigation in reusable composite flooring systems in steel and concrete based on composite behaviour by friction
Fodor, Jovan UL

Doctoral thesis (2022)

The steel-concrete composite systems proved to be very efficient structural solution in terms of material consumption and mechanical response regarding the construction of the structural floor systems whether in the case of industrial and residential buildings or especially in the case of car parks. However, their contemporary application that relies on the utilization of the welded headed studs as a mean to provide the shear connection between the steel section and the concrete chord renders the system unable to be disassembled (in the best case its steel and concrete parts are recycled). Considering the ongoing push from the linear to circular economical models and the application of 3R principle (Reduce, Reuse & Recycle) such systems are unable to furtherly improve their environmental and economic efficiency through reuse schemes. The profound task in this research is development and the verification of the new demountable shear connector solutions that could allow modularity and demountability (hence reusability) of the steel-concrete composite floor systems while retaining their inherent structural advantages. Based on the previous investigations of the demountable shear connector systems (at the first-place bolted solutions) and investigations of mechanical components that were not strictly related to the shear connectors, four demountable shear connector devices were developed. Having in mind the drawbacks of the earlier solutions, adequate detailing and structural measures were applied and the ease of assembly and disassembly was proved on the constructed prototypes. Afterwards, the mechanical properties of devised demountable connector systems were investigated thoroughly through experimental campaign (push tests) and numerical investigation. 
Based on the experimental and numerical results of the shear connector behaviour, it is concluded that the proposed shear connector device Type B possesses adequate strength and stiffness and may be considered ductile in accordance with EN 1994-1-1, allowing the application of existing design strategies from the same design code. The force-slip behaviour of the proposed shear connector is explained and an adequate analytical model is proposed. Based on the force-slip behaviour model, the applicability of the shear connector is verified on a range of composite beams representing the demountable floor.

Smart cloud collocation: a unified workflow from CAD to enhanced solutions
Jacquemin, Thibault Augustin Marie

Doctoral thesis (2022)

Computer Aided Design (CAD) software packages are used in industry to design mechanical systems. Calculations are then often performed using simulation software packages to improve the quality of the design. To reduce development costs, companies and research centers have been trying to ease the integration of the computation phase into the design phase. Collocation methods have the potential to ease such integration thanks to their meshless nature: the geometry discretization step, a key element of every computational method, is simplified compared to mesh-based methods such as the finite element method. We propose in this thesis a unified workflow that allows the solution of engineering problems defined by partial differential equations (PDEs) directly from input CAD files. The scheme is based on point collocation methods and on proposed techniques to enhance the solution. We introduce the idea of “smart clouds”: point cloud discretizations that are aware of the exact CAD geometry, appropriate for solving a defined problem using a point collocation method, and containing information used to improve the solution locally. We introduce a unified node selection algorithm based on a generalization of the visibility criterion. The proposed algorithm leads to a significant reduction of the error for concave problems and has no drawback for convex problems. Point collocation methods rely on many parameters. We select in this thesis parameters for the Generalized Finite Difference (GFD) method and the Discretization-Corrected Particle Strength Exchange (DC PSE) method that we deem appropriate for most problems in the field of linear elasticity. We also show that solution improvement techniques based on the use of Voronoi diagrams or on a stabilization of the PDE do not lead to a reduction of the error for all of the considered benchmark problems. These methods should therefore be used with care.
We propose two types of a posteriori error indicators, a ZZ-type and a residual-type indicator, that both succeed in identifying the areas of the domain where the error is greatest. We couple these indicators to an h-adaptive refinement scheme and show that the approach is effective. Finally, we show the performance of Algebraic Multigrid (AMG) preconditioners in the solution of linear systems compared to other preconditioning/solution methods. This family of preconditioners necessitates the selection of a large number of parameters, and we assess the impact of some of them on the solution time for a 3D problem from the field of linear elasticity. Despite the performance of AMG preconditioners, ILU preconditioners may be preferred thanks to their ease of use and their robustness in leading to a converged solution.
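The preconditioning comparison can be pictured with a small experiment using SciPy's incomplete-LU factorization as a GMRES preconditioner on a 2D Poisson model problem. The matrix, drop tolerance and fill factor below are illustrative choices, not the thesis's 3D elasticity setup or its AMG configuration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

def poisson2d(n):
    """Sparse 5-point finite-difference Laplacian on an n x n grid."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

A = poisson2d(16)
b = np.ones(A.shape[0])

def gmres_iters(M=None):
    """Solve A x = b with (optionally preconditioned) full GMRES,
    counting inner iterations via the residual-norm callback."""
    res = []
    x, info = gmres(A, b, M=M, restart=A.shape[0],
                    callback=lambda r: res.append(r), callback_type="pr_norm")
    assert info == 0, "GMRES did not converge"
    return x, len(res)

x_plain, it_plain = gmres_iters()

# ILU factorisation wrapped as a preconditioner operator
ilu = spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
x_ilu, it_ilu = gmres_iters(LinearOperator(A.shape, ilu.solve))
```

The preconditioned run should need far fewer iterations; the trade-off, as the abstract notes for AMG, is that factorisation parameters (`drop_tol`, `fill_factor`) must themselves be tuned.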

Towards a Unified and Robust Data-Driven Approach. A Digital Transformation of Production Plants in the Age of Industry 4.0
Benedick, Paul-Lou

Doctoral thesis (2022)

Nowadays, industrial companies are engaging in a global transition toward the fourth industrial revolution (the so-called Industry 4.0). The main objective is to increase the Overall Equipment Effectiveness (OEE) by collecting, storing and analyzing production data. Several challenges have to be tackled to propose a unified data-driven approach to rely on, from low-level data collection on the machine production lines using Operational Technologies (OT), to the monitoring and, more importantly, the analysis of the data using Information Technologies (IT). This is all the more important for companies with decades of existence – such as Cebi Luxembourg S.A., our partner in a Research, Development and Innovation project subsidised by the Ministry of the Economy in Luxembourg – that need to upgrade their on-site technologies and move towards new business models. Artificial Intelligence (AI) is now attracting real interest from industrial actors and is becoming a cornerstone technology for helping humans in decision-making and data-analysis tasks, thanks to the huge amount of (sensor-based) univariate time series available on the production floor. However, such an amount of data is not sufficient for AI to work properly and to make the right decisions; good data quality is also required. Indeed, good theoretical performance and high accuracy can be obtained when models are trained and tested in isolation, but AI models may still provide degraded performance in real industrial conditions. In that context, the problem is twofold: • Industrial production systems are vertically-oriented closed systems, which makes their communication and cooperation with each other, and intrinsically the data collection, difficult. • Industrial companies are used to implementing deterministic processes. Introducing AI (which can be classified as stochastic) in industry requires a full understanding of the potential deviation of the models in order to be aware of their domain of validity.
This dissertation proposes a unified strategy for digitizing an industrial system and methods for evaluating the performance and robustness of AI models that can be used in such data-driven production plants. In the first part of the dissertation, we propose a three-step strategy to digitize an industrial system, called TRIDENT, that enables industrial actors to implement data collection on production lines and, ultimately, to monitor the production plant in real time. This strategy has been implemented and evaluated in a pilot case study at Cebi Luxembourg S.A. Three protocols (OPC-UA, MQTT and O-MI/O-DF) are used to investigate their impact on real-time performance. The results show that, even if these protocols show some disparity in terms of performance, they are all suitable for an industrial deployment. This strategy has since been extended and implemented by our partner, Cebi Luxembourg S.A., in its production environment. In the second part of the dissertation, we investigate the robustness of AI models in industrial settings and propose a systematic approach to evaluating robustness under perturbations. Assuming that (i) real perturbations, in particular on the data collection, cannot be recorded or generated in a real industrial environment (as that could lead to production stops) and (ii) a model should not be deployed before evaluating its potential deviations, limits or weaknesses, our approach is based on artificial injection of perturbations into the data sets and is evaluated on state-of-the-art classifiers (both Machine Learning and Deep Learning) and data sets (in particular, public sensor-based univariate time series). First, we propose a coarse-grained study with two artificial perturbations, called the swapping effect and the dropping effect, in which simple random algorithms are used. This already highlights a great disparity in the models’ robustness under such perturbations that industrial actors need to be aware of.
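The two coarse-grained perturbations can be pictured with simple random re-implementations. The function names and parameters below are our own guesses at the spirit of the "swapping" and "dropping" effects, not the thesis's exact definitions.

```python
import numpy as np

def swap_effect(ts, n_swaps, width, rng):
    """Swap pairs of randomly chosen, equally sized segments of the
    series (hypothetical re-implementation of the 'swapping effect',
    mimicking samples arriving out of order)."""
    ts = ts.copy()
    for _ in range(n_swaps):
        i, j = rng.integers(0, len(ts) - width, size=2)
        ts[i:i+width], ts[j:j+width] = ts[j:j+width].copy(), ts[i:i+width].copy()
    return ts

def drop_effect(ts, n_drops, width, rng):
    """Hold the last valid value over randomly chosen windows, mimicking
    sensor read-outs lost during data collection (the 'dropping effect')."""
    ts = ts.copy()
    for _ in range(n_drops):
        i = rng.integers(1, len(ts) - width)
        ts[i:i+width] = ts[i-1]
    return ts

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 8.0 * np.pi, 500))
swapped = swap_effect(clean, n_swaps=5, width=20, rng=rng)
dropped = drop_effect(clean, n_drops=5, width=20, rng=rng)
```

A robustness study would then feed such perturbed series to a trained classifier and track the accuracy drop as `n_swaps`/`n_drops` and `width` grow.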
Second, we propose a fine-grained study where, instead of randomly testing some parameter values, we use Genetic Algorithms to search for the models’ limits. To do so, we define a multi-objective optimisation problem whose fitness function maximises the impact of the perturbations (i.e. decreases the model’s accuracy the most) while minimising the changes to the time series (with regard to our two parameters). This can be seen as an adversarial setting, where the goal is not to exploit these weaknesses maliciously but to be aware of them. Based on such a study, methods for making the model more robust and/or for observing such behaviour on the infrastructure could be investigated and implemented if needed. The tool developed in this latter study is therefore ready to be used in a real industrial case, where data sets and perturbations can be fitted to the scenario.
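A scalarised stand-in for such a fitness function, together with a minimal GA loop, might look as follows. The weighting, the toy accuracy model and the evolutionary operators are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def fitness(accuracy_clean, accuracy_perturbed, change_fraction, w=0.5):
    """Scalarised stand-in for a multi-objective fitness: reward accuracy
    degradation, penalise the amount of change made to the series
    (both terms in [0, 1]; the weight w is illustrative)."""
    damage = accuracy_clean - accuracy_perturbed
    return w * damage - (1.0 - w) * change_fraction

def evolve(pop, eval_fn, rng, generations=30, mut_sigma=0.1):
    """Minimal GA over real-valued perturbation parameters in [0, 1]:
    truncation selection (keep the best half) plus Gaussian mutation."""
    for _ in range(generations):
        scores = np.array([eval_fn(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-len(pop) // 2:]]   # elitist half
        children = np.clip(parents + rng.normal(0.0, mut_sigma, parents.shape),
                           0.0, 1.0)
        pop = np.vstack([parents, children])
    return pop[np.argmax([eval_fn(ind) for ind in pop])]

# toy landscape: degradation grows with perturbation strength p[0], while
# the change fraction is p[0] itself, so the optimum balances both objectives
rng = np.random.default_rng(1)
eval_fn = lambda p: fitness(0.95, 0.95 - 0.6 * p[0] ** 0.5, p[0])
best = evolve(rng.random((20, 1)), eval_fn, rng)
```

In the real setting `eval_fn` would perturb the test set (e.g. with the swapping/dropping parameters) and re-score the trained model instead of using a closed-form toy.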

User Experience Design for Cybersecurity & Privacy: addressing user misperceptions of system security and privacy
Stojkovski, Borce

Doctoral thesis (2022)

The increasing magnitude and sophistication of malicious cyber activities by various threat actors pose major risks to our increasingly digitized and interconnected societies. However, threats can also come from non-malicious users who are assigned overly complex security or privacy-related tasks, who are not motivated to comply with security policies, or who lack the capability to make good security decisions. This thesis posits that UX design methods and practices are necessary to complement security and privacy engineering practices in order to (1) identify and address user misperceptions of system security and privacy; and (2) inform the design of secure systems that are useful and appealing from end users’ perspective. The first research objective of this thesis is to provide new empirical accounts of UX aspects in three distinct contexts that encompass security and privacy considerations, namely: cyber threat intelligence, secure and private communication, and digital health technology. The second objective is to contribute empirically to the growing research domain of mental models in security and privacy by investigating user perceptions and misperceptions in the aforementioned contexts. Our third objective is to explore and propose methodological approaches to incorporating users’ perceptions and misperceptions in the socio-technical security analysis of systems. Qualitative and quantitative user research methods with experts as well as end users of the applications and systems under investigation were used to achieve the first two objectives. To achieve the third objective, we also employed simulation and computational methods.

Cyber Threat Intelligence: CTI sharing platforms. Reporting on a number of user studies conducted over a period of two years, this thesis offers a unique contribution towards understanding the constraining and enabling factors of security information sharing within one of the leading CTI sharing platforms, called MISP.
Further, we propose a conceptual workflow and toolchain that seek to detect user (mis)perceptions of key tasks in the context of CTI sharing, such as verifying whether users have an accurate comprehension of how far information travels when shared on a CTI sharing platform, and discuss the benefits of our socio-technical approach as a potential security analysis, simulation, or educational/training support tool.

Secure & Private Communication: Secure Email. We propose and describe multi-layered user journeys, a conceptual framework that captures the interaction of a user with a system as she pursues certain goals, along with the associated user beliefs and perceptions about specific security- or privacy-related aspects of that system. We instantiate the framework within a use case, a recently introduced secure email system called p≡p, and demonstrate how the approach can be used to detect misperceptions of security and privacy by comparing user opinions and behavior against system values and the objective technical guarantees offered by the system. We further present two sets of user studies focusing on the usability and effectiveness of p≡p’s security and privacy indicators and their traffic-light-inspired metaphor for representing different privacy states and guarantees.

Digital Health Technology: Contact Tracing Apps. Considering human factors when exploring the adoption as well as the security and privacy aspects of COVID-19 contact tracing apps is a timely societal challenge, as the effectiveness and utility of these apps highly depend on their widespread adoption by the general population. We present the findings of eight focus groups on the factors that impact people’s decisions to adopt, or not to adopt, a contact tracing app, conducted with participants living in France and Germany.
We report how our participants perceived the benefits, drawbacks, and threat model of the contact tracing apps in their respective countries, and discuss the similarities and differences between and within the study groups. Finally, we consolidate the findings from these studies and discuss future challenges and directions for UX design methods and practices in cybersecurity and digital privacy.

Data Analysis for Insurance: Recommendation System Based on a Multivariate Hawkes Process
Lesage, Laurent

Doctoral thesis (2022)

The objective of this thesis is to build a recommendation system for insurance. Observation of customers' behaviour and evolution in the insurance context suggests that they modify their insurance cover when a significant event happens in their life. In order to take into account the influence of life events (e.g. marriage, birth, change of job) on customers' selection of insurance cover, we model the recommendation system with a Multivariate Hawkes Process (MHP), which includes several specific features aimed at computing relevant recommendations for customers of a Luxembourgish insurance company. Several of these features are intended to propose a personalized background intensity for each customer thanks to a Machine Learning model, to use triggering functions suited to insurance data, and to overcome flaws in real-world data by adding a specific penalization term to the objective function. We define a complete framework for Multivariate Hawkes Processes with a Gamma density excitation function (i.e. estimation, simulation, goodness-of-fit) and we demonstrate some mathematical properties (i.e. expectation, variance) of the transient regime of the process. Our recommendation system has been back-tested over a full year. Observations from model parameters and results from this back-test show that taking life events into account through a Multivariate Hawkes Process allows us to improve the accuracy of recommendations significantly. The thesis is presented in five chapters. Chapter 1 explains how the background intensity of the Multivariate Hawkes Process is computed thanks to a Machine Learning algorithm, so that each customer has a personalized recommendation. Chapter 1 is an extended version of the method presented in [1], in which the method is used to make the algorithm explainable.
Chapter 2 presents a Multivariate Hawkes Process framework for computing the dependency between the propensity to accept a recommendation and the occurrence of life events: definitions, notations, simulation, estimation, properties, etc. Chapter 3 presents several results of the recommendation system: estimated parameters of the model, effects of the contributions, back-testing of the model’s accuracy, etc. Chapter 4 presents the implementation of our work as an R package. Chapter 5 concludes on the contributions and perspectives opened by the thesis.
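As a sketch of the simulation component of such a framework, the snippet below draws a univariate Hawkes process with a Gamma-density excitation using Ogata's thinning algorithm. This is a simplification of the multivariate case treated in the thesis, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def simulate_hawkes_gamma(mu, alpha, k, theta, T, seed=0):
    """Ogata thinning for a univariate Hawkes process whose excitation is
    alpha times a Gamma(k, theta) density (alpha < 1 is the branching
    ratio). A crude but valid intensity bound is used: baseline plus
    alpha * (number of past events) * (maximum of the Gamma pdf)."""
    if k < 1:
        raise ValueError("the bound below assumes a Gamma shape k >= 1")
    rng = np.random.default_rng(seed)
    pdf_max = gamma.pdf((k - 1.0) * theta, a=k, scale=theta)  # pdf at its mode
    events, t = [], 0.0
    while True:
        lam_bar = mu + alpha * len(events) * pdf_max          # dominating rate
        t += rng.exponential(1.0 / lam_bar)                   # candidate point
        if t >= T:
            break
        lam_t = mu + alpha * gamma.pdf(t - np.array(events), a=k, scale=theta).sum()
        if rng.random() < lam_t / lam_bar:                    # thinning step
            events.append(t)
    return np.array(events)

ts = simulate_hawkes_gamma(mu=1.0, alpha=0.4, k=2.0, theta=1.0, T=50.0)
```

For a subcritical process the expected count over [0, T] is roughly mu * T / (1 - alpha), which gives a quick sanity check on the output.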

Metal-oxide nanostructures for low-power gas sensors
Bhusari, Rutuja Dilip

Doctoral thesis (2022)

For gas sensing applications, metal oxide (MOx) nanostructures have demonstrated attractive properties due to their large surface-to-volume ratio, combined with the possibility to use multiple materials and multi-functional properties. For MOx chemiresistive gas sensors, the temperature-activated interaction of atmospheric oxygen with the MOx surface plays a major role in the sensor kinetics, as it leads to oxygen adsorption-desorption reactions that eventually affect the gas sensing performance. Thus, MOx sensors are operated at high temperatures to achieve the desired sensitivity. This high-temperature operation of MOx sensors limits their application in explosive gas detection, reduces the sensor lifetime and increases power consumption. To overcome these drawbacks, researchers have proposed the use of heterostructures and light activation as alternatives. In this thesis, we aim to develop low-power MOx sensors using these solutions. We show the template-free, bottom-up synthesis and shape control of copper hydroxide-based nanostructures grown in the liquid phase, which act as templates for the formation of CuO nanostructures. Precise control over the pH of the solution and the reaction temperature allowed deliberate tuning of the morphology and chemical composition of the nanostructures. We reflect upon the rationale behind this change in shape and material, as the CuO nanostructures are further used in a heterostructure. We discuss the synthesis and characterisation of CuO bundles and Cu2O truncated cubes, the former of which leads to very interesting gas sensing properties and applications. Devices made from a CuO bundle network are investigated for their electrical and oxygen adsorption-desorption properties as a gas sensor. It was observed that the sensor has faster response and recovery in the as-deposited condition than the annealed sensor.
A detailed inspection of the response and recovery curves enabled us to derive parameters such as time constants, reaction constants and diffusion coefficients for the CuO bundles, an analysis that is scarcely performed on p-type materials. Investigation of the derived parameters, the role of network junctions and the hydroxylated CuO surface leads us to discuss hypotheses for the contributing processes. CuO bundles show conduction transients upon exposure to the reducing gas H2 and a temperature-based inversion of response upon exposure to the reducing gas CO, which has not been reported in the literature for CuO exposed to H2 and/or CO. Armed with this fundamental knowledge of gas sensing, we chose ZnO, an n-type transducer material, and CuO, a p-type material with a lower band gap and higher absorption in the visible range, to synthesise a heterostructure. However, sol-gel syntheses of ZnO and CuO nanostructures have different reaction parameters, such as temperature and pH, and the materials do not show a natural affinity to grow on each other. These challenges were overcome by implementing a stepped synthesis procedure to fabricate a heterostructure with Cu-based nanoplatelets on ZnO nanorods, also denoted CuO@ZnO heterostructure in this thesis. We finally demonstrate the electrical and functional characterisation of the CuO@ZnO heterostructure. The heterostructure responds differently to the tested gases compared to its constituent nanostructure, ZnO nanorods, and a reference CuO nanostructure, CuO bundles. This is an unexpected result, as heterostructures usually show a response type similar to their base material but with an enhanced sensor response. We present a possible e-nose application that can qualitatively differentiate between CO, NO2 and ethanol, using the heterostructure, ZnO nanorods and CuO bundles together.
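Extracting a response time constant from such transient curves is typically a non-linear least-squares fit. A minimal sketch with a first-order response model follows; all numbers are synthetic and illustrative, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def response(t, r0, dr, tau):
    """First-order sensor response: R(t) = r0 + dr * (1 - exp(-t / tau))."""
    return r0 + dr * (1.0 - np.exp(-t / tau))

# synthetic resistance transient of a chemiresistor after a gas step
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 121)                   # time, s
true = dict(r0=100.0, dr=35.0, tau=12.0)          # baseline, amplitude, time constant
r = response(t, **true) + rng.normal(0.0, 0.3, t.size)

# recover the parameters from the noisy curve
(p_r0, p_dr, p_tau), _ = curve_fit(response, t, r, p0=(90.0, 20.0, 5.0))
```

The same fit applied to the recovery branch (with a decaying exponential) yields the recovery time constant; comparing the two across temperatures is what motivates the kinetic analysis described above.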

IL-6 Signaling and long non-coding RNAs in liver cancer
Chowdhary, Anshika

Doctoral thesis (2022)

COMBINED HEATING AND VENTILATION SYSTEMS FOR LOW-ENERGY RESIDENTIAL BUILDINGS; OPTIMIZATION OF ENERGY EFFICIENCY AND USER COMFORT
Shirani, Arsalan

Doctoral thesis (2022)

Combined heating and ventilation systems are applied in highly energy-efficient residential buildings to save construction costs. Combining a heat pump with a heat recovery ventilation system to heat and cool the building offers faster response times, a smaller footprint and an increased cooling capacity compared to floor heating systems. As a result, such systems are expected to have a larger market share in the future. The available research on Ventilation Based Heating systems focuses mostly on comparing Exhaust Air Heat Pumps with conventional systems in energy-efficient buildings. The majority of published research neglects the usual presence of electrical backup heaters as well as the need to develop and use an adapted and optimized control strategy for such systems. This work compares the energy efficiency of common standard ventilation-based heating concepts, including Exhaust Air Heat Pumps, with conventional floor heating systems using a single-room control strategy to achieve similar user comfort. The comparison is carried out in a simulation environment in order to optimize the systems under exactly reproducible boundary conditions. Additionally, two field tests were performed to achieve a better understanding and validation of the simulation models. The measured data were used to model the dynamic behavior of the Exhaust Air Heat Pump and the air distribution system. These field tests revealed that the overall run time and heating output of the heat pump were much lower than expected, which motivated the investigation and optimization of the heat pump and electric heater control strategy. It could be demonstrated that the applied control strategy has a significant impact on the overall performance of the system. The suggested control strategy was tested and validated in a third field measurement.
Based on the knowledge gained from the system simulation tool and the conducted field tests, an improved second concept for Ventilation Based Heating systems was defined, with three optimization steps. It could be demonstrated that applying the suggested methodologies in the hardware and software of such a system can significantly improve its overall efficiency. However, Ventilation Based Heating systems cannot compete with floor heating systems in terms of total system energy efficiency, due to the necessity of electrical backup heaters and the higher supply temperatures.

Evaluation von Synergieeffekten zentraler Speichersysteme in Niederspannungsnetzen durch integrative Modellbildung
Zugschwert, Christina

Doctoral thesis (2022)

Security of supply, affordability, and sustainability form the pillars of a new energy policy oriented towards renewable generation and decarbonization. However, the dynamics of power generation due to the increasing share of renewable energies cause temporal and local discrepancies between generation and consumption. The resulting energy transports between grid sections and different voltage levels cause additional load flows. To ensure grid stability, the grid operator provides system services and grid extension measures. With the help of energy storage systems with grid-serving control and placement strategies, the flexibility of the electricity supply can be increased, and a high share of renewable energy can be used locally while maintaining grid stability. A centralized installation approach focusing on single grid sections, instead of many decentralized home storage units, offers economic and environmental advantages. Furthermore, the operation strategy can be optimized by the global view of the grid operator and thus be adapted to local conditions. This research evaluates synergy effects of central storage systems by integrative computational analysis using a rural low-voltage grid section in Luxembourg. Three linked simulation levels are used to calculate operational strategies, storage dimensioning and placement based on 15-minute smart meter data. The operation strategy is developed within a power system simulation and is used to control a parameterizable simulation model of a vanadium redox flow battery. The operating strategy focuses on reducing the maximum power flow at the transformer and on reactive power compensation to maintain voltage stability. A future photovoltaic scenario is adopted by doubling the status quo photovoltaic generation. The simultaneous optimization of storage utilization and power reduction at the transformer provides the storage design parameters power and capacity.
Storage placement is determined by the system boundary and the resulting data selection. A final sensitivity analysis evaluates an optimized storage placement that enhances the voltage profiles. The results of this work are a differentiated active and reactive power related operating strategy, automated calculation algorithms to determine control parameters, optimized battery design parameters, and a methodical approach to transfer the calculation algorithms to further grid sections.
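A greatly simplified version of such a transformer-oriented operating strategy can be sketched as a greedy peak-shaving rule on a 15-minute profile. The threshold, battery ratings and load curve below are illustrative assumptions, not data from the studied Luxembourgish grid section.

```python
import numpy as np

def peak_shave(load_kw, p_max_kw, e_max_kwh, threshold_kw, dt_h=0.25):
    """Greedy grid-serving battery dispatch on a 15-minute load profile:
    discharge whenever the transformer load exceeds a threshold, recharge
    below it, respecting the power and energy limits of the storage."""
    soc = 0.5 * e_max_kwh                      # start half full (assumption)
    net = np.empty(len(load_kw))
    for i, p in enumerate(load_kw):
        if p > threshold_kw:                   # shave the peak
            p_dis = min(p - threshold_kw, p_max_kw, soc / dt_h)
            soc -= p_dis * dt_h
            net[i] = p - p_dis
        else:                                  # recharge without creating a new peak
            p_chg = min(threshold_kw - p, p_max_kw, (e_max_kwh - soc) / dt_h)
            soc += p_chg * dt_h
            net[i] = p + p_chg
    return net

hours = np.arange(0.0, 24.0, 0.25)
load = 20.0 + 15.0 * np.exp(-0.5 * (hours - 18.0) ** 2)   # evening peak, kW
net = peak_shave(load, p_max_kw=10.0, e_max_kwh=20.0, threshold_kw=28.0)
```

The thesis's strategy additionally handles reactive power and is tuned per grid section; this sketch only shows the active-power peak-limitation feedback at the transformer.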

EXCESSIVE MICROBIAL MUCIN FORAGING INDUCED BY DIETARY FIBER DEPRIVATION MODULATES SUSCEPTIBILITY TO INFECTIOUS AND AUTOIMMUNE DISEASES
Wolter, Mathis

Doctoral thesis (2022)

The gastrointestinal (GI) mucus layer is a protective and lubricating hydrogel of polymer-forming glycoproteins that covers our intestinal epithelium. This mucus layer serves as an interface between the intestinal epithelium and the environment, as well as a first line of defense against potentially harmful microorganisms. While the part of the GI mucus layer closer to the gut epithelium is highly condensed and acts as a physical barrier to invading microorganisms, further away from the epithelium, proteolytic degradation makes it loose. This looser part of the mucus layer serves as an attachment site and a nutrient source for some commensal gut bacteria. The molecular mechanisms that drive mucus–microbe interactions are emerging and are important for understanding the functional role of the gut microbiome in health and disease. Previous work by my research group showed that a dietary fiber-deprived gut microbiota erodes the colonic mucus barrier and enhances susceptibility to the mucosal pathogen Citrobacter rodentium, a mouse model for human Escherichia coli infections. In this PhD thesis, I studied the role of the gut mucus layer in the context of various other infectious and autoimmune diseases by inducing the natural erosion of the mucus layer through dietary fiber deprivation. In order to unravel the mechanistic details of the intricate interactions between diet, mucus layer and gut microbiome, I leveraged our previously established gnotobiotic mouse model hosting a synthetic human gut microbiota of 14 fully characterized commensal bacteria (14SM). I employed three different types of infectious diseases for the following reasons: 1) an attaching and effacing (A/E) pathogen (C.
rodentium), to better understand which commensal bacteria enhance pathogen susceptibility when a fiber-deprived gut microbiota erodes the mucus barrier; 2) human intracellular pathogens (Listeria monocytogenes and Salmonella Typhimurium), to investigate whether, as for the A/E pathogen, erosion of the mucus layer affects the infection dynamics; and 3) a mouse nematode parasite, Trichuris muris, a model for the human parasite Trichuris trichiura, to study how changes in mucin–microbiome interactions drive worm infection, as mucins play an important role in worm expulsion. In my thesis, I used various combinations of the 14SM, dropping out individual or all mucin-degrading bacteria from the microbial community, to show that, in the face of reduced dietary fiber, the commensal gut bacterium Akkermansia muciniphila is responsible for enhancing susceptibility to C. rodentium, most likely by eroding the protective gut mucus layer. In my experiments with the intracellular pathogens (L. monocytogenes and S. Typhimurium), I found that dietary fiber deprivation provided protection against infection by both. This protective effect was driven directly by diet and not by the microbial erosion of the mucus layer, since a similar protective effect was observed in both gnotobiotic and germ-free mice. Finally, for the helminth model, I showed that the elevated microbial mucin foraging caused by fiber deprivation promotes clearance of the parasitic worm by shifting the host immune response from a susceptible Th1 type to a resistant Th2 type. In the context of autoimmune disease, I focused on inflammatory bowel disease (IBD). Although IBD results from genetic predisposition, the contribution of environmental triggers is thought to be crucial. Diet–gut microbiota interactions are considered an important environmental trigger, but the precise mechanisms are unknown.
As a model for IBD, I employed IL-10-/- mice, which are known to spontaneously develop IBD-like colitis under conventional conditions. Using our 14SM gnotobiotic mouse model, I showed that in a genetically susceptible host, microbiota-mediated erosion of the mucus layer following dietary fiber deprivation is sufficient to induce lethal colitis. Furthermore, my results show that this effect was clearly dependent on the interaction of all three factors: microbiome, diet and genetic susceptibility. Leaving out only one of these factors eliminated the lethal phenotype. The novel findings arising from my PhD thesis will help the scientific community enhance our understanding of the functional role of mucolytic bacteria and the GI mucus layer in shaping our health. Overall, given the reduced consumption of dietary fiber in industrialized countries compared to developing countries, my results have profound implications for potential treatment and prevention strategies that leverage diet to engineer the gut microbiome, especially in the context of personalized medicine.

SEMKIS: A CONTRIBUTION TO SOFTWARE ENGINEERING METHODOLOGIES FOR NEURAL NETWORK DEVELOPMENT
Jahic, Benjamin UL

Doctoral thesis (2022)

Today, there is a high demand for neural network-based software systems supporting humans during their daily activities. Neural networks are computer programs that simulate the behaviour of simplified human brains. These neural networks can be deployed on various devices (e.g., cars, phones, medical devices) in many domains (e.g., the automotive industry, medicine). To meet the high demand, software engineers require methods and tools to engineer these software systems for their customers. Neural networks acquire their recognition skills (e.g., recognising voice or image content) from large datasets during a training process. Therefore, neural network engineering (NNE) should not only be about designing and implementing neural network models, but also about dataset engineering (DSE). In the literature, there are no software engineering methodologies supporting DSE with precise dataset selection criteria for improving neural networks. Most traditional approaches focus only on improving the neural network's architecture or follow hand-crafted approaches based on augmenting datasets with randomly gathered data. Moreover, they do not consider a comparative evaluation of the neural network's recognition skills against the customer's requirements for building appropriate datasets. In this thesis, we introduce a software engineering methodology (called SEMKIS), supported by a tool, for engineering datasets with precise data selection criteria to improve neural networks. Our method mainly considers the improvement of neural networks through augmenting datasets with synthetic data. SEMKIS has been designed as a rigorous iterative process for guiding software engineers during their neural network-based projects. The SEMKIS process is composed of many activities covering different development phases: requirements specification; dataset and neural network engineering; recognition skills specification; and dataset augmentation with synthesized data.
We introduce the notion of key-properties, used throughout the process in cooperation with the customer to describe the recognition skills. We define a domain-specific language (called SEMKIS-DSL) for the specification of the requirements and recognition skills. The SEMKIS-DSL grammar has been designed to support a comparative evaluation of the customer's requirements against the key-properties. We define a method for interpreting the specification and deriving a dataset augmentation. Lastly, we apply the SEMKIS process to a complete case study on the recognition of a meter counter. Our experiment shows a successful application of our process in a concrete example.
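The iterative process the abstract describes, in which a network is trained, its measured recognition skills (key-properties) are compared against the customer's requirements, and the dataset is augmented with targeted synthetic data until the requirements are satisfied, can be sketched as a simple loop. This is an illustrative sketch only, not the SEMKIS tool itself: all function names, the dictionary-based key-properties, and the toy stand-ins are hypothetical.

```python
# Illustrative sketch of an iterative dataset-engineering loop in the
# spirit of SEMKIS; all names and thresholds are hypothetical.

def semkis_loop(dataset, requirements, train, evaluate, synthesize, max_iters=10):
    """Iteratively train, compare key-properties against requirements,
    and augment the dataset with targeted synthetic data."""
    for _ in range(max_iters):
        model = train(dataset)
        key_properties = evaluate(model)            # measured recognition skills
        gaps = {k: v for k, v in requirements.items()
                if key_properties.get(k, 0.0) < v}  # unmet requirements
        if not gaps:
            return model, dataset                   # all requirements satisfied
        dataset = dataset + synthesize(gaps)        # targeted augmentation
    return model, dataset

# Toy stand-ins that merely make the loop executable; in practice each
# would wrap real training, skills measurement, and data synthesis.
def train(ds):
    return {"acc": min(1.0, 0.5 + 0.1 * len(ds))}

def evaluate(model):
    return {"digit_accuracy": model["acc"]}

def synthesize(gaps):
    return ["synthetic_sample"]

model, ds = semkis_loop(["s1", "s2"], {"digit_accuracy": 0.9},
                        train, evaluate, synthesize)
```

The loop terminates either when every required key-property is met or after a fixed iteration budget, mirroring the rigorous, repeatable character the abstract claims for the process.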

ARGUMENT MINING AND ITS APPLICATIONS IN POLITICAL DEBATES
Haddadan, Shohreh UL

Doctoral thesis (2022)

Presidential debates are significant moments in the history of presidential campaigns. In these debates, candidates are challenged to discuss the main contemporary and historical issues in the country and attempt to persuade the voters to their benefit. These debates offer legitimate ground for argumentative analysis to investigate the argument structure and strategy of political discourse. Recent advances in machine learning and Natural Language Processing (NLP) algorithms, with the rise of deep learning, have revolutionized many natural language applications, and argument analysis from textual resources is no exception. This dissertation targets argument mining from political debate data, a platform rife with arguments put forward by politicians to convince the general public to vote for them and to discourage them from being swayed by the other candidates. The main contributions of the thesis are: i) the creation, release, and reliability assessment of a valuable resource for argumentation research; ii) the implementation of a complete argument mining pipeline applying cutting-edge technologies in NLP research; iii) the launch of a demo tool for the argumentative analysis of political debates. The original dataset is composed of the transcripts of 41 presidential election debates in the U.S. from 1960 to 2016. Besides argument extraction from political debates, this research also aims at investigating the practical applications of argument structure extraction, such as fallacious argument classification and argument retrieval. In order to apply supervised machine learning and NLP methods to the data, an extensive annotation study was conducted, leading to the creation of a unique dataset with argument structures composed of argument components (i.e., claims and premises) and argument relations (i.e., support and attack). This dataset also includes another annotation layer with six fallacious argument categories and 14 sub-categories annotated on the debates.
The final dataset is annotated with 32,296 argument components (i.e., 16,982 claims and 15,314 premises), 25,012 relations (i.e., 3,723 attacks and 21,289 supports), and 1,628 fallacious arguments. As the methodological approach, a complete argument mining pipeline is designed and implemented, composed of two main stages: argument component detection and argument relation prediction. Each stage takes advantage of various NLP models outperforming standard baselines in the area, with an average F-score of 0.63 for argument component classification and 0.68 for argument relation classification. Additionally, DISPUTool, an online tool for argumentative analysis, is developed as a proof of concept. DISPUTool incorporates two main functionalities. First, it provides the possibility of exploring the arguments that exist in the dataset. Second, it allows users to extract arguments from their own text segments, leveraging the embedded trained model.
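The two-stage pipeline described above, which first classifies text spans as argument components (claim or premise) and then predicts support/attack relations between them, can be sketched as follows. This is a minimal structural skeleton, not the thesis implementation: the keyword heuristics stand in for the trained NLP models, and all function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Component:
    text: str
    label: str  # "claim" or "premise"

# Stage 1: argument component detection (toy heuristic stand-in
# for a trained component classifier).
def detect_components(sentences):
    components = []
    for s in sentences:
        # Premises are often introduced by evidential markers.
        is_premise = any(m in s.lower() for m in ("because", "since", "for example"))
        components.append(Component(s, "premise" if is_premise else "claim"))
    return components

# Stage 2: argument relation prediction over component pairs
# (toy stand-in for a trained relation classifier).
def predict_relations(components):
    relations = []
    for i, src in enumerate(components):
        for j, dst in enumerate(components):
            if i != j and src.label == "premise" and dst.label == "claim":
                # A contrastive marker signals an attack; otherwise support.
                rel = "attack" if "but" in src.text.lower() else "support"
                relations.append((i, j, rel))
    return relations

sentences = [
    "We must lower taxes.",
    "Because the economy grows when citizens keep more of their income.",
]
comps = detect_components(sentences)
rels = predict_relations(comps)
```

In the dissertation these stages are realized by supervised models trained on the annotated debate corpus; the skeleton only illustrates how component labels feed into pairwise relation prediction.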
